Install EC2 Debian 12, Nginx
This is a stable installation as at May 2025 using Debian 12 Linux on the ARM hardware architecture on a t4g instance, e.g. t4g.nano or t4g.micro.
I prefer Nginx to Apache2. Parts of the installation that overlap with my Debian 11, Apache2 article will not be re-presented here. Please see those notes, as there are important configurations there. Nginx can be more complicated than Apache when there are directories it will not access.
First Linux configurations
On an iMac terminal, you should previously have set up root login so that the ssh command works (search the internet to see how, or view previous articles). These small shell scripts can be helpful when you log into a terminal session and change to root with “sudo su”.
vi ssh.sh
#!/bin/sh
:>/var/root/.ssh/known_hosts
exit
[save and exit]
chmod 777 ssh.sh

[use your own .pem key file name and location, and the ssh command from the EC2 Connect tab.]

vi domain.sh
#!/bin/sh
cd PEM
ssh -i "domain.pem" admin@ec2-xx-xx-xx-xx.ap-southeast-2.compute.amazonaws.com
exit
[save and exit]
chmod 777 domain.sh

These can assist for quick logins and clearing the ssh keys on your iMac. For example:
./domain.sh
Log into the EC2 instance, and set your environment before working on the instance.
Assumption: you have attached an IP4 address and are able to SSH login.

$ sudo su
# All commands will be under root. Where needed I will make comments...

(Use 1GB swap space. Less will cause issues. Use your own timezone. DOMAIN is your domain name, or possibly something else descriptive.)

export EXINIT='set noautoindent'
export VISUAL=vim
echo "vm.swappiness=10" >> /etc/sysctl.conf
echo "vm.vfs_cache_pressure=200" >> /etc/sysctl.conf
sysctl -w vm.swappiness=10
sysctl -w vm.vfs_cache_pressure=200
dd if=/dev/zero of=/swapfile bs=1024 count=1048576
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
chmod 0600 /swapfile
a="Australia/Brisbane";export a;echo $a
ln -sf /usr/share/zoneinfo/$a /etc/localtime
date

vi /etc/vim/vimrc.local
let skip_defaults_vim = 1
if has('mouse')
  set mouse=r
endif
[save and exit]

cd ~
vi .bashrc
export EXINIT='set noautoindent'
export VISUAL=vim
export PS1="[\u@DOMAIN: \w]\\$ "
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
[save and exit]

cd /home/admin
vi .bashrc
export EXINIT='set noautoindent'
export VISUAL=vim
export PS1="[\u@DOMAIN: \w]\\$ "
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
[save and exit]

apt update
apt upgrade

SETUP A BACKDOOR EMERGENCY LOGIN - useful for EC2 serial console login if something has gone wrong.
We will add a backup/backdoor user. If you get the sss_cache error shown below, please use the fix shown.
Changes are based on building a new site before it goes live.
We will use "snoopy" (the dog) as the user name...

adduser snoopy

(Give snoopy a password, then add snoopy to /etc/sudoers. Using the vi editor, go to the end of the file (SHIFT G) and append the entry. Then use :w! to save the entry, as it is a read-only file.)
(Again, all commands are from the root user.)

vi /etc/sudoers
snoopy ALL=(ALL) NOPASSWD:ALL
[Exit the file after saving with SHIFT ZZ]

(Add the user to groups admin and root. For Amazon Linux 2023, it is wheel and root.)

usermod -aG admin snoopy; usermod -aG root snoopy

(We will make a copy of a good version of /home/admin/.ssh to /home:)

cd /home/admin
cp -pr .ssh ../SSH_BACKUP

This completes the creation of a backup user that you can use in an emergency on the EC2 serial console.

IF YOU GET THIS ERROR:
-------------------------------
[sss_cache] [sysdb_domain_cache_connect] (0x0010): DB version too old [0.22], expected [0.23] for domain implicit_files!
Higher version of database is expected!
In order to upgrade the database, you must run SSSD.
Removing cache files in /var/lib/sss/db should fix the issue, but note that removing cache files will also remove all of your cached credentials.
Could not open available domains
--------------------------------

To fix this, do the following:

cd /var/lib/sss/db
rm *
sss_cache -E

Then add the backup/backdoor user.
We will next add the Linux packages. Keep in mind that after doing all the work you should stop and restart the instance, and when adding major components it can help to run: sync;sync;reboot
Install Packages
I’m providing quite a number of packages to cover WordPress and general use of apps. I will show how to use letsencrypt, but for the immediate installation will use a paid SSL certificate for laurenceshaw.au.
Preliminary work – your CAA records…
In your DNS settings, if using Comodo or Sectigo, add two CAA records. If these fail in the future, just contact the supplier to see what the records should be:
CAA 0 issue sectigo.com
CAA 0 issue digicert.com
If using Letsencrypt (certbot): 0 issue letsencrypt.org
If a paid certificate fails due to having letsencrypt in the DNS records, you could configure an authentication record instead, or temporarily remove the letsencrypt record. It should still be possible, however, to install a letsencrypt SSL certificate without its CAA record.
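To check what CAA records are currently published for your domain, a quick sketch using dig (this assumes the dnsutils / bind9-dnsutils package is installed on the machine you query from; substitute your own domain):

dig CAA mydomain.com +short
(the output lists each published record, e.g.  0 issue "sectigo.com"  or  0 issue "letsencrypt.org")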
Again, use root login to the EC2 instance:
PRELIMINARY WORK for a PAID SSL Certificate (and add CAA records to the DNS)

Let's say we have a paid SSL certificate that you have edited to include all the chaining (this is a separate topic).
Upload the .crt and .key files with FileZilla to /home/admin

cd /etc/ssl/certs
cp /home/admin/*crt .
cd ../private
cp /home/admin/*key .

This is now out of the way. (Letsencrypt will be covered, but for now we use this.)

There should be no errors or warnings when using apt update. If there are, check the notes on installing Debian 11.

apt update
apt upgrade
apt install php8.2
apt -y install gnupg
apt -y install curl gnupg2 ca-certificates lsb-release debian-archive-keyring
apt -y install php8.2-cli php8.2-mbstring php8.2-xml php8.2-common php8.2-curl php8.2-imap php8.2-bz2
apt -y install mariadb-server
apt -y install php8.2-mysqli php8.2-fpm gcc libjpeg* zip php8.2-zip
apt -y install php8.2-gd
apt remove *apache*
apt -y install libgd-tools ipset

For certbot/letsencrypt: (we do not install the certbot-apache plugin, and we only use pip for updating letsencrypt certificates)

apt -y install python3-venv
apt -y install php8.2-xmlrpc php8.2-soap php8.2-intl
python3 -m venv /opt/certbot/
/opt/certbot/bin/pip install --upgrade pip
/opt/certbot/bin/pip install certbot
ln -s /opt/certbot/bin/certbot /usr/bin/certbot
apt -y install certbot

(Now we install memcached as it improves Nginx performance.)

apt install memcached php8.2-memcached libmemcached-tools

Append the following to the memcached.ini file:

vi /etc/php/8.2/fpm/conf.d/25-memcached.ini
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1 -U 0,::1"
[save and exit]

systemctl enable memcached
systemctl start memcached
ps -ef
/usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached.pid
Install and configure Mariadb, and various PHP settings
NOTE: I am not installing memcached for Apache2. I do however install it for Nginx. I never install APCu - it has been too problematic for me.

(apt install mariadb-server --> this was done in the previous section above.)

mysql_secure_installation
"Enter current password for root" (enter for none):
OK, successfully used password, moving on...
"Switch to unix_socket authentication [Y/n]"  n
"Change the root password?" [Y/n]  Y
(nominate your database password)
[Y for the remaining questions]

systemctl stop mariadb
systemctl start mariadb
systemctl enable mariadb
systemctl enable php8.2-fpm
systemctl start php8.2-fpm

(note: if php8.2-fpm is not running, nginx will not work.)

VARIOUS CONFIGURATIONS

cd /
find . -name php.ini -print
./etc/php/8.2/fpm/php.ini
./etc/php/8.2/cli/php.ini
./etc/php/8.2/apache2/php.ini

cd /etc/php/8.2/fpm

Use your own timezone, and modify values to your own preference. If later uploading .sql files through phpMyAdmin, the max file sizes below will apply as a limit. I prefer 512M for memory_limit. Others may use 128 or 256. I prefer upload_max_filesize and post_max_size as 100MB.

cp -p php.ini php.ini.bak
vi php.ini
date.timezone = Australia/Brisbane
max_execution_time = 300
max_input_time = 600
max_input_vars = 2500
memory_limit = 512M
post_max_size = 50M
upload_max_filesize = 50M
[save and exit]

cd /
find . -name www.conf -print
./etc/php/8.2/fpm/pool.d/www.conf

cd /etc/php/8.2/fpm/pool.d
cp -p www.conf www.conf.bak

Some of the following values will already be correct:

vi www.conf
user = nginx
group = nginx
listen = /run/php/php8.2-fpm.sock
listen.owner = www-data
listen.group = www-data
;listen.mode = 0660
; pm = dynamic
pm = ondemand
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.process_idle_timeout = 10s;
pm.max_requests = 500

Then at the bottom of the file: (use the same memory value you had in php.ini)

php_admin_value[error_log] = /var/log/fpm-php.www.log
php_admin_flag[log_errors] = on
php_admin_value[disable_functions] = exec,passthru,system
php_admin_flag[allow_url_fopen] = off
php_admin_value[memory_limit] = 512M
[save and exit]

We add the emergency lines. You can place these lines anywhere near the commented lines for "emergency". This helps prevent memory leaks, and gives the ability to gracefully run systemctl reload php8.2-fpm from crontab once a night.

cd ..
cp -p php-fpm.conf php-fpm.conf.bak
vi php-fpm.conf
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 60s
[save and exit]

apt install cron

Let's add "reload" for php8.2-fpm to crontab:

crontab -e
15 0 * * * /usr/bin/systemctl reload php8.2-fpm >/dev/null 2>&1
[save and exit]

cd /etc/php/8.2/mods-available
cp -p opcache.ini opcache.ini.bak
vi opcache.ini
zend_extension=opcache.so
opcache.jit=off
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=4000
[save and exit]

apt update
apt upgrade
sudo apt autoremove
sync;sync;reboot
Wait a bit and log back in as root. After all configs are done, please use the EC2 console to stop and start the instance for a “clean slate”.
Install Nginx
We want a minimum of Nginx v1.27, not less, as the configuration code has changed.
If you do the command apt search nginx, you will likely see an older version.
Take these steps:
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

gpg --dry-run --quiet --no-keyring --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg

[This is the verification output:]
pub   rsa2048 2011-08-19 [SC] [expires: 2024-06-14]
      573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
uid                      nginx signing key <signing-key@nginx.com>

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
    | sudo tee /etc/apt/preferences.d/99nginx

apt update
apt install nginx
nginx -v

cd /etc/nginx
ls
systemctl enable nginx
systemctl disable apache2
systemctl start nginx
ps -ef

IF WE GET A "RACING" ERROR when starting nginx, this is a fix:

systemctl status -l nginx
nginx.service: Can't open PID file /run/nginx

If you get this, our reference for the fix is:
https://serverfault.com/questions/1042526/open-run-nginx-pid-failed-13-permission-denied

mkdir -p /etc/systemd/system/nginx.service.d

[Create these lines:]
vi /etc/systemd/system/nginx.service.d/override.conf
[Service]
ExecStartPost=/bin/sleep 0.1
[save and exit]

systemctl daemon-reload
systemctl restart nginx
systemctl status -l nginx
ADD MEMCACHED to php and nginx…
cd /etc/nginx
vi memcached.conf
upstream memcached_backend {
    server 127.0.0.1:11211;
}
upstream remote {
    server 127.0.0.1:443;
}
[save and exit]

You must have the memcached port open on the EC2 instance Security Group:
Type: Custom TCP   Protocol: TCP   Port range: 11211   Source: 127.0.0.0/16   Description: memcached

Edit the /etc/php/8.2/fpm/php.ini file to have:
; session.save_handler = files
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"

Edit the www.conf file at the bottom to have:
php_value[session.save_handler] = memcached
php_value[session.save_path] = 127.0.0.1:11211

Note: we do not configure APCu. If we want opcache, we also may add to www.conf:
php_value[opcache.file_cache] = /var/lib/php/opcache

Then:
cd /var/lib/php
mkdir opcache
chmod 770 opcache
chgrp nginx opcache
cd ..
ls -l
drwxrwx--- 2 root nginx 4096 May 11 14:27 opcache

If you get unexpected memory errors in the logs, perhaps consider commenting the php_value line out and rebooting the server.
Configure memcached, Nginx http and https SSL (paid certificate)
First we test the base installation starts, then we edit files for our domain name.
cd /etc/nginx
cp -p nginx.conf nginx.conf.bak
nginx -t
systemctl start nginx
ps -ef

IF WE GET A RACING ERROR - see the Install Nginx section above for the fix.

Assuming all is now working:

:>nginx.conf
vi nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    server_names_hash_bucket_size 64;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    ssl_session_timeout 10m;
    add_header Content-Security-Policy "default-src 'self' https: data: 'unsafe-inline' 'unsafe-eval';" always;
    add_header X-Xss-Protection "1; mode=block" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "origin-when-cross-origin" always;

    client_max_body_size 50M;

    upstream _php {
        server unix:/run/php/php8.2-fpm.sock;
    }

    include /etc/nginx/memcached.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name mydomain.com www.mydomain.com;
        # return 301 https://mydomain.com$request_uri;
        root /var/www/html;
        index index.php index.html index.htm;
        include /etc/nginx/default.d/*.conf;

        location / {
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            # SECURITY : Zero day Exploit Protection
            try_files $uri =404;
            # ENABLE : Enable PHP, listen fpm sock
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/run/php/php8.2-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location = /favicon.ico { log_not_found off; access_log off; }
        location = /robots.txt { allow all; log_not_found off; access_log off; }

        error_page 404 /404.html;
        location = /404.html { }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html { }

    # end port 80
    }

# end nginx config
}
[save and exit]
At this stage, restart nginx and on a browser with cleared cache, try:
http://mydomain.com
You will see:
Apache2 Debian Default Page
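If you prefer to check from the command line before touching a browser, a quick sketch (curl was installed with the packages earlier; mydomain.com stands in for your own domain):

systemctl restart nginx
curl -I http://mydomain.com
(expect an HTTP 200 response with a "Server: nginx" header; the body is still whatever default test page sits in /var/www/html at this point)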
As we are dealing with a paid SSL certificate in this example, we only need to add the port 443 stanza, and make sure the certificate .crt and .key files are current, and installed in /etc/ssl/certs and /etc/ssl/private
We uncomment the port 80 redirect line so http will go to https.
After the port 80 stanzas, and before the final } bracket, add these lines with your own domain name, assuming /var/www/html, and uncomment the redirect line. I will use laurenceshaw.au as the example to make sure everything below is clear...

cd /etc/nginx
vi nginx.conf

# ..............
    server {
        listen 80;

Uncomment in this stanza:
        # return 301 https://laurenceshaw.au$request_uri;
        return 301 https://laurenceshaw.au$request_uri;
# ..............
    # end port 80.
    }

Add the port 443 stanza below this section:

    # port 443
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        http2 on;
        server_name laurenceshaw.au;
        root /var/www/html;

        ssl_certificate "/etc/ssl/certs/laurenceshaw_au.crt";
        ssl_certificate_key "/etc/ssl/private/laurenceshaw_au.key";
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers EECDH+CHACHA20:EECDH+AES;
        ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;

        include /etc/nginx/default.d/*.conf;

        location / {
            set $memcached_key "$uri?$args";
            error_page 404 502 504 = @fallback;
            default_type text/html;
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            # SECURITY : Zero day Exploit Protection
            try_files $uri =404;
            # ENABLE : Enable PHP, listen fpm sock
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/run/php/php8.2-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location = /robots.txt { allow all; log_not_found off; access_log off; try_files $uri /index.php?$args; }

        error_page 404 /404.html;
        location = /404.html { }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html { }

    # end port 443
    }

# ........ this will be followed by the last closing } curly bracket
[save and exit]

Verify your SSL .crt and .key files are in /etc/ssl/certs and /etc/ssl/private with root ownership and group.
e.g. in /etc/ssl/certs:
-rw-r--r-- 1 root root 6359 May 11 12:59 laurenceshaw_au.crt
and in /etc/ssl/private:
-rw-r--r-- 1 root root 1730 May 11 12:58 laurenceshaw_au.key

nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If not okay, widen your terminal screen so you can read the contents and type:
systemctl status -l nginx

Once all is good, restart and test the URL in a browser.

systemctl restart nginx
systemctl restart memcached
systemctl restart php8.2-fpm

Clear your browser cache, and try:
https://mydomain.com

If it is not working, sync;sync;reboot your server and check the processes are running for nginx, memcached, and php8.2-fpm. If not, use the systemctl enable command, start them and retry or reboot again.
Nginx and LetsEncrypt free certificates
Please go through the steps for Nginx and a paid certificate above to see the basic configurations.
Let’s assume the primary domain name will use LetsEncrypt. The CAA record is letsencrypt.org. If it conflicts with other CAA records, it is not vital to have it.
Do not add the Port 443 stanza.
Comment out the redirect 301 http line.
Let’s say your domain is called mydomain.com. As root, in a shell, type the following with your own email address:
cd /home/admin
/usr/bin/certbot certonly --non-interactive --agree-tos -m me@gmail.com -d mydomain.com --webroot -w /var/www/html --dry-run
Always use --dry-run for testing.
Once the certificate passes, remove the --dry-run option. Then edit nginx.conf to uncomment the 301 line, and add the port 443 stanza before the last closing } curly bracket:
cd /etc/nginx
vi nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    server_names_hash_bucket_size 64;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    ssl_session_timeout 10m;
    add_header Content-Security-Policy "default-src 'self' https: data: 'unsafe-inline' 'unsafe-eval';" always;
    add_header X-Xss-Protection "1; mode=block" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "origin-when-cross-origin" always;

    client_max_body_size 50M;

    # upstream can be optional. I tend to check nginx error logs for how wordpress is hammering the system
    upstream _php {
        server unix:/run/php/php8.2-fpm.sock;
    }

    include /etc/nginx/memcached.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name mydomain.com www.mydomain.com;
        # we uncomment the 301 line once SSL has been added
        return 301 https://mydomain.com$request_uri;
        root /var/www/html;
        index index.php index.html index.htm;
        include /etc/nginx/default.d/*.conf;

        location / {
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            # SECURITY : Zero day Exploit Protection
            try_files $uri =404;
            # ENABLE : Enable PHP, listen fpm sock
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # note: you can find the correct sock entry within the www.conf file
            fastcgi_pass unix:/run/php/php8.2-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location = /favicon.ico { log_not_found off; access_log off; }
        location = /robots.txt { allow all; log_not_found off; access_log off; }

        error_page 404 /404.html;
        location = /404.html { }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html { }

    # end port 80
    }

    # port 443
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        http2 on;
        server_name mydomain.com;
        root /var/www/html;

        ssl_certificate "/etc/letsencrypt/live/mydomain.com/fullchain.pem";
        ssl_certificate_key "/etc/letsencrypt/live/mydomain.com/privkey.pem";
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers EECDH+CHACHA20:EECDH+AES;
        ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;

        include /etc/nginx/default.d/*.conf;

        location / {
            set $memcached_key "$uri?$args";
            error_page 404 502 504 = @fallback;
            default_type text/html;
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            # SECURITY : Zero day Exploit Protection
            try_files $uri =404;
            # ENABLE : Enable PHP, listen fpm sock
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/run/php/php8.2-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location = /robots.txt { allow all; log_not_found off; access_log off; try_files $uri /index.php?$args; }

        error_page 404 /404.html;
        location = /404.html { }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html { }

    # end port 443
    }
}
nginx -t
If issues: systemctl status -l nginx
When all is up and running, systemctl reload php8.2-fpm is okay to do.
We use the same process for multi-domains and LetsEncrypt, via include files, which we place before the port 80 stanza.
e.g.:
include /etc/nginx/tech.mydomain.com.conf;
Then, in your .conf file, you only add server stanzas.
For LetsEncrypt, add the port 80 stanza and comment out the 301 redirect line.
Run the --dry-run command, e.g.:
/usr/bin/certbot certonly --non-interactive --agree-tos -m me@gmail.com -d tech.domain.com --webroot -w /var/www/tech.domain.com --dry-run
Of course, you use your own domain or subdomain names and have your DNS records point to your IP address.
When the dry run works, remove the --dry-run option.
After SSL is installed, in your .conf file, uncomment the 301 line and add the port 443 stanza. There will be no additional top or bottom curly brackets, just the two main stanzas. Then restart nginx and test.
You can of course have a mix of paid and free certificates, with various /etc/nginx/abcdef.conf files of your own choosing.
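As a sketch of what such an include file can look like once its LetsEncrypt certificate has been issued (the subdomain tech.mydomain.com and web root /var/www/tech.mydomain.com are hypothetical; create the web root first with mkdir -p /var/www/tech.mydomain.com, and trim or extend the stanzas to match your main nginx.conf):

# /etc/nginx/tech.mydomain.com.conf - server stanzas only, no outer http { } brackets
server {
    listen 80;
    listen [::]:80;
    server_name tech.mydomain.com;
    # leave this commented until the certificate has been issued:
    # return 301 https://tech.mydomain.com$request_uri;
    root /var/www/tech.mydomain.com;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

# port 443 - added only after certbot has issued the certificate
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name tech.mydomain.com;
    root /var/www/tech.mydomain.com;

    ssl_certificate "/etc/letsencrypt/live/tech.mydomain.com/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/tech.mydomain.com/privkey.pem";
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

After editing, always check with nginx -t before restarting nginx.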
Next we add a crontab shell script to check each night if it is time to automatically update the LetsEncrypt certificates on your site.
The method I use is to increase a counter from 0 to 65 in a .dat file. When it reaches 65, I renew the certificate. If this counter is wildly out of range, you may find your website does not renew the SSL certificate and you have to redo all the work above. You can modify the scripting to catch this problem if you wish.
Create a /home/admin/certbot.dat file with one line, typing in the number 0 (zero).
Then create a shell script…
cd /home/admin

If you do not already have the info.log file:
vi info.log
[save and exit - have at least one blank line in it]
chmod 777 info.log
chown root info.log
chgrp admin info.log

vi certbot.dat
0
[save and exit]
chmod 777 certbot.dat
chown root certbot.dat
chgrp admin certbot.dat

vi certbot.sh
#!/bin/sh
mydomain() {
d=`date`
c1=`head -1 /home/admin/certbot.dat`
c1=$(expr $c1 + 1)
if [ "$c1" = "65" ] ; then
   echo "0" > /home/admin/certbot.dat
   echo "Certbot mydomain.com renewal" $d >> /home/admin/info.log
   /usr/bin/certbot certonly --non-interactive --agree-tos -m me@gmail.com -d mydomain.com --webroot -w /var/www/html >/dev/null 2>&1
   sudo /usr/bin/systemctl reload nginx >/dev/null 2>&1
   sudo /usr/bin/systemctl reload php8.2-fpm >/dev/null 2>&1
   sleep 2
   echo "Valid dates mydomain.com:" >> /home/admin/info.log
   sudo /usr/bin/openssl x509 -noout -dates -in /etc/letsencrypt/live/mydomain.com/cert.pem >> /home/admin/info.log
else
   echo "Certbot mydomain.com day $c1 of 65 $d" >> /home/admin/info.log
   echo $c1 > /home/admin/certbot.dat
fi
}
mydomain
exit
[save and exit]
chmod 777 certbot.sh
chown root certbot.sh
chgrp admin certbot.sh

I use functions in the script so it is easier to manage multiple domains.

Do not run it at this stage, as you will increase the certbot.dat counter value. If you do run it:
sh -x ./certbot.sh
Check the output, then edit certbot.dat and put back the value 0.

We use 65 days as a good time before renewal. Less can fail. Notice we have to reload the nginx server.

The line:
/usr/bin/openssl x509 -noout -dates -in /etc/letsencrypt/live/mydomain.com/cert.pem
tells you when the certificate expires.

Add the following to crontab:

crontab -e
# 1:15am each night check certbot
15 1 * * * /home/admin/certbot.sh >/dev/null 2>&1
[save and exit]
These configurations are also in my Debian 11 article, which may help for comparison if something does not quite add up.
Additional Nginx configurations
The examples so far have shown use of the add_header configurations. You can test your SSL at ssllabs.com, which should return an A+ rating. You do not need A+, but the add_header lines achieve it. If you get weird redirections from using add_header, just remove them.
As you saw in nginx.conf, you can reference your WordPress robots.txt file if you like using one. I do.
We now need additional include files for gzip and security.
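As a minimal sketch of what such include files can contain (the file names and values here are my suggestions rather than fixed requirements; they go into the /etc/nginx/default.d/ directory that the server stanzas above already include):

mkdir -p /etc/nginx/default.d

vi /etc/nginx/default.d/security.conf
# deny hidden files such as .htaccess and .git, but leave the ACME challenge path alone
location ~ /\.(?!well-known) { deny all; }
# never execute PHP from the WordPress uploads directory
location ~* /wp-content/uploads/.*\.php$ { deny all; }
[save and exit]

vi /etc/nginx/default.d/gzip.conf
# extends the plain "gzip on;" already set in nginx.conf
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_types text/css text/plain application/javascript application/json image/svg+xml;
[save and exit]

nginx -t
systemctl reload nginx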
Returning to mariadb (mysql) phpMyAdmin and /var/www/html
Now we have nginx working, we want to verify that php is using the correct values for php.ini, cache, and memcached:
Search the following results to see entries like this (the info.php file itself is created in the phpMyAdmin block below):

https://mydomain.com/info.php

PHP Version             8.2.28
php-fpm                 active
Opcode Caching          Up and Running
Zip                     enabled
memory_limit            512M
max_input_vars          2500
session.save_handler    memcached    memcached
session.save_path       127.0.0.1:11211    127.0.0.1:11211

Then move the info.php file for a little more security so simple snoopers don't see it:
mv info.php info.php.bak
Let’s install phpMyAdmin and modify mysql.
cd /usr/share
wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
ls
tar xvf phpMyAdmin-latest-all-languages.tar.gz
rm phpMyAdmin-latest-all-languages.tar.gz
ls

Use the file name, e.g.:
mv phpMyAdmin-5.2.2-all-languages phpMyAdmin

cd phpMyAdmin
mkdir tmp
chmod 777 tmp
cp -p config.sample.inc.php config.inc.php
vi config.inc.php

Search for the blowfish line and insert an appropriate value in the single quotes. I use the following blowfish generator: (paste the value into your line)

https://phpsolved.com/phpmyadmin-blowfish-secret-generator/?g=[insert_php]echo%20$code;[/insert_php]

Paste the generated value into the blowfish value. For example:
$cfg['blowfish_secret'] = 'nzpC-qoCt}t/yTKOp5w0o,ULnX,xdKny';

Then after the SaveDir line, add TempDir...
$cfg['SaveDir'] = '';
$cfg['TempDir'] = '/tmp';
[save and exit]

We have to fix permissions of phpMyAdmin so that nginx will accept it:

cd /usr/share
chmod 2775 phpMyAdmin
chown nginx phpMyAdmin
chgrp nginx phpMyAdmin
cd phpMyAdmin
chown -R nginx *
chgrp -R nginx *

cd /var/www/html
ln -s /usr/share/phpMyAdmin phpMyAdmin

cd /var/www/html
vi info.php
<?php phpinfo();?>
[save and exit]
chmod 664 info.php
chown nginx info*
chgrp nginx info*
Freezing Problems with mariadb:
It is sometimes nigh impossible to find where the system freezes when editing or displaying a page even after we have given “good” values in our configurations.
phpMyAdmin has a statistics page that can graphically show all is OK, and the Amazon EC2 console can show CPU, disk writes, and RAM all looking good.
I don’t have an answer to this, but this may help:
cd /etc/mysql
cp -p my.cnf my.cnf.bak
vi my.cnf

[mysqld]
innodb_buffer_pool_size=512M
optimizer_search_depth=0
log_error=/var/log/mariadb-error.log
log_warnings=9
[save and exit]

At least we now have an error log. You could try omitting the pool size and optimizer lines. These configurations were found by users on public forums relating to a Help Desk app where the system kept freezing, even though no errors, slow logs, or graphical overloads were shown.
We now wish to add a helpful shell script to restart services:
cd /home/admin
vi restart.sh
#!/bin/sh
systemctl restart nginx
systemctl restart mariadb
systemctl reload php8.2-fpm
swapoff -a
swapon -a
free -m
exit
[save and exit]
chmod 777 restart.sh
chown root restart.sh
chgrp admin restart.sh

This helps when testing. If using crontab, you could use reload instead of restart. We use reload as our preference for php. swapoff/on is also optional, but I use it.

You can also use systemctl status -l SERVICE, e.g.
/usr/bin/systemctl status -l nginx
to make sure there are no errors, or to see where errors are being reported.

All log files are under /var/log - e.g. /var/log/mariadb, /var/log/nginx and so on. Please check all of these out.
phpMyAdmin basics
Using phpMyAdmin is another learning curve.
These configurations make you ready to install WordPress.
Assuming you made a softlink, e.g.:
cd /var/www/html
ln -s /usr/share/phpMyAdmin phpMyAdmin
Then log into phpMyAdmin as root user and password.
https://my_domain.com/phpMyAdmin
Log in as root with the mariadb password you created and add a database and user so we can install WordPress to that database.
Fix the initial error message at the bottom of the screen where it says “find out why”. Just click on the link and follow the fix.
The screen shots below are fairly old, so use utf8mb4_general_ci or utf8_general_ci instead of Latin.
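If you prefer the phpMyAdmin SQL tab (or the mysql command line) to the point-and-click forms, a sketch of the equivalent statements, using the hypothetical names that appear later in these notes (database "snoo", user "snoopy"):

CREATE DATABASE snoo CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE USER 'snoopy'@'localhost' IDENTIFIED BY 'H0und#d0g';
GRANT ALL PRIVILEGES ON snoo.* TO 'snoopy'@'localhost';
FLUSH PRIVILEGES;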
aws software - optional but helpful
Amazon uses the aws command to access S3 buckets. This is useful for storing backup files and databases.
In other notes, I have shown how to create access to S3 buckets from your Linux instance using an IAM role. You can then add that to the EC2 instance from the EC2 console top right menu in your region. (Actions > Security > Modify IAM Role)
To create the Role if not already done, go to IAM > Roles > Add Role
Select Trusted Entity Type: AWS Service, Use Case: S3, then Permissions Policies:
AdministratorAccess
AmazonS3FullAccess
AmazonSESFullAccess
CloudWatchFullAccessV2
The Trust Relationships tab will look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
When completed, AWS IAM will give you an access key ID and a secret access key. I keep these in a spreadsheet, as you must not lose them.
Then add it to the instance from the Actions pop-down menu as shown above.
To add this to the instance:
cd ~
mkdir .aws
cd .aws

You must have the two blank lines at the end, as shown below. Use the keys you were given when creating the IAM role.

vi config
[default]
aws_access_key_id = AKIA4.........
aws_secret_access_key = 3vvWqH.........
region = ap-southeast-2


[save and exit]

Press the Enter key for each response during the next configuration:

aws configure
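If the aws command is not found at this point, it was not part of the earlier package list. On Debian 12 it can likely be installed from the standard repository (an assumption; Amazon also documents its own bundled installer):

apt -y install awscli
aws --version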
Here are two shell scripts that show the use of the aws command. You can modify them later when you have a WordPress site (assumed here to be under /var/www/html) and a database with a user created in phpMyAdmin. These can be run via crontab. The shell scripts use a bucket you previously created in your region. As a side note, I use Oregon for all email buckets for Australia.
aws COMMAND - THIS IS OPTIONAL BUT VALUABLE - perhaps return to it later

cd /home/admin

If you do not already have the info.log file:
vi info.log
[save and exit - just add one blank line]
chmod 777 info.log; chown root info.log; chgrp admin info.log

Use your own S3 bucket (e.g. create one in your region), website directories and names:

vi aws.sh
#!/bin/sh
d=`date | awk '{print $2,$3,$NF}'|tr " " "-"`
echo backups $d >> /home/admin/info.log
tar_html() {
cd /var/www/
tar cvf domain-$d.tar ./html
# if enough disk space, you can gzip the tar file to save on S3 Bucket space:
# gzip domain-$d.tar
# aws s3 mv domain-$d.tar.gz s3://domain/domain-$d.tar.gz
# if not using gzip:
aws s3 mv domain-$d.tar s3://domain/domain-$d.tar
}
# call the functions - you could have multiple functions for multiple domains and backups, hence the use of functions is easier
tar_html
exit
[save and exit]

vi awsdb.sh
#!/bin/sh
d=`date | awk '{print $2,$3,$NF}'|tr " " "-"`
cd /home/admin
echo database backups $d >> /home/admin/info.log
# Your domain database - these can be any names you wish to use
db_domain() {
cd /home/admin
mysqlcheck --user=DB_USER --password=DB_PASSWORD DB_NAME >> /home/admin/info.log
mysqldump --user=DB_USER --password=DB_PASSWORD DB_NAME >> /home/admin/DB_NAME-$d.sql
# You can gzip if you wish. This example will not do it.
aws s3 mv DB_NAME-$d.sql s3://domain/.DB_NAME-$d.sql
}
db_domain
exit
[save and exit]

You would have set DB_USER in phpMyAdmin, along with the DB_NAME and DB_PASSWORD. For example, if the user is "snoopy", the database name is "snoo" and you set the password to "H0und#d0g":

mysqldump --user=snoopy --password=H0und#d0g snoo >> /home/admin/snoo-$d.sql

It may seem laborious but it is highly useful. You can of course configure without the aws command and S3 buckets for now.

If you configure all this, you can test it. Make sure your IAM role was added in the EC2 console. Debian configurations on other platforms like Akamai/Linode can install the aws software as well, but expect a little trial and error if you do that for the first time, using AWS documentation.

aws s3 ls s3://domain/

Notice how we use the trailing forward slash to list anything. You can also create subdirectories, such as domain/sub or domain/.sub. For example:

aws s3 ls s3://snoopy.me/.backups/

AWS doco may suggest use of a line like this:
aws configure < /home/admin/aws.txt
where aws.txt has the same lines we put in ~/.aws/config.
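If you want these backups to run automatically, a sketch of the crontab entries (the times are examples, and the chmod follows the convention used for the other scripts in these notes):

chmod 777 aws.sh awsdb.sh
chown root aws.sh awsdb.sh; chgrp admin aws.sh awsdb.sh

crontab -e
# 2:30am website backup, 2:45am database backup (example times)
30 2 * * * /home/admin/aws.sh >/dev/null 2>&1
45 2 * * * /home/admin/awsdb.sh >/dev/null 2>&1
[save and exit]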
Adding WordPress
You do need to learn how to use https://your_domain.com/phpMyAdmin.
See the section above on “phpMyAdmin basics”.
We can now add a database, usually with collation utf8mb4_general_ci or utf8_general_ci.
Then we add a user for administrating it – username, localhost, password, and grant all permissions to that database.
If you do not use phpMyAdmin you need to see other articles or documents from Amazon’s installation of WordPress.
The default phpMyAdmin installation will show at the bottom of the screen a missing database that you simply click on to install.
Let’s assume you have this done.
Recall you would have used “ln -s /usr/share/phpMyAdmin phpMyAdmin” to link phpMyAdmin into your /var/www/html directory (or where you choose to have it)
Upload WordPress to your site, unzip and place the contents into /var/www/html or where you need for the domain.
e.g.
cd /var/www/html
unzip wordpress-6.8.1-en_AU.zip
cd wordpress
mv * ..
cd ..
./chdir.sh    ---> this is the script shown below.
We need a shell script to change permissions:
cd /var/www/html
vi chdir.sh
#!/bin/sh
chown -R www-data *
chgrp -R www-data *
find . -type d -exec chmod 2775 {} \;
find . -type f -exec chmod 0664 {} \;
if [ -f "./.htaccess" ] ; then
   chown www-data .htaccess
   chgrp www-data .htaccess
   chmod 664 .htaccess
fi
chmod 777 *.sh
chown root chdir.sh
chgrp root chdir.sh
chmod 770 chdir.sh
exit
[save and exit]
chmod 777 chdir.sh

After placing the WordPress files, in this example into /var/www/html:
./chdir.sh

I have /var/www/html as chmod 2775, chown www-data and chgrp www-data, and /var/www as chmod 2775.
Make sure your installation directory no longer has any test index.html file. We want PHP to run by default.
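A quick sketch to check for and remove a leftover test page (adjust the path if your web root differs):

cd /var/www/html
ls -l index.html 2>/dev/null && rm -i index.html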
Then you can run the WP install script with https://my_domain.com
If the installation tries to use FTP, your permissions are incorrect. Check /var/www/html is www-data group and owner, and 2775 permissions. www can be root, but check it is 2775.
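A sketch of how to verify those permissions (a 2775 directory with the setgid bit shows as drwxrwsr-x; the owners shown are the expected shape of the output, not literal values):

ls -ld /var/www /var/www/html
drwxrwsr-x  3 root     root     ... /var/www
drwxrwsr-x 10 www-data www-data ... /var/www/html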
I prefer using the same admin user and password as the database itself.
Then test your WordPress installation can login and works.
If you add the Wordfence plugin and have issues, remember it is installing hidden files in /var/www/html and files in /var/www/html/wp-content/ and /var/www/html/wp-content/plugins. There will be a stanza in wp-config.php which may be referencing an incorrect PHP version. I once had to add these lines:
<IfModule mod_php8.c>
    php_value auto_prepend_file '/var/www/html/wordfence-waf.php'
</IfModule>
I doubt you will, but you can see how configurations sometimes need problem solving and how code can change over time from the developers.
At the end of your wp-config.php file, you can add these lines for your memory, ability to upload any type of file to the media library, and use of the SMTP email plugin so that emails can be routed through Amazon. If using MS Exchange, as an example, you would use different settings.
define('WP_MEMORY_LIMIT', '512M');
define('DISALLOW_FILE_EDIT', true);
define('ALLOW_UNFILTERED_UPLOADS', true);
define('DISABLE_WP_CRON', true);
define('WPMS_ON', true);
define('WPMS_SMTP_PASS', 'YOUR AMAZON IAM SMTP PRIVATE KEY');
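A note on DISABLE_WP_CRON: it switches off WordPress' built-in pseudo-cron, so scheduled posts and plugin tasks will stop running unless you trigger wp-cron.php yourself. A sketch of a real-cron replacement (hypothetical schedule; use your own domain):

crontab -e
# every 15 minutes, trigger WordPress scheduled tasks
*/15 * * * * curl -s https://mydomain.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1
[save and exit]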
Here is an example of the SMTP plugin using MS Exchange as the router. All this depends on the DNS records having been set up correctly beforehand.
WP Mail SMTP Plugin - example of MS Exchange

From Email:        contact@laurenceshaw.au
Force From Email:  ON
From Name:         Laurence Shaw
Force From Name:   ON
Return Path:       ON
Mailer:            Other SMTP
SMTP Host:         smtp-mail.outlook.com
Encryption:        TLS
SMTP Port:         587
Authentication:    ON
SMTP Username:     contact@laurenceshaw.au
SMTP Password:     THIS GOES INTO THE wp-config.php FILE
Here is an example of Amazon Oregon as the router:
WP Mail SMTP Plugin - example of Amazon AWS SES router, Oregon

From Email:        contact@laurenceshaw.au
Force From Email:  ON
From Name:         Laurence Shaw
Force From Name:   ON
Return Path:       ON
Mailer:            Other SMTP
SMTP Host:         email-smtp.us-west-2.amazonaws.com
Encryption:        TLS
SMTP Port:         587
Authentication:    ON
SMTP Username:     AKIA................ (your own IAM SES email keys)
SMTP Password:     THIS GOES INTO THE wp-config.php FILE