Helpful Shell Scripts
These scripts add useful extra functionality. The directory paths will differ depending on whether you use Debian or Linux2023, and on the PHP version and its socket.
For example, the home directory is /home/admin on Debian and /home/ec2-user on Linux2023.
Letsencrypt certbot auto-renewal script
This script is shown in the Debian 11/12 articles.
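If you do not have that article to hand, here is a minimal sketch of such a renewal script, assuming certbot is already installed and Nginx is the web server. The script name, log path, and schedule below are my own placeholders, not the exact script from the article.

cd /home/admin
vi certbot.sh

#!/bin/sh
# Minimal sketch: attempt renewal quietly, log the result, and reload Nginx
# so any renewed certificate is picked up.
d=`date`
echo "certbot.sh: $d" >> /home/admin/info.log
/usr/bin/certbot renew --quiet >> /home/admin/info.log 2>&1
/usr/bin/systemctl reload nginx
exit

[save and exit]

chmod 777 certbot.sh; chown root certbot.sh; chgrp admin certbot.sh

crontab -e
0 2 * * * sudo sh /home/admin/certbot.sh >/dev/null 2>&1
[save and exit]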
Notes on file transfers and logins
All my notes assume use of the iMac Terminal shell as root, the vi editor, and FileZilla with a .pem key file.
If you wish to clear out old ssh host keys from previous configurations that prevent new logins:

sh-3.2# cd /var/root/.ssh
sh-3.2# ls
known_hosts known_hosts.old
sh-3.2# :> known_hosts

You can put this into a script on the iMac.
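For example, a small script along these lines on the iMac does the same thing (the /var/root/clearssh.sh name is just a suggestion here):

vi /var/root/clearssh.sh

#!/bin/sh
# Clear cached ssh host keys so a rebuilt instance does not trigger a host key mismatch.
:> /var/root/.ssh/known_hosts
exit

[save and exit]

chmod 700 /var/root/clearssh.sh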
These are the shell commands I use. My iMac home directory has a subdirectory called PEM where I store all my key files (which are backed up elsewhere on the Cloud.)
ME@MY-iMac-2 ~ % su root
Password:
sh-3.2# pwd
/Users/ME
sh-3.2# cd PEM
sh-3.2# ssh -i "mydomain.com.pem" admin@ec2-xxx-xxx-xxx-xxx.ap-southeast-2.compute.amazonaws.com
Linux ip-xxx-xxx-xxx-xxx 5.10.0-31-cloud-amd64 #1 SMP Debian 5.10.221-1 (2024-07-14) x86_64
(---> If using x86; you should be using ARM in preference.)
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Aug 6 13:37:39 2024 from 144.6.125.35
[admin@mydomain.com: ~]$ sudo su
[root@mydomain.com: /home/admin]# set -o vi
[root@mydomain.com: /home/admin]#
As you can see, I am logged in as the root user on the iMac. You need to set up root access on the iMac.
See https://support.apple.com/en-au/102367 on how to do this.
Basically, go to Finder > Go > /System/Library/CoreServices/Applications/ > Directory Utility.app.
Unlock the app. To enable the root user, choose Edit > Enable Root User from the menu bar. Then enter the password you want to use.
Note: never configure anything on Debian or Linux2023 until you have added the hard disk swap space.
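For reference, I allocate 1GB of swap. A typical swap file setup (the /swapfile path is an assumption; adjust the size to your instance) looks like this:

dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
free -m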
FileZilla
Fill out the FileZilla details as shown below with your own values, save it, then click on Connect.
This assumes you have Port 22 open in your Security Group inbound rules, and that you have added the security group to the instance.
If you need to, you can use the instance’s Actions menu to add the security group.
I prefer to restrict the inbound port 22 rule to my broadband static IP address in the security group.
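If you prefer the aws CLI over the console for that inbound rule, a hedged sketch is below; the security group ID and IP address are placeholders.

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.25/32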
Disaster Recovery and Regular Backups or Transfers
The best way to deal with Disaster Recovery is to imagine the entire system is wiped out, with nothing remaining for you to rely on.
How do you proceed? You may have a WordPress backup plugin to assist, but those plugins can and do fail.
These notes are a bit messy, but they represent experience.
If WordPress has been infected by a virus, my advice is to rebuild from a known clean state, or do a full manual rebuild, as corruption can remain undetectable until it shows up again.
We also have nasty situations where a plugin invades various configuration files with code. I now only use one of two firewall plugins as a result.
A rebuild usually loses global content, such as widgets, CSS, templates, etc., so rebuilds are complex. I usually have another domain name I build on. I export the media files, then for the imports of pages, posts, etc., I edit the export .xml files to replace the live domain name with my rebuild domain name before importing. I also have plugins that provide their own export files. I then compare widgets, CSS, and so on with the original domain, and update accordingly until I have an exact copy under the new domain. A plugin can do all of this, but plugins sometimes fail, hence this option helps.
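As a rough sketch of that domain-name edit, assuming the export file is pages.xml and the rebuild domain is rebuild.example.com (both placeholders), something like this works; keep a copy of the original file:

cp pages.xml pages-rebuild.xml
sed -i 's/mydomain\.com/rebuild.example.com/g' pages-rebuild.xml
grep -c 'rebuild.example.com' pages-rebuild.xml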
Rebuilds can cause havoc with galleries. When I commence a rebuild, I add WordPress to a new database, configure it, and add plugins. You may not want to rebuild using an older version of MySQL. If someone has used a different MySQL platform, there may be SQL statements to delete from the exported database in order to import it into MariaDB.
Another way is to export the original database, and edit it for your rebuild domain name. You create the new database and owner permissions in phpMyAdmin, and import. This can be awkward if there are licenses and other unique entries in the original database. It takes some experience to work with these situations. Anyway, an import can fail. So you cannot rely on a database backup completely.
One way around failed database imports is to keep exports of the essential WordPress/theme/plugin tables separately, and not bother with the other plugins, or keep those as separately exported tables.
I am mentioning these issues and techniques in general, as restoration can be complicated. It helps to know this when transferring a website, say from one provider to another.
The use of the aws command to make backups is shown in the Debian 11 article, with the aws.sh and awsdb.sh scripts.
A disaster recovery backup needs more functionality in the aws.sh script. You should back up /home/admin, including /home/admin/.ssh, and keep that backdoor user in sudoers. You may have other directories, such as /data, where you add other content.
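The exact aws.sh contents are in the Debian 11 article; as a hedged sketch of just the extra disaster recovery part (the bucket name and /data directory are placeholders), the additions amount to something like this:

#!/bin/sh
# Sketch: archive the home directory (which includes .ssh) plus any extra content
# directories, then copy the archive to S3 alongside the web root backup.
d=`date | awk '{print $2,$3,$NF}'|tr " " "-"`
tar cvf /tmp/home-$d.tar /home/admin /data
aws s3 cp /tmp/home-$d.tar s3://my-backup-bucket/home-$d.tar
rm /tmp/home-$d.tar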
A full disaster recovery backup needs an EC2 snapshot:
Make sure the instance has no major file errors:
cd /
ls -lRa
If the system freezes, you have a problem, and a snapshot would only copy the problem.
We cannot attempt hard disk repair commands – it is now more complicated than the once well-known use of fsck.
It is helpful to add a name to your live instance’s Volume and IP address. Get familiar with the EC2 panels and tabs.
To do a snapshot, stop the instance, and wait for it to stop.
From the snapshot menu, back up the volume – you can get practice at these things.
After the snapshot is made, start the instance. All is done.
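The console steps above can also be scripted with the aws CLI; a hedged sketch follows, in which the instance ID, volume ID, and description are placeholders.

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-change backup"
aws ec2 start-instances --instance-ids i-0123456789abcdef0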
As an exercise, create a dummy EC2 instance (no IP address needed) of the same instance type as the one whose snapshot you made.
Stop the dummy instance then detach its new volume. (Find out how to do this)
Delete the dummy volume (taking care it is the right one!)
Make a new volume from a prior snapshot and name the volume, e.g. “test”.
Attach the volume to the instance, using /dev/xvda (usually this is the device we use or type in.)
Restart the dummy instance, and there it is. This process is important to know.
Now undo all this work – stop the instance, terminate it (when it is stopped), check the volume has been removed, and if not, delete it.
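For reference, the detach/create/attach part of that exercise in aws CLI form looks roughly like this (every ID and the availability zone are placeholders):

aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 delete-volume --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone ap-southeast-2a --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=test}]'
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbb --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0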
If you had an IP address on it, you would stop the instance, disassociate the IP address, then take steps to delete the instance.
If you do not want to keep the IP address, you then release it.
Never let EC2 automatically assign an IP address when creating an instance. Allocate and associate an Elastic IP address afterwards, so you can first check the IP is not blacklisted, and so there are no hiccups in releasing the address at some future point. If you cannot release an address, lodge a help desk request under account and billing, as it impacts billing.
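A hedged sketch of that allocate/associate/release sequence with the aws CLI (all IDs are placeholders) is:

aws ec2 allocate-address --domain vpc
# check the returned public IP against blacklist lookup sites before associating it
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
# later, when finished with the address:
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0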
We do not do any of our work with paid help desk support. A business with an IT team could perhaps afford to do so.
When using the mysqldump command, I send its output messages to info.log. This helps ensure we are not backing up a bad database. As the database is crucial, a live site may want nightly gzip backups with a lifecycle of, say, three months in S3. It is up to you. A commerce site may need more frequent backups.
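The awsdb.sh script in the Debian 11 article covers this; as a hedged nightly sketch (the database name, credentials file, and bucket are placeholders), it amounts to something like:

#!/bin/sh
# Sketch: dump the database, log any errors to info.log, compress, copy to S3.
# The three-month retention is then a lifecycle rule on the S3 bucket, not part of the script.
d=`date | awk '{print $2,$3,$NF}'|tr " " "-"`
mysqldump --defaults-extra-file=/home/admin/.my.cnf wordpressdb > /home/admin/wordpressdb-$d.sql 2>> /home/admin/info.log
gzip /home/admin/wordpressdb-$d.sql
aws s3 cp /home/admin/wordpressdb-$d.sql.gz s3://my-backup-bucket/db/ >> /home/admin/info.log 2>&1
rm /home/admin/wordpressdb-$d.sql.gz
exit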
The aws.sh script uses the tar command. The only way to check that a backup is reasonably good is to run 'tar tvf' on the file to list its contents. An example of tar is:
cd /var/www/html
tar cvf /home/admin/backup.tar ./??* ./*
tar tvf /home/admin/backup.tar
In principle, never put weaker applications on the same server or database as critical apps. For instance, a fully fledged email server should not be on the same EC2 instance as a live commerce site. Email servers are terrible things to work with, and they also chew up memory and swap space quickly, making it impossible to run WordPress as well. If you had a help desk ticketing system or a forum, those have their own vulnerabilities and should not be placed on the same server as your commercial WordPress site.
A database has no general requirement to be on another server, as that represents two failure points instead of one. Some providers do this, and you end up with database errors on your browser screen too often.
If you have had many WordPress page edits, you can use an optimisation plugin (make a database export first) to reduce the database size. Large databases slow WordPress down. If a database is really big, another method is to run the mysql client manually on the instance and import the .sql file from a local directory, rather than suffering timeouts importing from your PC.
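For example, a large import run on the instance itself (the database name and file are placeholders) is simply:

cd /home/admin
mysql -u root -p wordpressdb < wordpressdb.sql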
Crontab startup scripts
Add your own scripts, e.g. blacklist.sh, a script that uses iptables/ipset to block traffic on various ports from various countries.
Edit crontab:

crontab -e

@reboot sudo sh /home/admin/blacklist.sh >/dev/null 2>&1

[save and exit]
After reboot, in this example, run iptables -L -vn and you should see from the output that the script worked.
Further below are the blacklist.sh, cidr.sh and firewall.sh scripts.
Blacklisting scripts – using ipset/iptables
I use this with Nginx. I use ip2location with httpd/apache2. You can modify and test for apache if you wish. iptables is close to the kernel, so I like it.
This will block countries, selected CIDR ranges, domains, and individual IP addresses as they are needed.
On Debian you install the ipset package, which brings in iptables, whereas on Linux2023 you need to install both ipset and iptables with the dnf command.
In this version we add a directory /var/www/html/firewall to contain multiple country .txt files we want to block. These were created from ip2location's website. Each .txt file is under /var/www/html/firewall. Only execute the script once, otherwise you will have duplicate rules.

cd /home/admin
vi blacklist.sh

#!/bin/sh
# IP blacklisting script for Linux servers
# Pawel Krawczyk 2014-2015
# documentation https://github.com/kravietz/blacklist-scripts

# iptables logging limit
LIMIT="10/minute"

# try to load config file
# it should contain one blacklist URL per line
config_file="/etc/ip-blacklist.conf"

if [ -f "${config_file}" ]; then
    source ${config_file}
else
    # if no config file is available, load default set of blacklists
    # URLs for further blocklists are appended using the classical
    # shell syntax: "$URLS new_url"

    # Emerging Threats lists offensive IPs such as botnet command servers
    URLS="http://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt"

    # Blocklist.de collects reports from fail2ban probes, listing password brute-forces, scanners and other offenders
    URLS="$URLS https://www.blocklist.de/downloads/export-ips_all.txt"

    # YOUR OWN BLACKLISTS
    # URLS="$URLS https://mydomain.com/firewall/block.txt"
    # YOUR OWN WordPress bad logins (if using them via the wplogin.sh script - see further below)
    # URLS="$URLS https://mydomain.com/firewall/wplogin.txt"
    URLS="$URLS https://mydomain.com/firewall/japan.txt"
    URLS="$URLS https://mydomain.com/firewall/russia.txt"
    URLS="$URLS https://mydomain.com/firewall/china.txt"
    URLS="$URLS https://mydomain.com/firewall/northkorea.txt"
    URLS="$URLS https://mydomain.com/firewall/iran.txt"
    URLS="$URLS https://mydomain.com/firewall/india.txt"
    URLS="$URLS https://mydomain.com/firewall/turkey.txt"
    URLS="$URLS https://mydomain.com/firewall/poland.txt"
    URLS="$URLS https://mydomain.com/firewall/romania.txt"
    URLS="$URLS https://mydomain.com/firewall/southkorea.txt"
    URLS="$URLS https://mydomain.com/firewall/netherlands.txt"
    URLS="$URLS https://mydomain.com/firewall/kazakhstan.txt"
    URLS="$URLS https://mydomain.com/firewall/germany.txt"
    URLS="$URLS https://mydomain.com/firewall/brazil.txt"
fi

link_set () {
    if [ "$3" = "log" ]; then
        iptables -A "$1" -m set --match-set "$2" src,dst -m limit --limit "$LIMIT" -j LOG --log-prefix "BLOCK $2 "
    fi
    iptables -A "$1" -m set --match-set "$2" src -j DROP
    iptables -A "$1" -m set --match-set "$2" dst -j DROP
}

# This is how it will look on the server
# Chain blocklists (2 references)
#  pkts bytes target prot opt in  out source    destination
#     0     0 LOG    all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set manual-blacklist src,dst limit: avg 10/min burst 5 LOG flags 0 level 4 prefix "BLOCK manual-blacklist "
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set manual-blacklist src,dst
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set rules.emergingthreats src
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set rules.emergingthreats dst
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set www.blocklist.de src
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set www.blocklist.de dst
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set www.badips.com src
#     0     0 DROP   all  --  *   *   0.0.0.0/0 0.0.0.0/0   match-set www.badips.com dst

blocklist_chain_name=blocklists

# check if we are on OpenWRT
if [ "$(which uci 2>/dev/null)" ]; then
    # we're on OpenWRT
    wan_iface=pppoe-wan
    IN_OPT="-i $wan_iface"
    INPUT=input_rule
    FORWARD=forwarding_rule
    COMPRESS_OPT=""
else
    COMPRESS_OPT="--compressed"
    INPUT=INPUT
    FORWARD=FORWARD
fi

# create main blocklists chain
if ! iptables -nL | grep -q "Chain ${blocklist_chain_name}"; then
    iptables -N ${blocklist_chain_name}
fi

# inject references to blocklist in the beginning of input and forward chains
if ! iptables -nL ${INPUT} | grep -q ${blocklist_chain_name}; then
    iptables -I ${INPUT} 1 ${IN_OPT} -j ${blocklist_chain_name}
fi
if ! iptables -nL ${FORWARD} | grep -q ${blocklist_chain_name}; then
    iptables -I ${FORWARD} 1 ${IN_OPT} -j ${blocklist_chain_name}
fi

# flush the chain referencing blacklists, they will be restored in a second
iptables -F ${blocklist_chain_name}

# create the "manual" blacklist set
# this can be populated manually using ipset command:
# ipset add manual-blacklist a.b.c.d
set_name="manual-blacklist"
if ! ipset list | grep -q "Name: ${set_name}"; then
    ipset create "${set_name}" hash:net
fi
link_set "${blocklist_chain_name}" "${set_name}" "$1"

# download and process the dynamic blacklists
for url in $URLS
do
    # initialize temp files
    unsorted_blocklist=$(mktemp)
    sorted_blocklist=$(mktemp)
    new_set_file=$(mktemp)
    headers=$(mktemp)

    # download the blocklist
    set_name=$(echo "$url" | awk -F/ '{print substr($3,0,21);}') # set name is derived from source URL hostname
    curl -L -v -s ${COMPRESS_OPT} -k "$url" >"${unsorted_blocklist}" 2>"${headers}"

    # this is required for blocklist.de that sends compressed content regardless of asked or not
    if [ -z "$COMPRESS_OPT" ]; then
        if grep -qi 'content-encoding: gzip' "${headers}"; then
            mv "${unsorted_blocklist}" "${unsorted_blocklist}.gz"
            gzip -d "${unsorted_blocklist}.gz"
        fi
    fi

    # autodetect iblocklist.com format as it needs additional conversion
    if echo "${url}" | grep -q 'iblocklist.com'; then
        if [ -f /etc/range2cidr.awk ]; then
            mv "${unsorted_blocklist}" "${unsorted_blocklist}.gz"
            gzip -d "${unsorted_blocklist}.gz"
            awk_tmp=$(mktemp)
            awk -f /etc/range2cidr.awk <"${unsorted_blocklist}" >"${awk_tmp}"
            mv "${awk_tmp}" "${unsorted_blocklist}"
        else
            echo "/etc/range2cidr.awk script not found, cannot process ${unsorted_blocklist}, skipping"
            continue
        fi
    fi

    sort -u <"${unsorted_blocklist}" | egrep "^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}(/[0-9]{1,2})?$" >"${sorted_blocklist}"

    # calculate performance parameters for the new set
    if [ "${RANDOM}" ]; then
        # bash
        tmp_set_name="tmp_${RANDOM}"
    else
        # non-bash
        tmp_set_name="tmp_$$"
    fi
    new_list_size=$(wc -l "${sorted_blocklist}" | awk '{print $1;}' )
    hash_size=$(expr $new_list_size / 2)

    if ! ipset -q list ${set_name} >/dev/null ; then
        ipset create ${set_name} hash:net family inet
    fi

    # start writing new set file
    echo "create ${tmp_set_name} hash:net family inet hashsize ${hash_size} maxelem ${new_list_size}" >>"${new_set_file}"

    # convert list of IPs to ipset statements
    while read line; do
        echo "add ${tmp_set_name} ${line}" >>"${new_set_file}"
    done <"$sorted_blocklist"

    # replace old set with the new, temp one - this guarantees an atomic update
    echo "swap ${tmp_set_name} ${set_name}" >>"${new_set_file}"

    # clear old set (now under temp name)
    echo "destroy ${tmp_set_name}" >>"${new_set_file}"

    # actually execute the set update
    ipset -! -q restore < "${new_set_file}"

    link_set "${blocklist_chain_name}" "${set_name}" "$1"

    # clean up temp files
    rm "${unsorted_blocklist}" "${sorted_blocklist}" "${new_set_file}" "${headers}"
done

[save and exit]

chmod 777 blacklist.sh; chown root blacklist.sh; chgrp admin blacklist.sh
We now add the firewall_clear.sh script to set iptables back to an empty, accept-everything state, which is useful if you want to renew settings without a reboot.
cd /home/admin
vi firewall_clear.sh

#!/bin/bash
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
exit

[save and exit]

chown root firewall_clear.sh; chgrp admin firewall_clear.sh; chmod 777 firewall_clear.sh
We now add the cidr.sh script, which is designed to stop people flooding a port by holding connections open until the operating system freezes.
I also add lines to block ranges of IP addresses. (Be careful you don't block Amazon, your own IP, or Letsencrypt, etc.)
cd /home/admin
vi cidr.sh

#!/bin/sh
iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 10 -j DROP
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 10 -j DROP
iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 5 -j DROP
# ALLOW Uptime Robot IP4 address in Sydney
iptables -A INPUT -s 54.79.28.129 -j ACCEPT
# Block bad domains (read from the local domains.txt file created below)
for i in $(cat /var/www/html/firewall/domains.txt); do
    echo "Blocking all traffic to and from $i"
    iptables -I INPUT -s $i -j DROP
    iptables -I OUTPUT -d $i -j REJECT
done
# Drop CIDR ranges
# USA DIGITALOCEAN-ASN scammers - this is an example. Your /var/log/nginx/ logs will show bad actors.
iptables -A INPUT -s 104.248.64.0/22 -j DROP
# Tokyo and Kagoya problems from Japan
iptables -A INPUT -s 210.128.0.0/13 -j DROP
iptables -A INPUT -s 153.127.224.0/19 -j DROP
iptables -L -vn
exit

[save and exit]
Create a file /var/www/html/firewall/domains.txt
Here is an example:
cd /var/www/html/firewall
vi domains.txt

binance.com

[save and exit]
If you want to add your own block.txt file to catch bad actors individually, here is an example I use. It keeps growing as I add addresses.
You uncomment the block.txt line in blacklist.sh
cd /var/www/html/firewall
vi block.txt

152.32.252.116
157.230.236.124
167.172.160.194
167.99.129.24
24.199.92.243
2.63.211.145
80.66.76.130
91.238.181.21
92.255.85.107
92.255.85.253
167.94.145.100
185.224.128.67
34.31.81.100
172.203.163.52
119.28.156.186
51.75.147.222
134.122.26.31
185.196.220.26
66.249.66.21
109.202.99.36
159.65.82.252
47.88.6.178
138.68.178.173
45.113.95.191
172.66.40.245
139.59.123.61
104.234.115.58
206.168.34.57
1.14.7.100
162.142.125.199
151.80.67.229
35.185.221.46
156.233.225.192

[save and exit]
Finally, we can write our own scripts to check for attempted bad logins to WordPress, and add them to a list if not already in the list.
This list tends to keep growing. A reboot or firewall_clear.sh will clear the live iptables rules; you can then manually run blacklist.sh and cidr.sh again, using the wplogin.txt line in blacklist.sh to reload the file. Initially, create a wplogin.txt file with no content in it.
I run the script from crontab, and use my own static IP address in it to ensure I don't block myself. Hence, this script is only suitable if you have a static IP address at home. You can modify the script for additional addresses.
Create a dummy file called /var/www/html/firewall/wplogin.txt. Files are permissions 664, owner/group nginx; the ./firewall directory is 2775, nginx.
If using apache2 or httpd, the owner/group will likely be www-data or apache, and the home directory /home/ec2-user or /home/admin (just check your permissions and modify the scripts accordingly). You would have to modify the script below if not using Nginx, and test it.
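A short sketch of creating that empty file with the permissions just described (Nginx case) is:

cd /var/www/html
chown nginx firewall; chgrp nginx firewall; chmod 2775 firewall
cd firewall
:> wplogin.txt
chown nginx wplogin.txt; chgrp nginx wplogin.txt; chmod 664 wplogin.txt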
This only works with Nginx and the /var/log/nginx/access.log file name, and your own static IP4 address.

cd /home/admin
vi wplogin.sh

#!/bin/bash
# BASED ON HAVING MY OWN STATIC IP ADDRESS
# Replace XXX.XXX.XXX.XXX below with your own static IP address.
d=`date | awk '{print $2,$3,$NF}'|tr " " "-"`
echo "wplogin hacker IPs:" $d >> /home/admin/info.log
for i in `sort /var/log/nginx/access.log|grep "wp-login.php" | awk '{print $1}'|grep -v "XXX.XXX.XXX.XXX" | sort -u`
do
    a=""
    a=`grep $i /var/www/html/firewall/wplogin.txt`
    if [ "$a" = "" ] ; then
        echo $i >> /var/www/html/firewall/wplogin.txt
        iptables -I INPUT -p tcp -s $i -j DROP
        echo $i >> /home/admin/info.log
    fi
done
cp -p /var/log/nginx/access.log /var/log/nginx/archive-$d-access.log
:> /var/log/nginx/access.log
exit

[save and exit]

chmod 777 wplogin.sh; chown root wplogin.sh; chgrp admin wplogin.sh
Your crontab entry can be extended to include blacklist.sh and cidr.sh, and optionally the WordPress wplogin.sh script (using bash, not sh)
crontab -e

@reboot sudo sh /home/admin/blacklist.sh >/dev/null 2>&1
@reboot sudo sh /home/admin/cidr.sh >/dev/null 2>&1
# 3am each night
0 3 * * * sudo bash /home/admin/wplogin.sh >/dev/null 2>&1

[save and exit]
You should be set to manually test both scripts. The command iptables -L -vn will show you the output.
Here is the countries.tar.zip file with 14 countries. You need to add your own block.txt, wplogin.txt, and domains.txt files, and set the file permissions when you extract them to /var/www/html/firewall. (You can of course configure your own directories and names.)
This will not automatically capture all IP4 addresses from each country in the list. You could add those manually if that is an issue. You need to use ip2location's website to download or update the lists manually, using the format shown in the .txt files.
Fix file permissions in WordPress
If WordPress cannot transfer files, or it shows an FTP credentials form, you need to fix the file permissions.
After uploading WordPress from a zip file to your website root directory, for instance, /var/www/html, place this script there and execute it. Use www-data, apache or nginx depending on the Operating System. I have shown nginx below.
From a 'sudo su' root shell login... replace chgrp admin with your own OS group, e.g. ec2-user.

cd /var/www/html
vi chdir.sh

#!/bin/sh
chown -R nginx *
chgrp -R nginx *
find . -type d -exec chmod 2775 {} \;
find . -type f -exec chmod 0664 {} \;
# in case any hidden files:
chown nginx .??*
chgrp nginx .??*
chmod 664 .??*
chmod 777 *.sh
chown root chdir.sh
chgrp root chdir.sh
chmod 770 chdir.sh
exit

[save and exit]

chmod 777 chdir.sh; chown root chdir.sh; chgrp admin chdir.sh

Run the script from root:

./chdir.sh
I also set ./html to either nginx, www-data, or apache.
e.g. cd /var/
chmod 2775 www
cd www
chmod 2775 html
chown nginx html; chgrp nginx html
Clear the server memory and swap space each night
Use your own value (e.g. 250). I find that on the smaller EC2 instances things get slow or start to freeze once swap usage reaches around 300 to 400 MB. I still allocate 1GB of swap space, as I have had issues with 750MB.
I’ve shown /home/admin. You would have /home/ec2-user on Linux2023
This example relates to Nginx with memcached. (We never use memcache, only memcached. I never use apcu.)
We reload nginx rather than restart it. A restart shuts it down entirely, which is helpful when making system changes, but a live system should use reload. A restart combines both stop and start.
I show swap space, but it is up to you to decide. You can monitor free space with variations on the script.
I use this in crontab; the script only takes action if the 'free -m' swap usage is above the nominated value, such as 200.
cd /home/admin
vi services.sh

#!/bin/sh
g=200
f=0
h=`free -m|grep Swap|awk '{print $3}'`
f=$(expr $h + 1)
if [ $f -le $g ] ; then
    d=`date`
    echo services.sh: date: $d freespace $h >> /home/admin/info.log
else
    d=`date`
    /usr/bin/systemctl reload nginx
    /usr/bin/systemctl reload php8.2-fpm
    /usr/bin/systemctl restart mariadb
    /usr/bin/systemctl restart memcached
    /usr/sbin/swapoff -a
    /usr/sbin/swapon -a
    k=`free -m|grep Swap|awk '{print $3}'`
    echo services.sh: date: $d freespace before: $h freespace after: $k >> /home/admin/info.log
fi
exit

[save and exit]

chmod 777 services.sh; chown root services.sh; chgrp admin services.sh

crontab -e

0 0 * * * /home/admin/services.sh >/dev/null 2>&1

[save and exit]
Manually restart services during development and testing
This is helpful when developing more complicated services. The example below is shown for /home/admin and Nginx.
cd /home/admin
vi restart.sh

#!/bin/sh
/usr/bin/systemctl stop nginx
/usr/bin/killall nginx
/usr/bin/systemctl stop mariadb
/usr/bin/systemctl stop memcached
/usr/bin/systemctl reload php8.2-fpm
/usr/bin/systemctl start memcached
/usr/bin/systemctl start mariadb
/usr/bin/systemctl start nginx
exit

[save and exit]

chmod 777 restart.sh; chown root restart.sh; chgrp admin restart.sh

Test it:

/home/admin/restart.sh
This can be modified – e.g. you may not want the kill command, you may want to add the swapoff/on commands, or status -l, or free -m.