AWS Storage – Base Options


Warning: errors in /etc/fstab will likely make your system unbootable

If experimenting with adding or removing mounted filesystems, ensure the fstab file is correct, or comment out the entries you make, before rebooting.

It is best to learn how to mount EFS or GP3 storage from the Amazon documentation.


S3 Storage


S3 Bucket Storage - Linux2023 or Debian (or cPanel)

You can use aws s3 commands to work with buckets by adding your IAM role (created in a previous article) to the EC2 instance IAM profile.

Create and use a bucket in your local region, with the default settings.
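If you prefer the CLI once the instance profile is attached, here is a minimal sketch of creating the bucket (MYS3_BUCKET and the region are placeholders; substitute your own):

aws s3 mb s3://MYS3_BUCKET --region ap-southeast-2   # bucket names are globally unique
aws s3 ls                                            # confirm the bucket exists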

For systems on 3rd-party platforms, you can install the AWS CLI; refer to the AWS documentation. It is very useful for backups, but a bit tricky to set up.

Refer to: https://docs.aws.amazon.com/cli/v1/userguide/install-linux.html

I will go through these details for cPanel users. Open your cPanel terminal shell and create the aws.sh script:

pwd
(This should be your /home/domain_name under cPanel; I will call it MYCPANEL_NAME. MYDOMAIN is the domain name you are backing up, MYCPANEL_DB is your cPanel database name, and MYPASSWD is your database password.)
(Note: we will have previously installed the aws commands onto cPanel in the terminal shell; in this example, the aws binary is /home/MYCPANEL_NAME/etc/public/aws.)

vi aws.sh

#!/bin/sh
# date stamp, e.g. Jan-1-2025, used in the backup file names
d=`date | awk '{print $2,$3,$6}' | tr " " "-"`
echo "file backup: $d MYDOMAIN.COM" >> /home/MYCPANEL_NAME/info.log
cd /home/MYCPANEL_NAME/public_html
# archive the dot-files and everything else under public_html
tar cvf /home/MYCPANEL_NAME/MYDOMAIN-$d.tar ./.??* ./*
/home/MYCPANEL_NAME/etc/public/aws s3 mv /home/MYCPANEL_NAME/MYDOMAIN-$d.tar s3://MYS3_BUCKET/MYDOMAIN-$d.tar

OR (depending on where the CLI was actually installed) use

/home/MYCPANEL_NAME/aws/bin/aws in the line above.

# SQL database backup
d=`date | awk '{print $2,$3,$6}' | tr " " "-"`
echo "db backup: $d MYCPANEL_DB" >> /home/MYCPANEL_NAME/info.log
mysqlcheck --user=MYCPANEL_DB --password=MYPASSWD MYCPANEL_DB >> /home/MYCPANEL_NAME/info.log
mysqldump --user=MYCPANEL_DB --password=MYPASSWD MYCPANEL_DB > /home/MYCPANEL_NAME/MYCPANEL_DB-$d.sql
/home/MYCPANEL_NAME/etc/public/aws s3 mv /home/MYCPANEL_NAME/MYCPANEL_DB-$d.sql s3://MYS3_BUCKET/MYCPANEL_DB-$d.sql

OR, again depending on the install location, use

/home/MYCPANEL_NAME/aws/bin/aws in the line above.

exit

[save and exit]

From your home directory:
mkdir aws
cd aws

(download the awscli-bundle.zip file to this aws directory)

unzip awscli-bundle.zip

(Find where python3 is installed and use that version; this cPanel example uses the /opt/alt path.)

/opt/alt/python38/bin/python3.8 ./awscli-bundle/install -i /home/MYCPANEL_NAME/aws -b /home/MYCPANEL_NAME/aws

Create the aws.txt file containing your IAM S3 keys followed by two blank lines, e.g.:

vi aws.txt
AKIXXXXXXXXXXXXX
3vvXXXXXXXXXXXXXXXXXXXXXX


[save and exit, keeping the two blank lines at the end. Use your own S3 access key ID and secret access key, not the SMTP keys.]
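(My assumption, for illustration: a file laid out this way can be piped straight into aws configure, which prompts for the key ID, the secret key, the default region, and the output format; the two blank lines accept the defaults for the last two.)

/home/MYCPANEL_NAME/etc/public/aws configure < aws.txt   # assumption: answers the four prompts from the file
/home/MYCPANEL_NAME/etc/public/aws s3 ls                 # verify the keys work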

You can create a crontab entry to back up your WordPress files and database to S3, with a lifecycle policy on the bucket or its sub-directories (a sketch follows).
See my article on Debian or Linux 2023 AWS.
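As a sketch, assuming aws.sh sits in your cPanel home directory, a weekly crontab -e entry might look like:

# run the S3 backup script at 02:15 every Sunday
15 2 * * 0 /bin/sh /home/MYCPANEL_NAME/aws.sh >> /home/MYCPANEL_NAME/info.log 2>&1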

Your S3 bucket permissions tab should show something like this:

{
    "Version": "2012-10-17",
    "Id": "ExamplePolicy",
    "Statement": [
        {
            "Sid": "AllowSSLRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::MYBUCKET_NAME",
                "arn:aws:s3:::MYBUCKET_NAME/*"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
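If you would rather apply the policy from the CLI than the permissions tab, save the JSON to a file and use something like (MYBUCKET_NAME is the placeholder):

aws s3api put-bucket-policy --bucket MYBUCKET_NAME --policy file://policy.json
aws s3api get-bucket-policy --bucket MYBUCKET_NAME   # verify it applied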

The permissions in an email bucket are different.

They would look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSESPuts-1674865967951",
            "Effect": "Allow",
            "Principal": {
                "Service": "ses.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::MY_EMAIL_BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceAccount": "MY_AWS_ACCOUNT_NUMBER"
                },
                "StringLike": {
                    "AWS:SourceArn": "arn:aws:ses:*"
                }
            }
        }
    ]
}

For cPanel, download and unzip the bundle into the cPanel home directory, then move it to your aws sub-directory.

Mount an S3 Bucket on EC2 Linux2023 and/or list files in a browser

This is totally unreliable, but can be useful if you want to make a tar file backup of your S3 bucket(s) to an EFS disk, or if you have space on a GP3 hard disk.
We use the s3fs command.

The information below shows how to install the s3fs software, the shell script required to mount it, the crontab error checking for infrequent situations where the “nfs” mount point is corrupt, and examples of how we may use the mount point to access documents or images.

As an aside, all of my archived websites place the wp-content/uploads files onto S3 buckets without any need for scripting.

That is, if we create a bucket, say s3mydomain.au, and create a mount point called /var/content that points to this bucket, we can create a soft link:

/var/www/html/some_archive/wp-content/uploads -> /var/content/some_archive/wp-content/uploads

For calling documents or photographs, we could use:

/var/www/html/content -> /var/content, and from there access subdirectories with images, such as /var/content/brisbane and so on, or
/var/www/html/content/mydocuments -> /var/content/mydocuments.

This gives a sense of what we are doing. There is no configuration of CDN in this case, or use of problematic WordPress plugins that access buckets.

Something to keep in mind: there is a practical limit on the size of a .tar backup file in an S3 bucket, so if you have thousands of photographs, create several tar files (a sketch follows). S3 bucket speed is great, and practical compared to other cloud systems I have tested.
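A hedged sketch of splitting a large photo collection into several tar files before upload (the year-based layout is only an example):

cd /var/www/html/wp-content/uploads
for y in 2022 2023 2024; do
  tar cf /tmp/photos-$y.tar ./$y          # one tar per year keeps each archive a manageable size
  aws s3 mv /tmp/photos-$y.tar s3://MYS3_BUCKET/photos-$y.tar
done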
Install the software…

dnf -y install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
cd /home/ec2-user
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh 
./configure --prefix=/usr
make
make install
[let's create a mount point...]
cd /var
mkdir content
chmod 2775 content
cd /var/www/html
ln -s /var/content content
ls -la

Now create an IAM role with any name, such as s3admin. (You may already have this from my previous articles.)

Give it these permission policies:

AmazonS3FullAccess
AdministratorAccess
CloudFrontFullAccess

I add Cloudfront in my own role as I use that elsewhere.

Now add this role to the EC2 instance under the EC2 console, Instances (then click on the instance) > Security > Modify IAM Role

Using the above examples, this is how you mount the bucket (use your own region, bucket name, and so on):

s3fs -o iam_role="s3admin" -o use_path_request_style -o url="https://s3-ap-southeast-2.amazonaws.com" -o endpoint=ap-southeast-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache="" s3mydomain.au /var/content

df -hT

You may notice caching is set to an empty string, i.e. disabled. Otherwise copies of files are cached on your instance under /tmp, which is not good!

Then issue the Unix df command to view the mount point.

You may add a test file to your bucket from the Amazon S3 console, and try viewing it with:

ls /var/content/*

If the mount point gets “twisted up” during your installation, you have to restart the instance from the instance console.

 

When you add files to the mount point, simply use Unix commands. For example:

cd /var/content

mkdir documents

chmod 2775 documents

cd /home/ec2-user

mv mydocument /var/content/documents

cd /var/content/documents

chmod 744 myfile

You can experiment with permissions.

As noted above, we mount with no caching, otherwise /tmp fills up rapidly with the content!

I never put this mount command in a shell script or crontab for reboot, as it is way too buggy. It can freeze commands if it does not mount; if that happens, use CTRL-C, CTRL-D, or CTRL-Z to get out of it. I execute the command from /.

If it freezes, unmount with the following command (use your own mount point):

fusermount -u /var/content

Then check commands are working, and try the s3fs mount again.

It is not recommended to change any file permissions (objects) in S3 via this mount. Again, a tar file backup is a good use of s3fs.
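For example, here is a minimal sketch of that tar backup, assuming the bucket is mounted on /var/content and an EFS or GP3 disk is mounted on /data:

tar cf /data/s3mydomain-$(date +%F).tar -C /var/content .   # archive the whole bucket
ls -lh /data/s3mydomain-*.tar                               # check the result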

You may experiment on Debian. I do not recall exactly, but I believe it does not work on 3rd-party platforms.

As a side note, cPanel has PHP versions and options which can come into play if you want exec() calls in PHP to run shell scripts, which in turn access the buckets.

How to List S3 Files on your Website

This takes a while to set up and become familiar with. I create a PHP shortcode to display files from an S3 directory.

I have added this content for completeness. It is involved.

This shows the basic code to display files in a bucket. To retrieve the files from a browser, they need to be in a public S3 bucket with public access granted to the files.

cd /var/www/html/scripts

vi public.sh

#!/bin/bash
IFS=$'\n'
bucket="$1"
dir="$2"
# list the sub-directory, skipping the directory entry itself and macOS .DS_Store files
for a in `aws s3 ls s3://$bucket/$dir/ | grep -v ".DS_Store" | tail -n +2`
do
file=`echo $a | awk '{print $4}'`          # object name
inf=`echo $a | awk '{print $1, $2, $3}'`   # date, time, size
echo $inf $file
done
exit

[save and exit]

chmod 775 public.sh


For example:

./public.sh MYBUCKET public -> this would display all files in your nominated bucket under the sub-directory called public.

e.g. here is some output:

2025-05-19 16:21:29 319425 ec2-door_640.jpg


The requirement now is to put this into a shortcode such as this:

[public bucket="MYBUCKET" dir="SUBDIRECTORY"]

I have used various shortcode PHP plugins. You would need to work this out.

Here is sample PHP code to make this work. I have included the putenv lines with keys to access the buckets for when the code runs in cPanel, instead of using the S3 IAM profile added to the EC2 instance.

putenv('AWS_DEFAULT_REGION=ap-southeast-2');
putenv('AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXXXX');
putenv('AWS_SECRET_ACCESS_KEY=3vvYYYYYYYYYYYYYYYYYYYYYYY');
$bucket = $atts['bucket'];
$dir = $atts['dir'];

// the putenv() values above are inherited by the shell that shell_exec() spawns
$command = "/var/www/html/scripts/public.sh $bucket $dir 2>/dev/null";
$output = shell_exec($command);
echo $output . PHP_EOL;

In cPanel you have to enable shell_exec in the PHP versions/options settings.

However, the strings echoed by public.sh can no longer contain double quotes; use single quotes for the HTML attributes instead.
Here is a modified version of the script:

#!/bin/bash
IFS=$'\n'
font="Open Sans"
bucket="$1"
dir="$2"
for a in `aws s3 ls s3://$bucket/$dir/ | grep -v ".DS_Store" | tail -n +2`
do
file=`echo $a | awk '{print $4}'`          # object name
inf=`echo $a | awk '{print $1, $2, $3}'`   # date, time, size
# only single quotes inside the echoed HTML; the link uses the object name, not the whole listing line
echo "$inf <span style='position:absolute;left:390px;'>file: <a href='https://s3.ap-southeast-2.amazonaws.com/$bucket/$dir/$file' target=_blank><span style=text-decoration:underline;color:#444;font-size:17px;font-family:$font;>$file</span></a></span>"
done
exit

[save and exit]

The shortcode would be:

[public bucket="MYBUCKET" dir="SUBDIRECTORY"]

Notice the use of single quotes in the echo line, surrounded by the double quotes of the command string itself.
This is fairly complicated, and errors will not show anything on the web browser screen.

You can also add a case statement to show an image against each file in the list, if PDF or DOCX etc.

Here is a script that uses files on the hard disk, rather than the S3 bucket.
This example takes no bucket argument, so a version of the shortcode would be [public dir="SUBDIRECTORY"]. The PHP code would need altering to match.

#!/bin/bash
IFS=$'\n'
font="Merriweather&#32;Sans"
dir="$1"
for a in `ls /public/$dir | grep -v ".DS_Store"`
do
# the file extension selects which icon to display
e=`echo $a | awk -F. '{print $NF}'`
case $e in
pdf)
ext="<img src=https://mydomain.com/wp-content/uploads/adobe-reader.jpg width=20 height=20 alt=mydomain.com />"
;;
docx|doc)
ext="<img src=https://mydomain.com/wp-content/uploads/msword.jpg width=20 height=20 alt=mydomain.com />"
;;
xlsx|xls)
ext="<img src=https://mydomain.com/wp-content/uploads/excel-icon.jpg width=20 height=20 alt=mydomain.com />"
;;
pptx|ppt)
ext="<img src=https://mydomain.com/wp-content/uploads/ppt-icon.jpg width=20 height=20 alt=mydomain.com />"
;;
png|jpg)
ext="<img src=https://mydomain.com/wp-content/uploads/media-icon.jpg width=20 height=20 alt=mydomain.com />"
;;
mp3|aac|wav|m4a|mpeg|ogg)
ext="<img src=https://mydomain.com/wp-content/uploads/audio-icon.jpg width=20 height=20 alt=mydomain.com />"
;;
mov|avi|mp4|mpeg4|3gp|3gpp|3gpp2|3gp2|quicktime)
ext="<img src=https://mydomain.com/wp-content/uploads/movie-icon.jpg width=20 height=20 alt=mydomain.com />"
;;
*)
ext="<img src=https://mydomain.com/wp-content/uploads/file-icon.jpg width=20 height=20 alt=mydomain.com />"
;;
esac
# invisible placeholder image keeps the link text aligned
pre="<img src=https://mydomain.com/wp-content/uploads/null-icon.jpg width=0 height=0 alt=mydomain.com />"
echo "$a <span style='position:absolute;left:350px;'>$ext</span>  <span style='position:absolute;left:390px;'>file: <a href='https://mydomain.com/public/$dir/$a' target=_blank><span style=text-decoration:underline;color:#444;font-size:17px;font-family:$font;>$pre$a</span></a></span>"
done
exit
done
exit

[save and exit]

The public directory is under /public.  I have various icons to represent file type in the case section. /public is accessed by a soft link in /var/www/html:
lrwxrwxrwx   1 root  nginx          8 Jun 22  2024 public -> /public

The shortcode is the same as above.

Remember that access to buckets from VPN IP addresses can sometimes be blocked on mobile phones.


This is the sort of thing you can do:

<div style="background-color: #d6d6d6; padding-left: 20px; padding-right: 20px;">
<table style="text-align: left; background-color: #d6d6d6;">
<thead>
<tr>
<th style="color: #333; text-align: left; padding-top: 20px;" align="left">My Documents</th>
</tr>
</thead>
<tbody>
<tr style="text-align: left;">
<td style="color: #444; text-align: left; font-family: Courier, Times,sans-serif;" align="left">
<pre>[public bucket="MYBUCKET" dir="SUBDIRECTORY"]</pre>
</td>
</tr>
</tbody>
</table>
</div>

GP3 Storage


Mount an additional GP3 Disk

I don’t play around with increasing the size of an existing GP3 disk.
I make sure the minimal installation is 12GB, and for several multi-sites, larger – e.g. 15GB.
A multi-site will have heavier demands on it, so one cannot be excessive.

I would rather add a GP3 mounted disk…

In EC2 Volumes, create a GP3 volume of some size and attach it to the instance, e.g. as /dev/xvdb.
Wait for its state to become available.
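The same steps can be done from the CLI if you prefer (a sketch; the IDs and availability zone are placeholders):

aws ec2 create-volume --volume-type gp3 --size 10 --availability-zone ap-southeast-2a
aws ec2 attach-volume --volume-id vol-XXXXXXXX --instance-id i-XXXXXXXX --device /dev/xvdb
aws ec2 describe-volumes --volume-ids vol-XXXXXXXX   # wait until the state shows in-use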

After the AWS volume is created and attached to the instance, it is not yet mounted.

Reference: docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html

Logged in as root (sudo su):

mkfs -t xfs /dev/xvdb
lsblk -f

This example will mount the volume to /data:

cd /
mkdir data
mount /dev/xvdb /data
cp /etc/fstab /etc/fstab.bak
blkid

You will see data similar to this:

/dev/nvme1n1: UUID="3c4ddbbc-9e14-4b6d-acb2-ea164a38a76a" TYPE="xfs"
/dev/nvme0n1: PTUUID="61856571-99dd-4505-96fc-1d882022f6" PTTYPE="gpt"
/dev/nvme0n1p1: LABEL="/" UUID="5d259081-e60a-4c60-8f74-78ad0e2562" TYPE="xfs" PARTLABEL="Linux" PARTUUID="f257d0ed-ac68-4b9b-8ee1-1f8cbe74e9e6"
/dev/nvme0n1p128: PARTLABEL="BIOS Boot Partition" PARTUUID="eea7b90-ecc8-4dcb-82e7-3413561bc7f"

Use the UUID value from the new disk with /etc/fstab - use your own values:

vi /etc/fstab

UUID=3c4ddbbc-9e14-4b6d-acb2-ea164a38a76a  /data  xfs  defaults,nofail  0  2

[Save and exit]


umount /data
mount -a
df
df -hT /dev/xvdb
 
You should see data like this:
/dev/nvme1n1   xfs   2.0G   48M  2.0G   3% /data

I would stop and start the instance to ensure there are no issues. This also confirms that a normal reboot will work by default.

You have to be careful with disks, as mistakes can cost you the ability to log in to the instance.

A snapshot could be of the entire instance, or multiple snapshots of the hard disks associated with it.

You would have to remove the entry in /etc/fstab, stop the instance, and detach the disk before restarting or terminating the instance. This preserves the disk for use on another instance with the above commands and edits.
If a disk runs out of space, you could add /dev/xvdc for example, move the files across to it, and delete the xvdb disk, or see how you go with resizing the disk. A sketch of the move follows.
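A sketch of the move-files-across option, assuming the new volume is attached as /dev/xvdc and temporarily mounted on /data2:

mkfs -t xfs /dev/xvdc        # new, empty disk only; this destroys existing data
mkdir /data2
mount /dev/xvdc /data2
cp -a /data/. /data2/        # copy everything, preserving permissions and ownership
# then point the /data entry in /etc/fstab at the new disk's UUID and remount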

 


EFS Storage


EFS Storage - Linux2023 or Debian

Documentation: https://ap-southeast-2.console.aws.amazon.com/efs/home?region=ap-southeast-2#/get-started

I am cautious about using this with Debian, as there are so many packages installed, with dependencies on correct versions.

Access the EFS menu in your local region.

EFS gives immediate storage/retrieval response, but has a cost for file transfer and access, which you can review under Account > Billing. If you need fast response, EFS is good. EFS is mounted locally, so there is no blocking of a VPN by Amazon or elsewhere. It also means you can tighten site security in .htaccess or nginx.conf with img-src 'self'; you cannot do that when serving from S3 buckets.

EFS transfers files to archive storage when not in use, at low cost. When a visitor views the files, they return to standard EFS storage, which incurs a transfer cost. After the files remain unused for a nominated period, they go back to archive.

EFS will not transfer files of 128KB or less to archive; these remain in standard storage at a higher cost.

EFS has a configuration option for replicating files to other servers in the region, in data centers that are physically separated. If you have your files on your PC, or synced copies on S3, the cost is lower without the backup options.

EFS is handy to mount if you want to temporarily store large files. For example, create a tar backup file on EFS, gzip it in the same location, then transfer the gzip file to S3 storage, and remove the EFS file.
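A minimal sketch of that staging workflow, assuming EFS is mounted on /data:

tar cf /data/site-backup.tar -C /var/www/html .   # stage the archive on EFS
gzip /data/site-backup.tar                        # compress it in place
aws s3 cp /data/site-backup.tar.gz s3://MYS3_BUCKET/
rm /data/site-backup.tar.gz                       # free the EFS space again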

If you delete the instance and want to transfer the EFS disk to another instance, unmount it, remove it from /etc/fstab, and then reconfigure on the new instance. Your EFS storage will not mount to different regions.

Here are my own settings, using no backups of the EFS storage.

Performance mode:
General Purpose
Throughput mode:
Enhanced, Elastic
Lifecycle management:
Transition into Infrequent Access (IA):
7 day(s) since last access
Transition into Archive:
None
Transition into Standard:
On first access
Automatic backups:
Disabled
Replication overwrite protection:
Enabled

Give it the name of your intended mount point, e.g.:
data

Here are my Access Point settings:

Root directory path:
/

Give it a tag name, say, the same as your intended mount point, e.g.:
data

For Linux 2023, install the following package:

dnf install amazon-efs-utils

I can’t recall why, but I once had to create a Security Group for EFS and add it to the instance. I currently have no need to do that, but I notice the present filesystem is listed as NFS4, which may be why.

For Debian:

https://github.com/aws/efs-utils?tab=readme-ov-file#on-other-linux-distributions

cd /home/admin

(Use option 1)

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"


sudo apt-get update
sudo apt-get -y install git binutils rustc cargo pkg-config libssl-dev gettext
git clone https://github.com/aws/efs-utils
cd efs-utils
./build-deb.sh
sudo apt-get -y install ./build/amazon-efs-utils*deb

Assuming you got it to work, mount manually, then umount and add the entry to fstab in the same way as shown for Linux2023.
On Debian you will also need systemctl daemon-reload. Test the fstab entry with mount -a, and unmount with umount /data, as sketched below.
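Something like this on Debian:

systemctl daemon-reload   # make systemd pick up the edited fstab
mount -a                  # mount everything listed in fstab
df -hT                    # confirm /data appears
umount /data              # unmount again while testing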

 

 

Amazon provides other information such as encryption in transit. I will not go into these details.

Reference:

https://docs.aws.amazon.com/efs/latest/ug/installing-amazon-efs-utils.html

https://docs.aws.amazon.com/efs/latest/ug/upgrading-stunnel.html

 

Let’s say you want EFS under /data. Create that directory on your server with chmod 2775. After creating the storage, manually mount it to test:

It is best to consult the AWS EFS documentation in preference to my notes, but the information below may help.

cd /etc
cp -p fstab fstab.bak

Here is an example of a manual command (the _ID values are shown on the EFS panel where you created the filesystem; in this example, /data is the mount point under /):

mount -t efs -o tls,accesspoint=ACCESS_ID FILE_ID /data

Use df -hT to view the mount. Then unmount it (e.g. umount /data; df -hT) so you can edit fstab and reboot.

Here is an /etc/fstab mount example for Linux2023, where fs-0133a4927dXXXXXXX needs to be your own FILE_ID and fsap-09727507e89bYYYYYYY your own ACCESS_ID:

cd /etc
vi fstab

fs-0133a4927dXXXXXXX /data efs _netdev,tls,accesspoint=fsap-09727507e89bYYYYYYY 0 0

[save and exit]

Check it mounts using:

mount -a
df -hT

Reboot the server at some point to convince yourself it mounts.
Be sure you do this reboot test before adding lots of data to your site, in case something gets corrupted.

 
