EFS, GP3, S3 Storage, S3FS – Amazon AWS

Additional Storage Options – EFS, GP3, S3

Errors in /etc/fstab will likely make your system unbootable. If you experiment with adding or removing mounted file systems, make sure the fstab file is correct, or comment out the entries you have made, before rebooting.

It is best to learn how to mount EFS or GP3 from the Amazon documentation.

EFS gives an immediate response, but file transfer and access are costly. If you need fast response, EFS is good. I have around 8GB of photography files online that I don’t want on GP3 storage, so I place the thumbnails (around 300MB) on GP3 and the rest in EFS.

I have used S3 buckets, but you need to run the s3fs software from GitHub to simulate NFS mounting, and all sorts of issues arise. S3 is slower, but still quite quick, and it is the lowest cost. It is faster than any other cloud service I looked at.

Whatever method you use, there still need to be .tar file backups to your PC, as we are not paying for copies of files on Amazon.

Amazon provides “aws s3” commands which can come in handy. The commands are slow, but fine for simple use; the “aws s3 sync” command is the fastest. I back up the WordPress files and the database using the aws command to a bucket in my local region, for low-cost storage.
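
As a sketch of that kind of backup (the bucket name, paths, and database credentials below are placeholders you would substitute with your own):

#!/bin/sh
# Dump the WordPress database, then copy it and the web files to a local-region bucket.
mysqldump -u wpuser -p'WP_DB_PASSWORD' wordpress > /home/ec2-user/wordpress.sql
aws s3 cp /home/ec2-user/wordpress.sql s3://mybackup.bucket/db/wordpress.sql
aws s3 sync /var/www/html s3://mybackup.bucket/html --delete   # --delete mirrors removals too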

From experience, I see no value in using buckets for CDN storage; a CDN should be set up with a proper method. I also see no value in using S3 buckets for static web hosting or redirection; I would rather find normal methods to do so where possible. Note that there are some domain names Amazon will not process, such as .app domain names.

I like using buckets for larger WordPress files, such as client document downloads or video files.

Another use of S3 is to copy critical personal files from your PC with a free product like Cyberduck.

NOTE: Amazon Linux 2023 has changed this…

For /etc/fstab, use the following line, with no ACCESS_ID any more:

FILE_ID /content efs _netdev,tls 0 0

And for manual mount testing (again, no access point ID; the example uses /content):

mount -t efs fs-0c3cbe6013d70a7de /content

If your files are used infrequently, they can go into EFS with migration to IA (Infrequent Access) after a nominated number of days. When the files are accessed again, they come out of IA storage more quickly than from an S3 bucket. I configure EFS so that accessed IA files go back into standard EFS storage, and then return to IA after the set number of days; this keeps costs lower. But EFS is a premium price, so why not use additional GP3 storage if possible? Note that EFS does not transfer small files (128KB or less) to EFS IA storage.
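
For reference, the IA lifecycle described above can also be set from the command line. A sketch only; the file system ID and the 30-day figure are placeholders for your own values:

aws efs put-lifecycle-configuration \
    --file-system-id fs-0c3cbe6013d70a7de \
    --lifecycle-policies '[{"TransitionToIA":"AFTER_30_DAYS"},{"TransitionToPrimaryStorageClass":"AFTER_1_ACCESS"}]'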

EFS has mount limitations between regions, so you may or may not be able to use it across multiple customers. A good use would be sharing one blacklist.sh script across all your clients.

To set up EFS, simply create a file system with your own settings from the EFS console, then create an access point. Use the AWS documentation to configure Linux – e.g. dnf install amazon-efs-utils (the nfs-common and nfs-utils packages are already installed), create a mount point directory under /, and edit /etc/fstab.

Your EFS storage needs a security group that only allows NFS on port 2049. You create this first, and use it in the EFS configuration from the EFS console.
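
If you prefer the command line, here is a sketch of creating such a group (the group name, VPC ID, returned group ID, and CIDR range are placeholders):

aws ec2 create-security-group --group-name efs-nfs --description "NFS only, for EFS" --vpc-id vpc-0123456789abcdef0
# note the GroupId it returns, then allow NFS (TCP 2049) from your VPC range:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2049 --cidr 172.31.0.0/16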

Then mount as follows (I am using /content as the example):

cd /
mkdir content
chmod 2775 content
(If you want web files here, then: cd /var/www/html; ln -s /content content)
Manually mount:
cd /
mount -t efs -o tls,accesspoint=ACCESS_ID FILE_ID /content
Then df -hT to view it.
Then unmount it: umount /content
Then add it to /etc/fstab:
FILE_ID /content efs _netdev,tls,accesspoint=ACCESS_ID 0 0
mount -a
df -hT

Mounting another GP3 disk is a different configuration. You create a volume (not from a snapshot) and attach it to the instance as, say, /dev/xvdb. The Amazon documentation shows the commands to use and how to edit fstab.
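
The console steps can also be sketched with the CLI (the availability zone, size, and IDs are placeholders; the volume must be created in the same availability zone as the instance):

aws ec2 create-volume --volume-type gp3 --size 2 --availability-zone ap-southeast-2a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdb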

NOTE: s3fs is simply not reliable – you could have it working, and then the next day your site is frozen because the mount has become corrupted.

We use the Amazon “aws” command to access the email S3 bucket.

In a prior learning attempt, I used s3fs (a pseudo NFS mount) to access the email files. It is not reliable, but it can help when developing shell scripts, to quickly observe file behaviour.

Verify “aws” is installed (it should be; if not, do an internet search on installing the aws commands):

aws help

Configure aws for your region:

As part of the prerequisites, you will previously have created an IAM “user” entry for EC2 and S3 bucket access. This holds your public and private keys.

For example, under IAM Users, I have this entry (it may not need all of this, but this works):

xxxxxxx (my user name):
AdministratorAccess (Some configs may previously show “AdminAccess” depending on when you created them on the platform.)
AmazonS3FullAccess
AWSLambda_FullAccess
CloudWatchFullAccess

When creating the user, we are given access keys (public and private). We must never lose these. (We could, but new keys would then have to be created and reconfigured wherever we had used the old ones.)

Then, use these commands for your region and keys:

cd ~
ls -la

[you should not see the folder .aws]

mkdir .aws
vi .aws/config
[add 6 lines only, two of which are blank lines at the end. You must have two blank lines.]
[default]
region=ap-southeast-2
aws_access_key_id=USER_ACCESS_KEY
aws_secret_access_key=USER_ACCESS_PRIVATE_KEY


[save and exit]
[now run the aws configure command and simply press the Enter key after each prompt]

aws configure
AWS Access Key ID [****************LKVG]: 
AWS Secret Access Key [****************Wcae]: 
Default region name [ap-southeast-2]: 
Default output format [None]: 

[Validate that you can access a bucket. Try it with one of your own buckets; in this example, there is already a bucket called domain.au.inbox:]

aws s3 ls s3://domain.au.inbox/

[If the command works, there will be no error trying to access the bucket. We can now proceed with a shell script.]
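
[A couple of other simple commands worth knowing at this point; the object key below is a placeholder:]

aws s3 cp s3://domain.au.inbox/SOME_OBJECT_KEY /home/ec2-user/message.eml    # download one email object
aws s3 ls s3://domain.au.inbox/ --recursive --human-readable --summarize     # list sizes and a total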

Under IAM > Roles, you will previously have made a role (or do so now) that lets your EC2 instance access buckets.
For example:

yyyyyyy (my IAM role name; later it will appear in the IAM roles list as “AWS Service: ec2”):
AdministratorAccess
AmazonS3FullAccess
CloudWatchFullAccess

Now, I am not an expert on IAM, and some of my configurations are historic. I did have an IAM group with Admin and S3 full access, and I made my user part of that group. Anyway, once you have your IAM role configured, make sure it is attached to the instance in the EC2 console:

EC2 (Sydney for Australia) > Actions > Security > Modify IAM Role —-> then add it.
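
If no role is attached yet, the same step can be done from the command line (the instance ID is a placeholder; the instance profile name normally matches the role name when the role was created in the console):

aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=yyyyyyy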

Important note:

When you previously created your email bucket (with public access) in Australia’s region and then added your SES email rule(s), the bucket would have needed permission for SES to write to it.

That is, the permissions tab of your bucket would contain this policy (where domain.au.inbox is replaced with your own bucket name):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSESPuts",
            "Effect": "Allow",
            "Principal": {
                "Service": "ses.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::domain.au.inbox/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceAccount": "839054678433"
                },
                "StringLike": {
                    "AWS:SourceArn": "arn:aws:ses:*"
                }
            }
        }
    ]
}

Errors in /etc/fstab will likely make your system unbootable. If you remove a disk from the instance but fstab still looks for it, you will have a problem.

[ In the example below, after the AWS volume is created and attached to the instance, it is not yet mounted. I would suggest a full stop and start of the instance rather than a reboot, then move on to the operating system commands to mount it. My reference is docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html ]

[ Create a blank volume, say 2GB of GP3 storage, without a snapshot image, in your AWS instance’s region. Wait for it to become “available”, then attach it to your instance from the Volumes menu in the EC2 console. Let us assume you only have /dev/xvda mounted on your instance. Create a file system on the new volume, then use the lsblk command to look at it: ]

mkfs -t xfs /dev/xvdb
lsblk -f

[This example will mount the volume to /data:]

cd /
mkdir data
mount /dev/xvdb /data
cp /etc/fstab /etc/fstab.orig
blkid

[ You will see data like this:
/dev/nvme1n1: UUID="3c4ddbbc-9e14-4b6d-acb2-ea24a389a76a" TYPE="xfs"
/dev/nvme0n1: PTUUID="61856571-99dd-4505-96fc-1dd8820522f6" PTTYPE="gpt"
/dev/nvme0n1p1: LABEL="/" UUID="5d259081-e60a-4c60-8f74-78ad0ee25652" TYPE="xfs" PARTLABEL="Linux" PARTUUID="f257d0ed-ac68-4b9b-8ee1-1f8cbe74e9e6"
/dev/nvme0n1p128: PARTLABEL="BIOS Boot Partition" PARTUUID="eea7b390-ecc8-4dcb-82e7-34131561bc7f"

Use the UUID value with /etc/fstab - use your own values:]

vi /etc/fstab

UUID=3c4ddbbc-9e14-4b6d-acb2-ea24a389a76a  /data  xfs  defaults,nofail  0  2

[Save and exit]

umount /data
mount -a
df
df -hT /dev/xvdb

[ 
You should see data like this:
/dev/nvme1n1   xfs   2.0G   48M  2.0G   3% /data

I would stop and start the instance from scratch to ensure there are no issues. This also confirms that a reboot will work by default.
]

This shows the use of https://mydomain.au/content rather than a subdomain such as https://content.mydomain.au. There is no need for a subdomain when using S3 files.

URLs such as https://openec2.com/content/brisbane-river-northshore/DSD_4626.jpg, served with my own files, are far better for search engines than a fully qualified S3 bucket URL such as https://s3.ap-southeast-2.amazonaws.com/myS3bucketname/brisbane-river-northshore/DSD_4626.jpg.

The URL can also keep its protection, so files are only viewable from the website rather than by directly accessing the true bucket URL.
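
As a minimal sketch of that protection, assuming Apache with mod_rewrite and AllowOverride enabled (the domain and extensions are placeholders), you could place an .htaccess in the content directory:

cat > /var/www/html/content/.htaccess <<'EOF'
# Refuse image/document requests whose referer is not our own site.
# (Empty referers are allowed here; remove the second condition to block those too.)
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^https://(www\.)?mydomain\.au/ [NC]
RewriteCond %{HTTP_REFERER} !^$
RewriteRule \.(jpg|jpeg|png|gif|pdf)$ - [F,NC]
EOF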

The information below shows how to install the s3fs software, the shell script required to mount it, the crontab error checking for the infrequent situations where the “NFS” mount point is corrupt, and examples of how we may use the mount point to access documents or images.

As an aside, all of my archived websites place the wp-content/uploads files onto S3 buckets without any need for scripting.

That is, if we create a bucket, say s3mydomain.au, and create a mount point called /var/content that points to this bucket, we can create a soft link:

/var/www/html/some_archive/wp-content/uploads -> /var/content/some_archive/wp-content/uploads

For calling documents or photographs, we could use:

/var/www/html/content -> /var/content, and from there access subdirectories with images, such as /var/content/brisbane and so on, or
/var/www/html/content/mydocuments -> /var/content/mydocuments.

This gives a sense of what we are doing. There is no configuration of CDN in this case, or use of problematic WordPress plugins that access buckets.
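
As a concrete sketch of the archive example (assuming an archived site called some_archive, and the /var/content mount described below already in place):

# Move the uploads into the bucket via the mount, then link the old path back to it.
mkdir -p /var/content/some_archive/wp-content
mv /var/www/html/some_archive/wp-content/uploads /var/content/some_archive/wp-content/
ln -s /var/content/some_archive/wp-content/uploads /var/www/html/some_archive/wp-content/uploads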

Something to keep in mind: there is a limit on the size of a .tar backup file in an S3 bucket, so if you have thousands of photographs, you create several tar files (see the sketch below). S3 bucket speed is great, and practical for use compared with the other cloud systems I have tested.
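
One way to keep each archive manageable is to split a large tar stream before uploading (the 4G part size, names, and bucket are placeholders):

tar czf - /var/content/photos | split -b 4G - /home/ec2-user/photos.tar.gz.part-
aws s3 cp /home/ec2-user/ s3://mybackup.bucket/photos/ --recursive --exclude "*" --include "photos.tar.gz.part-*"
# restore later with: cat photos.tar.gz.part-* | tar xzf -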

Install the software…

dnf -y install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
cd /home/ec2-user
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh 
./configure --prefix=/usr
make
make install
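
[a quick check that the build and install succeeded:]

s3fs --version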

[let's create a mount point...]

cd /var
mkdir content
chmod 2775 content
cd /var/www/html
ln -s /var/content content
ls -la

Now create an IAM role, with any name such as s3admin.

Give it these permission policies:

AmazonS3FullAccess
AdministratorAccess
CloudFrontFullAccess

I add CloudFront to my own role as I use it elsewhere.

Now add this role to the EC2 instance under the EC2 console: Instances (click on the instance) > Security > Modify IAM Role.

Using the above examples, this is how you mount the bucket:
(You would use your own region, bucket name, and so on.)

s3fs -o iam_role="s3admin" -o use_path_request_style -o url="https://s3-ap-southeast-2.amazonaws.com" -o endpoint=ap-southeast-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache="" s3mydomain.au /var/content

You may notice caching is set to null (use_cache=""). Otherwise, copies of files are made on your instance under /tmp, which is not good!

Then issue the Unix command to view the mount point.
df

You may add a test file to your bucket from the Amazon S3 console, and try viewing it with:

ls /var/content/*

If the mount point gets “twisted up” during your installation, you will have to restart the instance from the instance console.

When you add files to the mount point, simply use Unix commands. For example:

cd /var/content
mkdir documents
chmod 2775 documents
cd /home/ec2-user
mv mydocument /var/content/documents
cd /var/content/documents
chmod 744 mydocument

You can experiment with permissions.

Here is an example crontab entry, to keep a check on the mount point:

15 1 * * 1 s3fs -o iam_role="s3admin" -o use_path_request_style -o url="https://s3-ap-southeast-2.amazonaws.com" -o endpoint=ap-southeast-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache="" s3mydomain.au /var/content >/dev/null 2>&1

Here is a shell script you can use manually, /home/ec2-user/remount.sh, with file permissions of 775:

#!/bin/sh
s3fs -o iam_role="s3admin" -o use_path_request_style -o url="https://s3-ap-southeast-2.amazonaws.com" -o endpoint=ap-southeast-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache="" s3mydomain.au /var/content 
exit

Here is an example of a crontab entry to check that the mount point is not corrupted and, if it is, to remount it and send you a courtesy e-mail via postfix…

5 * * * * /home/ec2-user/mount_error.sh

Now create the shell scripts below. Let’s assume you have a general log file called /home/ec2-user/info.log that your shell scripts can use as required.

cd /home/ec2-user
touch t1.dat
vi /home/ec2-user/mount_error.sh

#!/bin/sh
# Check the s3fs mount; if it is corrupted, remount it and send a courtesy email.
:>/home/ec2-user/t1.dat
a="ok"

# Try the mount again; if the existing mount is corrupted, the attempt
# reports "Transport endpoint is not connected" in its output.
s3fs -o iam_role="s3admin" -o use_path_request_style -o url="https://s3-ap-southeast-2.amazonaws.com" -o endpoint=ap-southeast-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache="" s3mydomain.au /var/content >/home/ec2-user/t1.dat 2>&1
a=`grep "Transport endpoint is not connected" /home/ec2-user/t1.dat|awk -F: {'print $3'}|awk '{print $1}'`
if [ "$a" = "Transport" ]
then
    # Unmount the corrupted mount point, remount it, log the event, and email a notice.
    fusermount -u /var/content
    s3fs -o iam_role="s3admin" -o use_path_request_style -o url="https://s3-ap-southeast-2.amazonaws.com" -o endpoint=ap-southeast-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache="" s3mydomain.au /var/content >/dev/null 2>&1
    d2=`date`
    echo "/var/content error" $d2 >> /home/ec2-user/info.log
    sudo /usr/sbin/postfix start
    sleep 2
    sudo /usr/sbin/postfix reload
    sleep 2
    sudo /usr/sbin/sendmail -f admin@mydomain.au admin@mydomain.au < /home/ec2-user/mount_error.txt
    sleep 5
    sudo /usr/sbin/postfix stop
fi

exit

[save and exit the editor]

vi mount_error.txt
From: admin <admin@mydomain.au>
Subject: mydomain.au s3fs was down
mydomain.au s3fs was down and is now remounted
.

[You must have a single blank line below the full stop above. Save and exit the editor.]
[You would of course use your own domain name and email address, and already have postfix running correctly.]

chown root mount_error.sh mount_error.txt t1.dat
chgrp ec2-user mount_error.sh mount_error.txt t1.dat
chmod 775 mount_error.sh
chmod 777 mount_error.txt t1.dat

[You may test the above script...]

sh -x ./mount_error.sh

We now come to the point of showing examples that use these mount points. This can be quite detailed and depends on your own programming skills.

Example 1 – list documents.

[Use your own font family, domain name, and icon images. I will supply the images as well for you. We will call this script from PHP, so we use single quotes in many places rather than double quotes.]

cd /var/www/html
vi s3ls.sh

#!/bin/sh
# s3ls.sh - list the files under /var/content/$1 as HTML links with file-type icons.
IFS=$'\n'   # split the ls output on newlines only, so file names may contain spaces
font="Open&#32;Sans"
cd /var/content/$1
for a in `ls -l --time-style=+"%d %m %Y"`
do
p="                      "
b=`echo $a|awk '{print $5}'`
bt=`echo bytes: $b$p|cut -c 1-17`
bz=`echo $bt|awk '{print $2}'`
if [ "$bz" == "0" ] || [ "$bz" == "" ]
then
:
else
# fields 9 onwards are the file name (names may contain spaces); trim and squeeze blanks
file=`echo $a|awk '{print $9" "$10" "$11" "$12" "$13" "$14" "$15}'|sed 's/^ *//;s/ *$//;s/  */ /;'`
e=""
e=`echo $file|awk -F. '{print $NF}'`   # the file extension selects the icon below

case $e in
pdf)
ext="<img src=https://mydomain.au/wp-content/uploads/adobe-reader.jpg width=20 height=20 alt=mydomain.au />"
;;
docx|doc)
ext="<img src=https://mydomain.au/wp-content/uploads/msword.jpg width=20 height=20 alt=mydomain.au />"
;;
xlsx|xls)
ext="<img src=https://mydomain.au/wp-content/uploads/excel-icon.jpg width=20 height=20 alt=mydomain.au />"
;;
pptx|ppt)
ext="<img src=https://mydomain.au/wp-content/uploads/ppt-icon.jpg width=20 height=20 alt=mydomain.au />"
;;
png|jpg)
ext="<img src=https://mydomain.au/wp-content/uploads/media-icon.jpg width=20 height=20 alt=mydomain.au />"
;;
mp3|aac|wav|m4a|mpeg|ogg)
ext="<img src=https://mydomain.au/wp-content/uploads/audio-icon.jpg width=20 height=20 alt=mydomain.au />"
;;
mov|avi|mp4|mpeg4|3gp|3gpp|3gpp2|3gp2|quicktime)
ext="<img src=https://mydomain.au/wp-content/uploads/movie-icon.jpg width=20 height=20 alt=mydomain.au />"
;;
*)
ext="<img src=https://mydomain.au/wp-content/uploads/file-icon.jpg width=20 height=20 alt=mydomain.au />"
;;
esac

pre="<img src=https://openec2.com/wp-content/uploads/null-icon.jpg width=0 height=0 alt=axoncpn />"
d1=`echo $a|awk  '{print $6}'`
d2=`echo $a|awk  '{print $7}'`
d3=`echo $a|awk  '{print $8}'`
dt="date: $d1 $d2 $d3"
# echo $dt $a
echo "$dt $bt $ext file: <a href='https://mydomain.au/content/$1/$file' target=_blank><span style=text-decoration:underline;color:#444;font-family:$font;>$pre$file</span></a>"
fi
done
exit

I create a PHP shortcode with the following code. I use the plugin ‘Add Shortcodes Actions And Filters’ by Michael Simpson; other plugins do the same thing. Let’s call the shortcode ‘s3listfiles’.
This plugin needs you to tick the boxes: Activated, Shortcode, and Code echoes output.

$place = "$atts[opt]";
$command = "/var/www/html/s3ls.sh " . escapeshellarg($place) . " 2>/dev/null"; // escapeshellarg keeps the option safe to pass to the shell
$output = shell_exec($command);
echo "$output";

You will notice there is only one echo output.
Once this is done, you call the shortcode on a WordPress page.
For example: [s3listfiles opt="mydocuments"], where you place the square brackets around the code.
We have only programmed for one subdirectory below the bucket’s root directory. So you could have directories like personal, public, projectX, projectY and so on, but not /var/content/public/projectX. Rather, you would have /var/content/projectX.

I format the shortcode output into a table. For example:

<div style="background-color: #d6d6d6; padding-left: 20px; padding-right: 20px;">
<table style="text-align: left; background-color: #d6d6d6;">
<thead>
<tr>
<th style="color: #333; text-align: left; padding-top: 20px;" align="left">My Documents</th>
</tr>
</thead>
<tbody>
<tr style="text-align: left;">
<td style="color: #444; text-align: left; font-family: Courier, Times,sans-serif;" align="left">
<pre>[s3listfiles opt="mydocuments"]</pre>
</td>
</tr>
</tbody>
</table>
</div>

The icons I used can be downloaded here: openec2.com/wp-content/uploads/s3ls_icons.zip
I refrain from giving further examples due to the complexities involved.