Serverless static blog using AWS and Hugo.

TL;DR

  1. Hugo is a static website engine.
  2. You can use it to produce your blog, which compiles down to a bunch of static files.
  3. Storing files is what AWS S3 does!
  4. This article guides you through using the AWS CLI to combine S3, CloudFront and Route53 to get a blog / site / portfolio / whatever up and running.
  5. This will literally take you less than 15 minutes.
  6. This will most likely cost less than a Starbucks super-duper latte to host per month (or per year if it’s not very popular)!

Intro

AWS gives some good advice on hosting a static website on AWS. But with modern static website engines we can take this even further!

Hugo is a great modern static website engine. What does that mean exactly? Well, rather than running your blog with a dynamic backend such as WordPress that generates HTML dynamically, per request, a static website engine produces the HTML when you actually write and publish content, leaving you with simple static HTML, images, JS files, etc.

This greatly simplifies the infrastructure needed to support a “dynamic” blog or website, especially something as simple as a blog or company website where realistically the content is not that dynamic and is generally only edited by a few trusted individuals.

There are a few static website engines around. I’ve chosen Hugo based on a few criteria:

  • It’s simple to understand, you can edit offline, and deployment takes ~30 seconds!
  • It literally installs in a minute (using brew, and why wouldn’t you?)
  • It has a number of community-supported themes to choose from
  • You write posts in simple Markdown, which is very easy to understand (see the example after this list)
  • It’s written in Go, and I love Go :-)
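
For example, a minimal post (TOML front matter plus content) might look like the sketch below; the title, date and tags are purely illustrative:

+++
title = "I Like Blogs"
date = "2016-05-01"
tags = ["blogging"]
+++

Everything below the front matter is plain Markdown: **bold**, *italics*,
[links](https://gohugo.io) and so on.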

Cost

If you’re on the AWS free tier (free for a year), please skip this section. All the costs of this setup are FREE for you!

The actual fixed cost of this setup is literally less than 1 cent. S3 costs $0.03 per GB per month, and your blog is unlikely to go above 100MB, so storage costs are just not something to factor in.

The traffic costs will vary based on how popular you are! CloudFront charges ~8.5c per GB of data transferred and ~0.75c per 10,000 HTTP requests.

Let’s presume you might get 50K visits per month. This blog is around 700K in size across all its resources and takes around 13 requests to load (as measured in Chrome), so let’s round up and say that each visit delivers 1MB of data across 20 HTTP requests.

Cost Calculation:

Storage Cost = $0.01
Data Cost = ( (50,000 x 1) / 1024 ) x $0.085 = $4.15
Request Cost = ( (50,000 x 20) / 10,000 ) x $0.0075 = $0.75

Total cost: $4.91 (per month)

So, less than $5 a month for quite a heavily viewed blog! And this isn’t even accounting for the fact that most browsers cache resources, so I would definitely expect to pay less than this.

Setup

OK, so first: this guide is going to be command-line based. Windows / Linux / Mac, it doesn’t matter, all are equal (apart from Windows, clearly) when dealing with the command line. Don’t be scared, it’s easy, I promise. But first you are going to need to:

  1. Install the AWS CLI - I very much recommend aws-shell, as it supercharges the AWS CLI!
  2. Create your FREE AWS account (if you don’t already have one) and get your access key ID and secret access key
  3. Install Hugo

For those with Homebrew, this is as simple as:

brew install awscli
brew install aws-shell
brew install hugo

For the AWS commands you will be running, you will need a couple of JSON files. I’ve created gists containing examples that can be used as good templates. Download the Cloudfront Template, the S3 Policy Template and finally the Route53 Template.

Start site

OK, we have the tools. Next up, follow Hugo’s excellent quick start to create your site and first post. From zero to “something” shouldn’t take more than a couple of minutes (of course, it then takes “hours” to choose just-the-right images and look, as we’re all super designers at heart).
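
For the impatient, the quick start boils down to something like the following sketch (the theme name and clone URL are placeholders, substitute your chosen theme):

hugo new site myblog
cd myblog
git clone https://github.com/<theme-author>/<my-super-theme> themes/<my-super-theme>
hugo new post/ilikeblogs.md
hugo server --theme=<my-super-theme> --buildDrafts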

Make sure you run a final hugo --theme=<my-super-theme> to produce the public site, ready for upload to S3.

One caveat to hosting server-less on S3 (using CloudFront) is that CloudFront does not support “directory” default files; read more about it here. This means we need to tell Hugo to use slightly more ugly URLs for blog posts. To do this, alter your config.toml and replace the “post” line within permalinks with:

       post = "/:filename/index.html"

This will turn blog post URLs from http://myblog.mysite.com/post/ilikeblogs/ to http://myblog.mysite.com/post/ilikeblogs/index.html, which is a little ugly but will work through CloudFront. I’m still looking into getting directory-style links to work through CloudFront, but for now this works.
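
For context, the relevant section of config.toml then looks something like this:

[permalinks]
    post = "/:filename/index.html"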

Setup AWS hosting

Alright, now comes the interesting bit. Let’s use aws-shell to create our site’s S3 bucket and a separate logging bucket. Change the region to fit your needs. You’ll notice we keep the buckets private; this is because the S3 buckets themselves are not going to be exposed directly to the internet, we will route through CloudFront instead.

aws-shell
s3api create-bucket --bucket "myblog" --acl private --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
s3api create-bucket --bucket "myblog-logs" --acl private --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
s3api put-bucket-website --bucket myblog --website-configuration file://s3website.json

The final command gives the bucket a website configuration; mainly we need this so that accessing the root of the bucket, i.e. “/”, serves “/index.html” instead.
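
If you would rather write s3website.json yourself instead of downloading the gist, a minimal website configuration looks something like this (the 404.html error document is my assumption, point it at whatever error page your theme generates):

{
    "IndexDocument": { "Suffix": "index.html" },
    "ErrorDocument": { "Key": "404.html" }
}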

Now let’s allow the AWS LogDelivery group to read from our main site bucket, and write to our logging bucket.

s3api put-bucket-acl --bucket myblog --grant-read URI=http://acs.amazonaws.com/groups/s3/LogDelivery
s3api put-bucket-acl --bucket myblog-logs --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery

Now let’s enable logging on our main site’s bucket.

s3api put-bucket-logging --bucket myblog --bucket-logging-status '{ "LoggingEnabled": { "TargetBucket": "myblog-logs", "TargetPrefix": "s3_logs/" }}'

Alright, so now we have our main site’s bucket set up. Any access to it will write log files into our logging bucket (just like a regular website). So now it’s time to sync our public Hugo directory to our site’s bucket:

s3 sync ./public s3://myblog

Well, that was super hard, wasn’t it! :-) If we wanted to, we could leave it at that, make our bucket public and let our users access the site directly through the S3 bucket. But let’s not do that. For super speedy access, let’s use CloudFront, AWS’s CDN (Content Delivery Network), so our site can be served from edge servers around the world!

So first we need to enable CloudFront beta API access:

configure set preview.cloudfront true

Now let’s set up a CloudFront origin access identity for the site. This returns a unique user ID, which we then need to use to grant CloudFront access to our bucket (so keep the response handy):

cloudfront create-cloud-front-origin-access-identity --cloud-front-origin-access-identity-config CallerReference="myblog_identity",Comment="Cloudfront Ident for myblog"

This should return something like:

{
    "CloudFrontOriginAccessIdentity": {
        "CloudFrontOriginAccessIdentityConfig": {
            "Comment": "Cloudfront Ident for  my blog",
            "CallerReference": "myblog_identity"
        },
        "S3CanonicalUserId": "XXXXXXXXXXXXXXXXX",
        "Id": "YYYYYYYYYYYYYYY"
    },
    "ETag": "ZZZZZZZZZZZZZZZZ",
    "Location": "https://cloudfront.amazonaws.com/2016-01-28/origin-access-identity/cloudfront/YYYYYYYYYYYYYYY"
}

You will need to keep the S3CanonicalUserId and Id fields handy, as we will use them to grant CloudFront access to the bucket later on. For now, let’s create our CloudFront web distribution, specifying our site’s S3 bucket as the origin (backend):

cloudfront create-distribution --distribution-config file://cloudfront.json
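
For reference, the cloudfront.json template looks roughly like the sketch below (a minimal config; the alias, bucket names and the YYYYYYYYYYYYYYY identity Id from the previous step are placeholders you need to fill in):

{
    "CallerReference": "myblog_distribution",
    "Aliases": { "Quantity": 1, "Items": [ "myblog.mysite.com" ] },
    "DefaultRootObject": "index.html",
    "Origins": {
        "Quantity": 1,
        "Items": [ {
            "Id": "S3-myblog",
            "DomainName": "myblog.s3.amazonaws.com",
            "S3OriginConfig": {
                "OriginAccessIdentity": "origin-access-identity/cloudfront/YYYYYYYYYYYYYYY"
            }
        } ]
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "S3-myblog",
        "ForwardedValues": {
            "QueryString": false,
            "Cookies": { "Forward": "none" }
        },
        "TrustedSigners": { "Enabled": false, "Quantity": 0 },
        "ViewerProtocolPolicy": "allow-all",
        "MinTTL": 3600
    },
    "Comment": "CDN for myblog",
    "Logging": {
        "Enabled": true,
        "IncludeCookies": false,
        "Bucket": "myblog-logs.s3.amazonaws.com",
        "Prefix": "cf_logs/"
    },
    "Enabled": true
}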

That command returns a huge piece of JSON. Check the “Status” field (it should be “InProgress”), and make sure to capture the “Id”; we will use it to confirm our distribution has been set up.

cloudfront get-distribution --id <INSERT_YOUR_ID_HERE>

Now, this takes a while to set up (it has a lot of edge servers to configure :-). So in the meantime, let’s grant our CloudFront CDN access to both S3 buckets:

s3api put-bucket-acl --bucket myblog --grant-read id=<insert_your_S3CanonicalUserId_here> --grant-read URI=http://acs.amazonaws.com/groups/s3/LogDelivery
s3api put-bucket-policy --bucket myblog --policy file://s3bucketpolicy.json

The first command grants access to the bucket itself; the second adds a policy allowing read access to any object within the bucket.
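
If you are curious what s3bucketpolicy.json contains, it is essentially a read-only grant to the CloudFront identity; something along these lines (substitute your own bucket name and canonical user ID):

{
    "Version": "2012-10-17",
    "Statement": [ {
        "Sid": "CloudfrontReadAccess",
        "Effect": "Allow",
        "Principal": { "CanonicalUser": "<insert_your_S3CanonicalUserId_here>" },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::myblog/*"
    } ]
}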

OK, now we’re on the home stretch: final stop, Route 53, to configure a CNAME DNS record for the domain we’ve chosen for our site. Let’s grab the CDN domain name that CloudFront has given us:

cloudfront get-distribution --id <insert_your_id_here>

Look for the field “DomainName”. Now, if you haven’t used Route53 before, you’re going to have to set up your custom domain from scratch, including registering the domain, setting the nameserver records, etc. I’m going to presume you have either done that or know how to google. In which case, we just need to add a CNAME record for our site to our existing hosted zone:

route53 change-resource-record-sets --hosted-zone-id ZZZZZZZZZZZZZ --change-batch file://route53.json
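
The route53.json change batch itself is short; a sketch with placeholder values (point the CNAME at the DomainName you grabbed above):

{
    "Comment": "CNAME for myblog",
    "Changes": [ {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "myblog.mysite.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [ { "Value": "<insert_your_cloudfront_DomainName_here>" } ]
        }
    } ]
}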

Now we just need to wait for the CloudFront CDN setup to complete (keep issuing those get-distribution commands), then fire up your browser and visit your newly set-up site. Any time you want to update your site’s content, the only thing to do is re-sync the public Hugo folder to the bucket, so re-issue:

s3 sync ./public s3://myblog

It should literally take seconds; just remember that CloudFront caches your content, so it may take some time for your updated content to become visible.
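
As an aside, if you don’t want to wait for the caches to expire naturally, CloudFront also lets you force a refresh with an invalidation (note that AWS only includes a limited number of free invalidation paths per month, beyond which each path is charged):

cloudfront create-invalidation --distribution-id <insert_your_id_here> --paths "/*"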
