Setting up AWS resources to host a Hugo blog

Posted June 14, 2022 by Jacob Sauni ‐ 5 min read

I had a go at getting existing AWS resources into Terraform. A few errors and learnings along the way.

Current state + Goals

So I had the resources in AWS already.

  • Domain, jsauni.com, registered in Route53.
  • S3 buckets for hosting the blog files.
  • CloudFront distribution (required for https).

But I wanted to implement the following.

  • Manage resources with code using Terraform.
  • Move resources to ap-southeast-2 region (all of them were in us-east-1).

Get resources into Terraform

EZ. Use terraform import.

Terraform is able to import existing infrastructure. This allows you to take resources you’ve created by some other means and bring them under Terraform management.

I created the resources in Terraform files, split by resource type, e.g. s3.tf and cloudfront.tf.

resource "aws_s3_bucket" "jsauni" { ... }
resource "aws_s3_bucket_website_configuration" "jsauni" { ... }
resource "aws_s3_bucket_acl" "jsauni" { ... }
resource "aws_cloudfront_distribution" "jsauni" { ... }

I created the above resources once for jsauni.com and a second time for www.jsauni.com. Not important here, but www.jsauni.com is set up to redirect to jsauni.com.
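
For the redirect, the www bucket’s website configuration can be as simple as this. It’s a sketch, and the www_jsauni resource names are hypothetical:

resource "aws_s3_bucket_website_configuration" "www_jsauni" {
  bucket = aws_s3_bucket.www_jsauni.bucket

  # Send every request to the apex domain
  redirect_all_requests_to {
    host_name = "jsauni.com"
    protocol  = "https"
  }
}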

I then imported the resources, which is pretty straightforward. Not every resource’s terraform import syntax is the same, so I usually referenced the docs to find out. This is what I used for importing the S3 bucket that hosts the blog files.

terraform import aws_s3_bucket.jsauni jsauni.com
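
For comparison, CloudFront distributions are imported by distribution ID rather than by name, something like:

terraform import aws_cloudfront_distribution.jsauni <DISTRIBUTION_ID>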

Move resources to ap-southeast-2

Sweet, so now I have resources being managed by Terraform. One of the things I wanted to do was move the resources to the ap-southeast-2 region, for the simple reason that it’s the closest AWS region to me.

So my thought process to change the regions was:

  1. Run terraform destroy to delete the resources
  2. Update the AWS region in the Terraform AWS provider settings to ap-southeast-2 (see the sketch after this list)
  3. Run terraform apply to recreate the resources in new region
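
The provider change in step 2 is just a couple of lines, something like:

provider "aws" {
  region = "ap-southeast-2"
}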

Seemed legit. So I ran terraform destroy, then ran into my first error.

Error: deleting S3 Bucket (jsauni.com): BucketNotEmpty: The bucket you tried to delete is not empty
...

Oops. So I manually deleted the objects in the S3 bucket via the AWS console. I reran the terraform destroy command and it completed successfully. Cool.

I then updated the AWS region to ap-southeast-2. Then I ran a terraform plan, everything looked good, so I ran terraform apply. Then I waited for the resources to be created. And waited. And waited. 55 minutes later and my AWS token had expired. What the?

I did a quick Google search and found a StackOverflow thread mentioning that it takes about an hour before the bucket name becomes available again. The only official documentation/comment I could find on this was “some time might pass before you can reuse the name of a deleted bucket”.

There was also an error about releasing the terraform state lock. I won’t go into what state locking is but you can read more about it here. I actually didn’t notice this error at first, so I re-authenticated against AWS to get a new token and tried to run terraform apply again, which failed.

This time I read the error message properly. Because my initial terraform apply was still in progress when my AWS token expired, the lock it put on state was never released when the command errored out. So I needed to release the lock. Trivial issue, and the Terraform docs had me covered: I had to run terraform force-unlock <LOCK_ID>, with the LOCK_ID taken from the terraform apply output. Once the lock was released, terraform apply ran all good. Now the S3 buckets and CloudFront distributions were created in the ap-southeast-2 region.
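
For the record, the recovery boiled down to:

terraform force-unlock <LOCK_ID>
terraform apply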

Configure S3 for website hosting

There are a few more Terraform resources you need to create in order to get S3 serving your static website.

First is to add an aws_s3_bucket_website_configuration resource for your S3 bucket.

resource "aws_s3_bucket_website_configuration" "jsauni" {
  bucket = aws_s3_bucket.jsauni.bucket

  index_document {
    suffix = "index.html"
  }
}

Once you’ve got this in place, you’ll have an S3 bucket website endpoint. But when I tried to hit this endpoint, it returned a 403. For the index.html file, the AWS documentation (which I’ll link to at the bottom of this post) provides some dummy code to create one. I manually uploaded this file to my S3 bucket via the AWS console.
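
If you’d rather skip the console, the same upload can be done with the AWS CLI, something like:

aws s3 cp index.html s3://jsauni.com/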

In order to get the S3 bucket serving the index.html file successfully, I had to create the following resources.

resource "aws_s3_bucket_public_access_block" "jsauni" { ... }
resource "aws_s3_bucket_policy" "public_read" { ... }
data "aws_iam_policy_document" "public_read" { .. }

With the bucket policy and policy document in place granting read access to the S3 bucket, accessing the S3 bucket endpoint was working successfully.

CloudFront Distribution for https

Now this part is optional for static site hosting in AWS S3, but if you want to use https, this is what the AWS docs recommend. I had to create a CloudFront distribution per bucket. Once the CloudFront distributions were in, I just needed to update my domain’s A records to point to the new CloudFront distribution domain names.
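
For reference, a minimal aws_cloudfront_distribution pointing at an S3 website endpoint looks roughly like this. It’s a sketch, the origin ID, certificate ARN, and cache settings are assumptions, and since S3 website endpoints only speak HTTP the origin has to be http-only:

resource "aws_cloudfront_distribution" "jsauni" {
  enabled             = true
  aliases             = ["jsauni.com"]
  default_root_object = "index.html"

  origin {
    # S3 website endpoints are HTTP-only, so use a custom origin
    domain_name = aws_s3_bucket_website_configuration.jsauni.website_endpoint
    origin_id   = "jsauni-s3-website"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "jsauni-s3-website"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    # CloudFront requires the ACM certificate to live in us-east-1
    acm_certificate_arn = "<ACM_CERTIFICATE_ARN>"
    ssl_support_method  = "sni-only"
  }
}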

Now when I visit https://jsauni.com, this serves the index.html file. Nice.

Future improvements

Overall I was pretty happy to get this sorted over the weekend. I got some reps in with Terraform, which was cool, and I can now easily deploy changes to AWS when I need to. There are also some improvements I can think of that I might look at at a later date.

  • Manage the domain via Terraform.
  • Move the domain and certificates to the ap-southeast-2 region.
  • Automate code deployment with GitHub Actions.

References