Shipping to Production: Terraform, Caddy, and the DNSSEC Disaster
From localhost to the Internet
In the first article, I built this blog from scratch: SvelteKit for the frontend, Rust/Axum for the backend, SQLite for the database, all wired together with Docker Compose. By the end of that session, docker compose up gave me a fully working blog on localhost. Articles rendered, auth worked, images uploaded. Ship it.
But "works on my machine" is not a blog. It is a Docker Compose file on a laptop. The next step was getting it onto the actual internet, at nige.wtf, with real TLS certificates and a real domain. The plan: a single EC2 instance in ap-southeast-2, Terraform for all the infrastructure, Caddy for automatic HTTPS, ECR for Docker images, and Route 53 for DNS, with Claude continuing as copilot for the infrastructure work.
This should be straightforward. Famous last words.
The Architecture
The production topology is deliberately simple. One box, three containers, one reverse proxy:
Internet → Route 53 → Elastic IP → EC2 t4g.micro → Caddy :443
├── nige.wtf → frontend:3000
└── api.nige.wtf → backend:8080
Every architectural decision here optimises for one thing: simplicity. This is a personal blog, not a distributed system. I want the cheapest, smallest, most boring setup that gets HTML onto the internet with TLS.
The t4g.micro is the smallest current-gen instance AWS offers at roughly $6/month. It has 1GB of RAM, which is plenty for three Docker containers — the Rust backend uses about 5MB, SvelteKit around 40MB, and Caddy about 30MB. The ARM Graviton processor is a bonus: since I develop on Apple Silicon, docker build produces native ARM images with zero emulation overhead. No cross-compilation, no QEMU, no waiting.
I chose Caddy over Nginx for one reason: automatic HTTPS. With Nginx, you need certbot, cron jobs, renewal hooks, and a dozen lines of SSL configuration. With Caddy, you declare the domain name and it handles everything — certificate provisioning, renewal, OCSP stapling, HTTP-to-HTTPS redirects. The entire reverse proxy config for two domains is six lines.
ECR over building on EC2 keeps the instance clean. No build tools, no source code, no compilation on a 1GB machine. Build locally (or in CI), push the images, pull on the server. Lifecycle policies keep only the last 5 images per repo so storage stays negligible.
Route 53 via Terraform means DNS records live in code alongside everything else. Point the NS records at the registrar once, manage everything else with terraform apply. And I am using local Terraform state — this is a solo project, the state file sits on my machine, and I can migrate to an S3 backend later if I ever need to.
The total monthly cost is almost comically low:
| Resource | Monthly Cost |
|---|---|
| EC2 t4g.micro | ~$6.13 |
| Elastic IP (attached) | ~$3.60 |
| Route 53 hosted zone | $0.50 |
| ECR storage | ~$0.10 |
| Data transfer | ~$0.50 |
| Total | ~$11/month |
Eleven dollars a month for a server-rendered blog with a Rust backend. I have spent more on a single lunch.
The Terraform
The entire infrastructure is defined in about 200 lines of Terraform spread across six files. I will walk through each one. If you are building something similar, you should be able to take these files, change the domain name, and have a working deployment.
Provider Configuration
The provider block is standard. Pin the AWS provider to the 5.x series, include the TLS provider for SSH key generation, and the local provider for writing the private key to disk:
terraform {
required_version = ">= 1.5"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
tls = {
source = "hashicorp/tls"
version = "~> 4.0"
}
local = {
source = "hashicorp/local"
version = "~> 2.0"
}
}
}
provider "aws" {
region = var.region
}
ECR Repositories
Two ECR repositories, one for each service. The names are derived from the domain name with dots replaced by hyphens, so nige.wtf becomes nige-wtf-frontend and nige-wtf-backend:
locals {
domain_hyphenated = replace(var.domain_name, ".", "-")
}
resource "aws_ecr_repository" "frontend" {
name = "${local.domain_hyphenated}-frontend"
image_tag_mutability = "MUTABLE"
force_delete = true
}
resource "aws_ecr_repository" "backend" {
name = "${local.domain_hyphenated}-backend"
image_tag_mutability = "MUTABLE"
force_delete = true
}
Lifecycle policies keep storage costs near zero by expiring old images:
resource "aws_ecr_lifecycle_policy" "frontend" {
repository = aws_ecr_repository.frontend.name
policy = jsonencode({
rules = [
{
rulePriority = 1
description = "Keep only the last 5 images"
selection = {
tagStatus = "any"
countType = "imageCountMoreThan"
countNumber = 5
}
action = {
type = "expire"
}
}
]
})
}
The EC2 instance needs permission to pull images from these repositories. The IAM policy is scoped to just the two repos — the instance can authenticate with ECR and pull images, but nothing else:
resource "aws_iam_role" "ec2_ecr" {
name = "${local.domain_hyphenated}-ec2-ecr"
assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json
}
data "aws_iam_policy_document" "ecr_pull" {
statement {
sid = "AllowAuthToken"
actions = ["ecr:GetAuthorizationToken"]
resources = ["*"]
}
statement {
sid = "AllowPullImages"
actions = [
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchCheckLayerAvailability",
]
resources = [
aws_ecr_repository.frontend.arn,
aws_ecr_repository.backend.arn,
]
}
}
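The snippet above references an assume-role policy document and an instance profile that are defined elsewhere. For completeness, a minimal sketch of those missing pieces — assuming the resource names used in the rest of this post — would look like:

```hcl
# Lets EC2 instances assume the role via the instance metadata service
data "aws_iam_policy_document" "ec2_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

# Attach the scoped pull policy to the role
resource "aws_iam_role_policy" "ecr_pull" {
  name   = "${local.domain_hyphenated}-ecr-pull"
  role   = aws_iam_role.ec2_ecr.id
  policy = data.aws_iam_policy_document.ecr_pull.json
}

# The profile the instance actually references
resource "aws_iam_instance_profile" "ec2_ecr" {
  name = "${local.domain_hyphenated}-ec2-ecr"
  role = aws_iam_role.ec2_ecr.name
}
```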
Networking
The security group is minimal. SSH is restricted to the ssh_allowed_cidrs variable so you can lock it to your IP. HTTP and HTTPS are open to the world — Caddy needs port 80 for Let's Encrypt HTTP-01 challenges and automatic redirects, and port 443 for serving traffic. Everything runs in the default VPC to avoid unnecessary networking complexity:
data "aws_vpc" "default" {
default = true
}
resource "aws_security_group" "nigewtf" {
name = "nigewtf-sg"
description = "Security group for the nigewtf blog instance"
vpc_id = data.aws_vpc.default.id
}
resource "aws_vpc_security_group_ingress_rule" "ssh" {
for_each = toset(var.ssh_allowed_cidrs)
security_group_id = aws_security_group.nigewtf.id
description = "SSH access"
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr_ipv4 = each.value
}
resource "aws_vpc_security_group_ingress_rule" "http" {
security_group_id = aws_security_group.nigewtf.id
from_port = 80
to_port = 80
ip_protocol = "tcp"
cidr_ipv4 = "0.0.0.0/0"
}
resource "aws_vpc_security_group_ingress_rule" "https" {
security_group_id = aws_security_group.nigewtf.id
from_port = 443
to_port = 443
ip_protocol = "tcp"
cidr_ipv4 = "0.0.0.0/0"
}
resource "aws_vpc_security_group_egress_rule" "all_outbound" {
security_group_id = aws_security_group.nigewtf.id
description = "Allow all outbound traffic"
ip_protocol = "-1"
cidr_ipv4 = "0.0.0.0/0"
}
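The ssh_allowed_cidrs variable itself is not shown here; in terraform.tfvars it might be locked to a single address like so (203.0.113.7 is a documentation-range placeholder, not a real address):

```hcl
ssh_allowed_cidrs = ["203.0.113.7/32"]
```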
EC2 Instance
This is the main event. The AMI data source finds the latest Amazon Linux 2023 ARM image automatically, so the instance always launches on a current image without hardcoding AMI IDs:
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["al2023-ami-*-arm64"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
The SSH key pair is generated entirely by Terraform — no manual key management, no copying public keys around. The private key is written to disk with 0600 permissions. (It also ends up in plaintext in the Terraform state file, which is one more reason to keep local state off shared machines.)
resource "tls_private_key" "nigewtf" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "nigewtf" {
key_name = "nigewtf-key"
public_key = tls_private_key.nigewtf.public_key_openssh
}
resource "local_file" "private_key" {
content = tls_private_key.nigewtf.private_key_pem
filename = "${path.module}/nigewtf-key.pem"
file_permission = "0600"
}
The instance itself uses a templatefile to inject all configuration — domain name, ECR URIs, secrets — into the user-data bootstrap script at plan time. A 30GB gp3 root volume gives plenty of space for Docker images and the SQLite database. The Elastic IP ensures the instance keeps the same public address across stop/start cycles:
resource "aws_instance" "nigewtf" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
key_name = aws_key_pair.nigewtf.key_name
vpc_security_group_ids = [aws_security_group.nigewtf.id]
iam_instance_profile = aws_iam_instance_profile.ec2_ecr.name
user_data = templatefile("${path.module}/templates/user-data.sh", {
github_client_id = var.github_client_id
github_client_secret = var.github_client_secret
allowed_github_id = var.allowed_github_id
jwt_secret = var.jwt_secret
domain_name = var.domain_name
region = var.region
ecr_frontend_uri = aws_ecr_repository.frontend.repository_url
ecr_backend_uri = aws_ecr_repository.backend.repository_url
})
root_block_device {
volume_size = 30
volume_type = "gp3"
}
tags = {
Name = "nigewtf-blog"
}
}
resource "aws_eip" "nigewtf" {
instance = aws_instance.nigewtf.id
domain = "vpc"
}
User-Data Bootstrap
This is the most interesting part of the infrastructure. A single bash script that turns a blank Amazon Linux instance into a running blog. Terraform's templatefile substitutes the variables — domain name, ECR URIs, secrets — at plan time, so the script is fully self-contained when it runs:
#!/bin/bash
set -euxo pipefail
# Install Docker
dnf install -y docker
systemctl enable docker
systemctl start docker
# Install Docker Compose plugin
mkdir -p /usr/local/lib/docker/cli-plugins
curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-aarch64" -o /usr/local/lib/docker/cli-plugins/docker-compose
chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
# Create app directory
mkdir -p /opt/nigewtf
# Write .env file
cat > /opt/nigewtf/.env << 'ENVEOF'
GITHUB_CLIENT_ID=${github_client_id}
GITHUB_CLIENT_SECRET=${github_client_secret}
ALLOWED_GITHUB_ID=${allowed_github_id}
JWT_SECRET=${jwt_secret}
DATABASE_URL=sqlite:///app/data/blog.db
UPLOAD_DIR=/app/uploads
FRONTEND_URL=https://${domain_name}/auth/github/callback
RUST_LOG=info
ORIGIN=https://${domain_name}
API_URL=http://backend:8080
PUBLIC_API_URL=https://api.${domain_name}
PUBLIC_SITE_NAME=nigewtf
PUBLIC_SITE_URL=https://${domain_name}
CORS_ORIGINS=https://${domain_name}
ENVEOF
# Write Caddyfile
cat > /opt/nigewtf/Caddyfile << 'CADDYEOF'
${domain_name} {
reverse_proxy frontend:3000
}
api.${domain_name} {
reverse_proxy backend:8080
}
CADDYEOF
# Write docker-compose.prod.yml
cat > /opt/nigewtf/docker-compose.yml << COMPOSEEOF
services:
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
frontend:
image: ${ecr_frontend_uri}:latest
restart: unless-stopped
env_file: .env
depends_on:
- backend
backend:
image: ${ecr_backend_uri}:latest
restart: unless-stopped
env_file: .env
volumes:
- db-data:/app/data
- uploads:/app/uploads
volumes:
caddy_data:
caddy_config:
db-data:
uploads:
COMPOSEEOF
# Log into ECR and pull images
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REGION=${region}
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com
# Start services
cd /opt/nigewtf
docker compose pull
docker compose up -d
Look at that Caddyfile. Six lines. Two domain blocks, two reverse_proxy directives. Caddy reads those domain names, provisions Let's Encrypt certificates for both, sets up HTTP-to-HTTPS redirects, enables OCSP stapling, and starts serving traffic. Compare that to the Nginx + certbot equivalent and you will understand why I chose Caddy.
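For contrast, here is roughly what the Nginx side of that comparison looks like — a minimal sketch, assuming certbot has already provisioned certificates under /etc/letsencrypt and handles renewal with its own timer:

```nginx
server {
    listen 80;
    server_name nige.wtf api.nige.wtf;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name nige.wtf;
    ssl_certificate     /etc/letsencrypt/live/nige.wtf/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nige.wtf/privkey.pem;
    location / {
        proxy_pass http://frontend:3000;
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name api.nige.wtf;
    ssl_certificate     /etc/letsencrypt/live/nige.wtf/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nige.wtf/privkey.pem;
    location / {
        proxy_pass http://backend:8080;
        proxy_set_header Host $host;
    }
}
```

And that still does not include installing certbot, requesting the initial certificates, or wiring up the renewal hook.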
Route 53
DNS is two A records pointing both domains at the same Elastic IP. Caddy handles the virtual hosting based on the hostname in the request:
resource "aws_route53_zone" "main" {
name = var.domain_name
}
resource "aws_route53_record" "apex" {
zone_id = aws_route53_zone.main.zone_id
name = var.domain_name
type = "A"
ttl = 300
records = [aws_eip.nigewtf.public_ip]
}
resource "aws_route53_record" "api" {
zone_id = aws_route53_zone.main.zone_id
name = "api.${var.domain_name}"
type = "A"
ttl = 300
records = [aws_eip.nigewtf.public_ip]
}
Outputs
The outputs expose everything the deploy script needs: the public IP, a ready-to-use SSH command, the ECR URIs, and — critically — the Route 53 nameservers. That last one is the bridge between Terraform and the domain registrar. After terraform apply, you run terraform output route53_nameservers and copy those four nameservers into your registrar's NS configuration:
output "public_ip" {
description = "The Elastic IP address of the EC2 instance"
value = aws_eip.nigewtf.public_ip
}
output "ssh_command" {
description = "Ready-to-use SSH command to connect to the EC2 instance"
value = "ssh -i ${path.module}/nigewtf-key.pem ec2-user@${aws_eip.nigewtf.public_ip}"
}
output "ecr_frontend_uri" {
description = "Frontend ECR repository URL"
value = aws_ecr_repository.frontend.repository_url
}
output "ecr_backend_uri" {
description = "Backend ECR repository URL"
value = aws_ecr_repository.backend.repository_url
}
output "route53_nameservers" {
description = "Nameservers to configure at the domain registrar"
value = aws_route53_zone.main.name_servers
}
The Deploy Script
With the infrastructure in place, I needed a way to build, push, and deploy new versions. The deploy script reads everything it needs from Terraform outputs — no hardcoded URIs, no config files to keep in sync:
#!/bin/bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
TERRAFORM_DIR="$PROJECT_ROOT/terraform"
cd "$PROJECT_ROOT"
# Read terraform outputs
ECR_FRONTEND_URI=$(terraform -chdir="$TERRAFORM_DIR" output -raw ecr_frontend_uri)
ECR_BACKEND_URI=$(terraform -chdir="$TERRAFORM_DIR" output -raw ecr_backend_uri)
PUBLIC_IP=$(terraform -chdir="$TERRAFORM_DIR" output -raw public_ip)
ECR_REGISTRY="${ECR_FRONTEND_URI%%/*}"
do_build() {
docker build --platform linux/arm64 -t nigewtf-frontend ./frontend
docker build --platform linux/arm64 -t nigewtf-backend ./backend
}
do_push() {
aws ecr get-login-password --region ap-southeast-2 \
| docker login --username AWS --password-stdin "$ECR_REGISTRY"
docker tag nigewtf-frontend "$ECR_FRONTEND_URI:latest"
docker tag nigewtf-backend "$ECR_BACKEND_URI:latest"
docker push "$ECR_FRONTEND_URI:latest"
docker push "$ECR_BACKEND_URI:latest"
}
do_update() {
ssh -i "$TERRAFORM_DIR/nigewtf-key.pem" \
-o StrictHostKeyChecking=no \
"ec2-user@$PUBLIC_IP" \
"cd /opt/nigewtf \
&& aws ecr get-login-password --region ap-southeast-2 \
| docker login --username AWS --password-stdin '$ECR_REGISTRY' \
&& docker compose pull \
&& docker compose up -d"
}
STEP="${1:-all}"
case "$STEP" in
build) do_build ;;
push) do_push ;;
update) do_update ;;
all) do_build; do_push; do_update ;;
esac
Three steps: build creates ARM64 Docker images locally, push tags and uploads them to ECR, and update SSHes to the EC2 instance, pulls the new images, and restarts the containers. You can run each step individually or all at once with ./scripts/deploy.sh. The --platform linux/arm64 flag is key — on Apple Silicon, Docker builds ARM images natively with no emulation penalty.
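One line of that script worth unpacking is the ECR_REGISTRY derivation. The expansion ${ECR_FRONTEND_URI%%/*} deletes the longest suffix matching /*, leaving just the registry hostname. A quick illustration with a made-up account ID:

```shell
# Hypothetical ECR repository URI (the account ID is a placeholder)
uri="123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/nige-wtf-frontend"

# %%/* removes the longest trailing match of /*, i.e. everything from
# the first slash onward, leaving only the registry host for docker login
registry="${uri%%/*}"
echo "$registry"   # → 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com
```

This is why the script can log into the registry without a separate config value for the registry host.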
The DNS Disaster
This is where the afternoon went sideways. What should have been a five-minute DNS configuration turned into a two-hour debugging session involving three separate problems, each one invisible until the previous one was fixed.
Problem 1: "Why Isn't It Loading?"
terraform apply completed cleanly. The deploy script built both images, pushed them to ECR, SSHed to the instance, and pulled them down. All three containers came up healthy. I SSHed in to verify:
$ curl localhost:3000
<!DOCTYPE html><html>... # SvelteKit HTML
$ curl localhost:8080/api/health
{"status":"ok"}
Everything worked on the box. But opening nige.wtf in a browser returned nothing. The domain was not resolving to the EC2 instance.
A quick nslookup revealed the problem immediately:
$ nslookup nige.wtf
Server: 8.8.8.8
Non-authoritative answer:
*** Can't find nige.wtf: No answer
The NS records were still pointing at the old nameservers. The domain had previously been managed through Google Domains, and the registrar (name.com, where the domain ended up after Google Domains shut down) was still delegating to Google's nameservers. Route 53 was authoritative for nothing.
The fix was simple: copy the four Route 53 nameservers from terraform output route53_nameservers into name.com's NS configuration. Wait for propagation. Should be minutes.
Problem 2: DNSSEC — The Invisible Wall
DNS propagated. nslookup returned the correct Elastic IP. The browser connected to the server. I could see Caddy's default placeholder page. Progress.
But the site was not serving over HTTPS. Caddy was supposed to automatically provision Let's Encrypt certificates for both nige.wtf and api.nige.wtf, but something was wrong. The ACME HTTP-01 challenge kept failing silently. No certificate, no HTTPS, just Caddy serving its default page over plain HTTP.
I SSHed in and checked Caddy's logs. And there it was:
DNSSEC: DNSKEY Missing: validation failure
DNSSEC. I had completely forgotten about DNSSEC.
Here is what happened, and why it is so insidious. When the domain was managed through Google Domains, DNSSEC was enabled. DNSSEC adds a layer of cryptographic verification to DNS: the authoritative nameservers publish DNSKEY records containing their public keys, and the registrar publishes DS (Delegation Signer) records that contain hashes of those keys. Together, they form a chain of trust — any resolver can verify that the DNS responses it receives actually came from the authoritative nameservers and were not tampered with in transit.
The DS records live at the registrar level, not at the nameserver level. They are a cryptographic promise from the registrar to the world: "this domain uses DNSSEC, and here is how to verify it."
When I switched the NS records from Google's nameservers to Route 53, the DS records stayed. They were still sitting at name.com, still advertising DNSSEC, still pointing at Google's DNSKEY records. But Google's nameservers were no longer authoritative for the domain — Route 53 was. And Route 53 did not have the matching DNSKEY records, because I never configured DNSSEC on the Route 53 side.
So the DS records were making a promise that nobody could keep. They said: "this domain uses DNSSEC, verify responses using these keys." But the keys did not exist at the current nameservers. Any DNSSEC-aware resolver would do the right thing: see the DS records, attempt to validate, find no matching DNSKEY, and fail the lookup.
This is not a bug. This is DNSSEC working exactly as designed. A broken DNSSEC chain is treated as a security failure, not a missing-feature fallback. The entire point of DNSSEC is that a missing or invalid signature means the response cannot be trusted.
The insidious part is how selectively this breaks things. Most consumer DNS resolvers — the ones your browser uses by default — do not validate DNSSEC. They happily resolve the domain, return the IP, and everything appears to work. You can browse to the site, see Caddy's placeholder page, even SSH to the instance. From the browser's perspective, nothing is wrong.
But Let's Encrypt's ACME validation uses DNSSEC-aware resolvers. When it tries to verify the HTTP-01 challenge, the lookup fails DNSSEC validation and returns SERVFAIL. Let's Encrypt sees this as "domain does not resolve" and refuses to issue a certificate. No error in the browser. No helpful message from Caddy beyond the log line. Just silent failure, with Caddy retrying in the background every few minutes.
I spent an embarrassing amount of time checking security groups, verifying port 80 was open, confirming the ACME challenge path was reachable, before finally reading Caddy's logs carefully enough to see the DNSSEC error. The fix, once identified, was trivial: log into name.com and disable DNSSEC. This removes the DS records from the parent zone. Once the DS records are gone, DNSSEC validation is no longer attempted, and the domain resolves normally for everyone — including Let's Encrypt's validators.
The lesson: If you are migrating a domain between DNS providers, disable DNSSEC at the registrar before switching nameservers. The correct migration order is: disable DNSSEC, wait for DS record removal to propagate, switch NS records, then optionally re-enable DNSSEC on the new provider. Doing it in the wrong order leaves you in a state where DNSSEC-aware resolvers cannot validate your domain, while non-DNSSEC resolvers work fine — the worst kind of partial failure.
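You can check for this stale-DS state from any machine with dig. Against a validating resolver, the DS query succeeds (the records live in the signed parent zone), while the DNSKEY query comes back empty because the current nameservers have nothing to offer. The commands below are illustrative, not output captured at the time:

```
$ dig +short DS nige.wtf @8.8.8.8       # DS records, published by the parent zone via the registrar
$ dig +short DNSKEY nige.wtf @8.8.8.8   # DNSKEY records, published by the current nameservers
$ dig A nige.wtf @8.8.8.8 | grep status # a validating resolver reports SERVFAIL on a broken chain
```

DS records present with no matching DNSKEY is exactly the broken-promise state described above.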
Problem 3: Caddy's Staging Certs
With DNSSEC sorted, I waited for the DS record removal to propagate, then watched Caddy's logs. This time, the ACME challenge succeeded. Caddy obtained certificates. The site loaded over HTTPS.
But the browser showed a certificate warning. The certificate was untrusted.
During all those failed attempts while DNSSEC was still broken, I had pointed Caddy at Let's Encrypt's staging CA to avoid hitting rate limits while troubleshooting. The staging environment has far more generous rate limits — it is designed for exactly this kind of iterative debugging — but its certificates are signed by a test CA that no browser trusts. After fixing the DNSSEC issue, I forgot to switch Caddy back to the production endpoint. Caddy had cached the staging certificates and was happily serving them. The site loaded, but every browser rejected the certificate as untrusted.
The fix: nuke the cached certificates and let Caddy start completely fresh.
docker compose down
docker volume rm nigewtf_caddy_data nigewtf_caddy_config
docker compose up -d
Caddy restarted with no cached state, hit the production ACME endpoint, passed the HTTP-01 challenge (which now worked because DNSSEC was disabled), and provisioned valid Let's Encrypt certificates for both nige.wtf and api.nige.wtf. The whole process took about ten seconds.
Three problems. Two hours. Each one only visible after fixing the previous one. DNS is never the easy part.
Victory
Both domains now responded with valid TLS certificates. I ran the final verification from my local machine:
$ curl -s https://nige.wtf | head -1
<!DOCTYPE html>
$ curl -s https://api.nige.wtf/api/health
{"status":"ok"}
The blog was live. All that remained was publishing the first article — which meant generating a JWT, POSTing the article content to the admin API, and flipping the publish flag. The same flow I had tested locally, now running on a $6/month ARM instance in Sydney.
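The publish call itself is just an authenticated POST. The route and payload shape below are hypothetical (the real ones live in the Axum router), but the flow looks like:

```
$ curl -X POST https://api.nige.wtf/api/articles \
    -H "Authorization: Bearer $JWT" \
    -H "Content-Type: application/json" \
    -d '{"title": "Shipping to Production", "published": true, "body": "..."}'
```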
The final production stack:
| Component | Technology |
|---|---|
| Cloud | AWS (ap-southeast-2) |
| IaC | Terraform |
| Compute | EC2 t4g.micro (ARM) |
| Container Registry | ECR |
| DNS | Route 53 |
| Reverse Proxy / TLS | Caddy 2 |
| Containers | Docker Compose |
| Frontend | SvelteKit 2 (SSR) |
| Backend | Rust / Axum |
| Database | SQLite (WAL mode) |
| Auth | GitHub OAuth + JWT |
| Monthly Cost | ~$11 |
What I Would Do Differently
- Disable DNSSEC before switching nameservers. This should be step zero of any DNS migration. The correct order is: disable DNSSEC, wait for DS removal to propagate, switch NS records, re-enable on the new provider. I will never make this mistake again, and writing it down here hopefully means you will not either.
- Add health checks to the deploy script. Right now, scripts/deploy.sh reports success as soon as docker compose up -d returns, but the containers might not be healthy yet. A simple curl --retry 5 --retry-delay 2 https://api.nige.wtf/api/health at the end would catch startup failures before I close the terminal and walk away.
- S3 backend for Terraform state. Even on a solo project, local state means the state file only exists on one machine. If my laptop dies, I lose the ability to manage the infrastructure through Terraform. A locked S3 backend with DynamoDB for state locking is cheap insurance.
- Staging environment. Even a second Docker Compose profile on the same EC2 instance, running on a different port behind a staging subdomain, would catch production-only bugs earlier. The localhost-to-production gap bit me more than once during the first article's deployment.
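For the state-backend point above, the change itself is small. A sketch assuming a pre-existing bucket and lock table (both names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "nigewtf-terraform-state"  # placeholder bucket name
    key            = "blog/terraform.tfstate"
    region         = "ap-southeast-2"
    dynamodb_table = "nigewtf-terraform-locks"  # placeholder lock table
    encrypt        = true
  }
}
```

Followed by terraform init -migrate-state to move the existing local state into the bucket.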
Closing
The blog went from "works on localhost" to "live on the internet" in two sessions with Claude — one to build it, one to deploy it. The Terraform took about 20 minutes to write. The deploy script took 5. The DNS debugging took two hours. As every ops engineer knows, the infrastructure is never the hard part — DNS is.
Total cost: roughly $11/month for a blog that serves server-rendered pages from a Rust backend using 5MB of RAM. The Caddyfile is 6 lines. The user-data script provisions the entire server from a blank Amazon Linux AMI. And the DNSSEC lesson is one I will not forget.
The source code for everything — Terraform, deploy scripts, application code — is at github.com/nigeldunn/nigewtf. Next up: maybe adding image optimization, maybe adding comments, maybe just writing more posts. The hard part is done.