How I built and deployed this blog


Intro

I wanted a simple site with no backend for my personal site and blog, mostly because I wanted to avoid paying for a server or database when I didn’t have to (technically DNS is a distributed database, but treating it like one is a bit too much for me… so far). I had read about static site generators, and Astro looked like a good match for what I wanted — I can use a lot of the TypeScript skills I’ve honed while still delivering static files with no server backend.

To get a static site up and running, all you need is some place to store your output files where visitors can access them, and some method of getting those files from that storage to the user. Having been working exclusively in AWS lately (for some reason…), I settled on S3. It wasn’t quite as straightforward as I expected, but I’ve got it up and running now, and I’ve outlined the steps below for deploying a static site to S3, with a GitHub Actions CI/CD workflow just for fun.

Prerequisites

This guide assumes that you own a domain and have an active SSL certificate for that domain. While a lot of the rest of this guide uses automation, this is one step you will need to do manually. I personally generated a certificate for my domain in AWS Certificate Manager (ACM), then added a CNAME record for that certificate in the DNS settings of my domain registrar.
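If you would rather script even this step, ACM certificates can also be requested from CDK. Here is a minimal sketch, assuming a hypothetical domain www.mydomain.com and that the stack is deployed in us-east-1 (CloudFront only accepts certificates from that region); the deploy will pause until you copy the DNS validation record shown in the ACM console into your registrar’s settings:

import { Stack, StackProps } from 'aws-cdk-lib';
import { Certificate, CertificateValidation } from 'aws-cdk-lib/aws-certificatemanager';
import { Construct } from 'constructs';

// Sketch only: requests a DNS-validated certificate for a hypothetical domain.
export class CertificateStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new Certificate(this, 'SiteCertificate', {
      domainName: 'www.mydomain.com', // hypothetical; use your own domain
      validation: CertificateValidation.fromDns(), // validate by adding a CNAME at your registrar
    });
  }
}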

Infrastructure-as-Code

Whenever possible, I like to avoid using “click-ops”, where a user creates cloud infrastructure through a web console. There are advantages to working through the console, particularly if you are still learning the different options you have available, but ultimately it’s best to have a record of exactly what you have up there, and a history of changes. The way to do that is Infrastructure-as-Code (IaC): writing the infrastructure as a code file and checking it in to version control. The infrastructure can then be deployed by running an IaC tool command that creates the infrastructure defined in your code file.

There are several IaC options for AWS: Terraform, CloudFormation, and AWS CDK come to mind.

  • Terraform: Terraform is a third-party tool that uses its own configuration language (HCL) to define infrastructure, and can be used not only with AWS but with other cloud providers and on-prem infrastructure.
  • CloudFormation: CloudFormation is an AWS-specific tool that uses YAML or JSON templates to define infrastructure on AWS.
  • AWS CDK: AWS CDK allows you to write IaC in a number of different languages, which the CDK CLI automatically converts to CloudFormation templates and deploys.

I’ve gone with AWS CDK, as AWS has invested heavily in its adoption and I get to use TypeScript, which I’ve been using nearly exclusively at work for a few years now.

GitHub Actions CI/CD

Continuous Integration (CI) is a coding practice where code changes by contributors are automatically built when they are pushed to the repository. Continuous Delivery is another practice where these code changes are automatically deployed to production if the build succeeds. GitHub Actions is a CI/CD platform integrated into GitHub. I’m using GitHub Actions to automatically deploy changes and new blog posts to my static site, which will require me to set some permissions in my CDK Stack before I define my workflow in the repository.

AWS CDK Stack for Astro on S3 using CloudFront

The simplest way to deploy Astro on AWS is to create an S3 bucket and configure it for static website hosting. S3 is an object storage service that lets users store objects (often files) in the cloud. However, using the S3 bucket to host a website requires you to allow public access to the bucket, which is a security risk. I can keep access to the S3 bucket private by directing access through a CloudFront distribution. CloudFront is a Content Delivery Network (CDN) service that caches static web content in locations closer to the user. For a static site this can greatly speed things up, because the files do not need to be fetched from the S3 bucket every time a user navigates to the site.

The CDK Stack, then, needs to make the following resources:

  • An S3 bucket
  • A CloudFront distribution

To give GitHub Actions permission to update the bucket and refresh the CloudFront distribution:

  • An IAM role with permissions to read and write to the S3 bucket and to invalidate the CloudFront distribution

After some trial and error, I realized that by default you would receive a 403 Forbidden response any time you tried to go to a subdirectory, e.g. “mydomain.com/some-subdirectory”. This is because there is nothing to tell S3 to serve the index.html of the subdirectory to the user. Luckily, CloudFront allows you to execute functions on every request, so you can write a subfolder redirect function to automatically serve the index.html in these subdirectories.

  • A CloudFront function to attach to the distribution

After some more trial and error, I realized the permissions policies attached to the S3 bucket weren’t sufficient to allow GitHub Actions to publish or to let the CloudFront distribution access subdirectories, so I added permissions to the resource policy on the bucket.

  • Additional resource policies on the S3 bucket

To create this infrastructure with AWS CDK in TypeScript, follow these steps:

  1. Make sure Node.js is downloaded and installed.
  2. Use npm to install AWS CDK globally: npm install -g aws-cdk
  3. Make a directory for your new project: mkdir my-cdk-app && cd my-cdk-app
  4. Initialize a new CDK app in TypeScript: cdk init app --language typescript
  5. Configure the AWS CLI with credentials for your account: aws configure
  6. Bootstrap your AWS environment, which creates resources on your account for deployment, such as an S3 bucket to store deployment assets and IAM roles: cdk bootstrap

The code that runs your CDK app is in the /bin folder, usually an app.ts file. You can deploy multiple stacks if your infrastructure is complex, but this one is not, so we will write a single stack file describing a basic deployment.
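For reference, cdk init leaves you with roughly this layout (exact file names depend on your project name):

my-cdk-app/
├── bin/           # entry point that instantiates your stack(s)
├── lib/           # stack definitions (and, in this guide, the CloudFront function file)
├── test/
├── cdk.json       # tells the CDK CLI how to run the app
├── package.json
└── tsconfig.json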

The CDK Stack file, located in the /lib folder, should look something like this:

import { CfnOutput, RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Certificate } from 'aws-cdk-lib/aws-certificatemanager';
import { 
    CachePolicy, 
    Distribution, 
    Function, 
    FunctionCode, 
    FunctionEventType, 
    ViewerProtocolPolicy 
} from 'aws-cdk-lib/aws-cloudfront';
import { S3BucketOrigin } from 'aws-cdk-lib/aws-cloudfront-origins';
import { 
    ArnPrincipal, 
    Effect, 
    FederatedPrincipal, 
    OpenIdConnectProvider, 
    PolicyStatement, 
    Role, 
    ServicePrincipal 
} from 'aws-cdk-lib/aws-iam';
import { BlockPublicAccess, Bucket } from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';
import * as path from 'path';

const GITHUB_OIDC_ISSUER_URL = 'token.actions.githubusercontent.com';

/**
 * Properties required to configure the static site stack.
 *
 * NOTE: The certificate must be created and DNS validated in the 'us-east-1' region
 * regardless of the region where this stack is deployed.
 */
export interface StaticSiteStackProps extends StackProps {
  /** The full domain name (e.g., 'www.mycooldomain.com'). */
  readonly domainName: string;

  /** The bucket name */
  readonly bucketName: string;

  /** ARN of the ACM certificate for the domain, must be in us-east-1. */
  readonly certificateArn: string;
}

export class AstroIacStack extends Stack {
  constructor(scope: Construct, id: string, props: StaticSiteStackProps) {
    super(scope, id, props);

    // 1. Content Bucket (Private)
    // We use a private S3 bucket to store the website content. 
    // This bucket is blocked from public access, ensuring content is only served 
    // via CloudFront.
    const siteBucket = new Bucket(this, 'SiteBucket', {
      bucketName: props.bucketName,
      publicReadAccess: false,
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
      removalPolicy: RemovalPolicy.DESTROY, // NOTE: Use RETAIN for production
      autoDeleteObjects: true, // Only for easy cleanup, set to false for production
    });

    // Output the Bucket Name (for reference)
    new CfnOutput(this, 'BucketName', {
      value: siteBucket.bucketName,
    });

    // 2. Certificate Reference
    // Reference the existing certificate created and validated in us-east-1.
    const certificate = Certificate.fromCertificateArn(
      this,
      'SiteCertificate',
      props.certificateArn
    );

    // 3. Configure IAM to trust GitHub's OIDC identity provider
    const githubOidcProvider = new OpenIdConnectProvider(this, 'GitHubOIDCProvider', {
      url: `https://${GITHUB_OIDC_ISSUER_URL}`,
      clientIds: ['sts.amazonaws.com'],
      thumbprints: ['6938fd4d98bab03faadb97b34396831e3780aea1'],
    });

    const githubActionsRole = new Role(this, 'GitHubActionsRole', {
      assumedBy: new FederatedPrincipal(
        githubOidcProvider.openIdConnectProviderArn,
        {
          StringLike: {
            [`${GITHUB_OIDC_ISSUER_URL}:sub`]: `repo:<company/repository in GitHub>:*`,
          },
          StringEquals: {
            [`${GITHUB_OIDC_ISSUER_URL}:aud`]: 'sts.amazonaws.com',
          },
        },
        'sts:AssumeRoleWithWebIdentity',
      ),
      description: 'IAM Role for GitHub Actions to deploy resources',
    });

    githubActionsRole.addToPolicy(
      new PolicyStatement({
        actions: [
          's3:ListBucket',
          's3:GetObject',
          's3:GetObjectTagging',
          's3:PutObject',
          's3:DeleteObject',
          's3:PutObjectTagging',
        ],
        resources: [
          // Bucket-level actions (ListBucket) apply to the bucket ARN;
          // object-level actions (Get/Put/DeleteObject) need the /* object ARN.
          siteBucket.bucketArn,
          `${siteBucket.bucketArn}/*`,
        ],
        sid: 'S3ReadWriteAccess',
      }),
    );

    // 4. CloudFront Distribution (CDN)
    // This is the CloudFront function that rewrites subfolder requests to their index.html
    const subfolderRedirectFunction = new Function(this, 'SubfolderRedirectFunction', {
      code: FunctionCode.fromFile({
        filePath: path.join(
          __dirname,
          './subfolderRedirect.js'
        ),
      }),
    });

    // This is the public entry point for your website, providing caching and HTTPS.
    const distribution = new Distribution(this, 'SiteDistribution', {
      defaultRootObject: 'index.html',
      domainNames: [props.domainName], // List of custom domain names
      certificate: certificate,
      defaultBehavior: {
        origin: S3BucketOrigin.withOriginAccessControl(siteBucket),
        viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        cachePolicy: CachePolicy.CACHING_OPTIMIZED,
        functionAssociations: [
          {
            function: subfolderRedirectFunction,
            eventType: FunctionEventType.VIEWER_REQUEST,
          },
        ],
      },
    });

    // Give GitHub Actions permissions to invalidate the CDN cache
    // If we do not do this, the CDN will retain the old version of the site
    // even after new content is published.
    githubActionsRole.addToPolicy(
      new PolicyStatement({
        actions: [
          "cloudfront:CreateInvalidation",
        ],
        resources: [
          distribution.distributionArn,
        ],
        sid: 'CloudFrontDistributionAccess',
      }),
    );

    // Give GitHub Actions permissions to update the site S3 bucket
    siteBucket.addToResourcePolicy(
      new PolicyStatement({
        effect: Effect.ALLOW,
        principals: [new ArnPrincipal(githubActionsRole.roleArn)],
        actions: ['s3:PutObject'],
        resources: [
          siteBucket.bucketArn,
          `${siteBucket.bucketArn}/*`,
        ],
      }),
    );

    // Give CloudFront permission to read from the site S3 bucket subfolders
    siteBucket.addToResourcePolicy(
      new PolicyStatement({
        effect: Effect.ALLOW,
        principals: [new ServicePrincipal('cloudfront.amazonaws.com')],
        actions: ['s3:GetObject'],
        resources: [
          `${siteBucket.bucketArn}/*`,
        ],
        conditions: {
          StringEquals: { "AWS:SourceArn": distribution.distributionArn },
        }
      }),
    );

    // Output the CloudFront Distribution URL (for verification)
    new CfnOutput(this, 'DistributionDomainName', {
      value: distribution.distributionDomainName,
      description: 'The domain name of the CloudFront distribution',
    });
  }
}

The subfolderRedirect.js file, also located in the /lib folder, should look like this:

function handler(event) {
    var request = event.request;
    var uri = request.uri;
    
    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    } 
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }
  
    return request;
}
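To see what these rules actually do, here is a quick sanity-check sketch in plain TypeScript (the logic is copied by hand rather than imported, since the CloudFront function file doesn’t export anything):

// Mirror of the rewrite rules above, for local experimentation only.
function rewriteUri(uri: string): string {
  if (uri.endsWith('/')) return uri + 'index.html';    // trailing slash: append index.html
  if (!uri.includes('.')) return uri + '/index.html';  // no file extension: treat as a folder
  return uri;                                          // looks like a file: leave it alone
}

console.log(rewriteUri('/'));               // "/index.html"
console.log(rewriteUri('/blog'));           // "/blog/index.html"
console.log(rewriteUri('/blog/'));          // "/blog/index.html"
console.log(rewriteUri('/blog/post.html')); // unchanged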

The app.ts file, located in the /bin folder, should look something like this:

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { AstroIacStack } from '../lib/astro-iac-stack';

const app = new cdk.App();
new AstroIacStack(app, 'AstroIacStack', {
  domainName: '<My Domain>',
  bucketName: '<My Bucket>',
  certificateArn: '<My Certificate arn>',
  env: {
    region: 'us-east-1',
  },
});

app.synth();

With all that written, you can deploy your infrastructure using AWS CDK:

  1. Synthesize the CDK code into a CloudFormation template: cdk synth.
  2. Deploy the infrastructure: cdk deploy.

You should be prompted to approve changes to your AWS account. When the deployment is complete, the distribution domain name will be shown in the terminal, which you should copy for the next step.

Add CNAME Record for your Distribution Domain

In the DNS settings for your domain in your domain registrar, add a new CNAME record. The HostName should be www and the Target Name should be the distribution domain you copied from your CDK run. This step tells your domain registrar where to send the traffic directed at your domain, in this case the CloudFront distribution domain.
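If your DNS were hosted in Route 53 rather than at an external registrar, you could skip the manual record and create it from CDK instead. A hedged sketch: the imports go at the top of the stack file, the rest goes in the stack constructor and reuses the distribution variable, and HostedZone.fromLookup requires an explicit account and region on the stack’s env:

import { ARecord, HostedZone, RecordTarget } from 'aws-cdk-lib/aws-route53';
import { CloudFrontTarget } from 'aws-cdk-lib/aws-route53-targets';

// Sketch only: an alias record pointing www.mydomain.com at the CloudFront distribution.
const zone = HostedZone.fromLookup(this, 'Zone', { domainName: 'mydomain.com' });
new ARecord(this, 'SiteAliasRecord', {
  zone,
  recordName: 'www',
  target: RecordTarget.fromAlias(new CloudFrontTarget(distribution)),
});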

GitHub Actions Workflow for publishing Astro changes to S3

GitHub Actions allows you to define a workflow in YAML that checks out your Astro repository, builds the newest distributable version of the blog, copies that new version to your S3 bucket, and invalidates the CloudFront cache for your site.

Before you write and deploy your GitHub Actions workflow, you need to add secrets to your repository in GitHub. You can do this by selecting Settings on your repository in GitHub, then looking under Security, then Secrets and variables, and selecting Actions. Add a new repository secret for the following values:

  • BUCKET_ID - the name of your S3 bucket, taken from AWS
  • DISTRIBUTION_ID - the ID of your CloudFront distribution, taken from AWS
  • ROLE_ARN - the ARN of the GitHub Actions IAM role that you defined in the CDK stack (the sketch after this list shows one way to surface these values)
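To avoid digging through the AWS console for these values, you could add a couple more outputs to the stack. A minimal sketch to drop into the stack constructor above (the stack already outputs the bucket name and distribution domain name, but not the role ARN or distribution ID):

// Sketch: print the values the GitHub Actions workflow needs when `cdk deploy` finishes,
// so they can be copied straight into the repository secrets.
new CfnOutput(this, 'GitHubActionsRoleArn', {
  value: githubActionsRole.roleArn,
  description: 'Value for the ROLE_ARN repository secret',
});
new CfnOutput(this, 'DistributionId', {
  value: distribution.distributionId,
  description: 'Value for the DISTRIBUTION_ID repository secret',
});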

Next you need to write the YAML that defines the GitHub Actions workflow, located at /.github/workflows/main.yml:

name: Deploy Website

# Controls when the action will run. Invokes the workflow on push events but only for the main branch
on:
  push:
    branches: [ main ] # Run when you push a new version to the main branch

env:
  AWS_REGION: us-east-1

# Permission can be added at job level or workflow level    
permissions:
  id-token: write   # This is required for requesting the JWT
  contents: read    # This is required for actions/checkout
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Git clone the repository
        uses: actions/checkout@v3
      - name: configure aws credentials
        uses: aws-actions/configure-aws-credentials@v1.7.0
        with:
          role-to-assume: ${{ secrets.ROLE_ARN }}
          role-session-name: GitHub_to_AWS_via_FederatedOIDC
          aws-region: ${{ env.AWS_REGION }}
      - name: Install modules
        run: npm ci
      - name: Build application
        run: npm run build
      - name: Deploy to S3
        run: aws s3 sync --delete ./dist/ s3://${{ secrets.BUCKET_ID }}
      - name: Create CloudFront invalidation
        run: aws cloudfront create-invalidation --distribution-id ${{ secrets.DISTRIBUTION_ID }} --paths "/*"

Commit and push this file to your GitHub repository, and you should see the status of the workflow runs in the repository’s Actions tab.