How to run Ghost CMS cloud native on AWS

As you may know, Ghost is a lightweight, open-source Content Management System (CMS) that is ideal for blogs and magazine websites. Because it allows both a headless implementation and customizing your own themes, it gives a lot of flexibility towards the future.

There are lots of manuals out there that run it either on Docker IaaS or NodeJS IaaS, but as always we wanted a more cloud-native approach, with features like auto-scaling and self-healing, but also minimal operational tasks. So we started to implement Ghost on AWS ECS Fargate.

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy to focus on development rather than infrastructure, since it removes the need to provision and manage servers and offers an interesting pay-per-use model.

Docker image

Since we wanted a custom theme and additional configuration (adapter and routes), we add some layers to the Ghost Docker image. If we didn't do this, the theme and settings would get lost when a container scales up.

So here is what our Dockerfile looks like; we copy the theme and routes, and install the custom storage adapter for S3:

FROM ghost:3.1.0

WORKDIR /var/lib/ghost

COPY ./ghost_theme /var/lib/ghost/content/themes/terra10
COPY ./ghost_config/routes.yaml /var/lib/ghost/content/settings/routes.yaml

RUN npm install -g ghost-storage-adapter-s3@2.8.0 && \
    ln -s /usr/local/lib/node_modules/ghost-storage-adapter-s3 ./current/core/server/adapters/storage/s3

We build, tag and upload the image to ECR:

$(aws ecr get-login --no-include-email --region eu-west-1)
docker build -t ghostt10 .
docker tag ghostt10:latest
docker push
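The docker tag and docker push commands above need the fully-qualified ECR repository URI as an argument. As a sketch with a made-up account ID (substitute your own values), the URI follows this pattern:

```shell
# Hypothetical account ID, region and repository name -- substitute your own.
ACCOUNT_ID=123456789012
REGION=eu-west-1
REPO=ghostt10

# ECR image URIs follow <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"
echo "${IMAGE_URI}"
```

With that URI in hand, the last two commands become `docker tag ghostt10:latest "${IMAGE_URI}"` and `docker push "${IMAGE_URI}"`.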

The ECS Task Definition

AWS ECS works with Task Definitions, which hold your container configuration and settings. The task runs in a DMZ behind an AWS regional WAF with an AWS Application Load Balancer that handles the TLS termination, which means we can simply expose the default Ghost port 2368 to the AWS Target Group.

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: LogGroup
    Properties:
      RequiresCompatibilities:
        - FARGATE
      NetworkMode: awsvpc
      Family: !Sub 't10-${ENV}-ghost'
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn: !Ref TaskRole
      Cpu: '1024'
      Memory: '2048'
      ContainerDefinitions:
        - Name: ghost
          Image: !Sub T10.dkr.ecr.${AWS::Region}
          Essential: true
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Sub '/ecs/${ENV}/ghost'
              awslogs-region: eu-west-1
              awslogs-stream-prefix: ecs
          PortMappings:
            - ContainerPort: 2368
              Protocol: tcp
          Environment:
            - Name: url
            - Name: database__client
              Value: mysql
            - Name: database__connection__host
            - Name: database__connection__user
              Value: ghost
            - Name: database__connection__password
              Value: verySecret
            - Name: database__connection__database
              Value: ghostprd
            - Name: storage__active
              Value: s3
            - Name: storage__s3__bucket
              Value: nl-terra10-content-prd
            - Name: storage__s3__region
              Value: eu-west-1
            - Name: storage__s3__assetHost

Environment variables

The database is an AWS Aurora Serverless (MySQL 5.6) cluster, which seems to work perfectly with Ghost 3.1.0. The variables:

  • database__connection__host
  • database__connection__user
  • database__connection__password
  • database__connection__database

make sure the Ghost container can connect to the RDS instance.
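The double underscores are not arbitrary: Ghost's configuration loader treats `__` as a nesting separator, so each environment variable addresses a nested key in config.production.json. A quick illustration of the mapping:

```shell
# Each '__' in a Ghost environment variable name corresponds to one level
# of nesting in the JSON configuration (e.g. database.connection.host).
echo "database__connection__host" | sed 's/__/./g'   # prints: database.connection.host
```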

For the S3 connection the variables:

  • storage__s3__bucket
  • storage__s3__region
  • storage__s3__assetHost

hold the bucket name, the AWS region and the endpoint of the CloudFront content delivery network that exposes the files on S3. The assetHost is basically an alternative DNS name for your content, which is ideal for CloudFront since we can now use its edge endpoints to serve the images much faster. Notice that the assetHost endpoint requires the https:// prefix, since it is used one-on-one for the new URL.
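As a sketch of why the scheme matters (the CloudFront alias and object key below are made up): the adapter prepends assetHost verbatim to the stored object path, so whatever you configure ends up at the front of every asset URL.

```shell
ASSET_HOST="https://content.example.com"   # hypothetical CloudFront alias
OBJECT_KEY="/2020/01/logo.png"             # hypothetical S3 object key

# The configured assetHost is prepended as-is, so it must carry the https:// scheme.
echo "${ASSET_HOST}${OBJECT_KEY}"
```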

What is confusing in both the documentation and examples online is that the variables are often named GHOST_STORAGE_ADAPTER_S3_xxx instead of storage__s3__xxx. But if you check the adapter's code on GitHub, you can see it supports both:

Ghost Storage Adapter S3 GitHub project: colinmeinke/ghost-storage-adapter-s3

The IAM Roles

ECS Tasks require an ExecutionRole and optionally a TaskRole; in this case we need the latter. With the IAM Role below attached to the task, we don't need to pass access and secret keys to the container environment variables.

  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub 't10-${ENV}-ecstask'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: ghost-task-storage
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - s3:ListBucket
                Resource: arn:aws:s3:::nl-terra10-content-prd
              - Effect: Allow
                Action:
                  - s3:DeleteObject
                  - s3:GetObject
                  - s3:PutObjectVersionAcl
                  - s3:PutObject
                  - s3:PutObjectAcl
                Resource: arn:aws:s3:::nl-terra10-content-prd/*
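Note the two Resource entries: s3:ListBucket is a bucket-level action and applies to the bucket ARN itself, while the object-level actions (GetObject, PutObject and friends) need the /* suffix to match the objects inside the bucket. A small sketch of that distinction:

```shell
BUCKET_ARN="arn:aws:s3:::nl-terra10-content-prd"

# Object-level S3 actions match the objects, not the bucket, hence the /* suffix.
OBJECT_ARN="${BUCKET_ARN}/*"
echo "${OBJECT_ARN}"
```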


We now have our Ghost application running on AWS ECS Fargate and Aurora Serverless, meaning we did not have to touch any server. And while doing this, we get auto-scaling, self-healing and life-cycle management for our whole infrastructure out of the box.

Hope it helps!