Hosting a Ghost blog on Google Cloud


I switched from the paid (though cheap) AWS Lightsail to Google Compute Engine for hosting this blog.

NOTE: I bought one month of AWS support to get help with disabling DNSSEC.
I hope to run the Google Cloud VM instance for free (using the monthly credits).

Compared with the old AWS instance, this one also runs a newer Debian release (Debian 11) and Ghost 5 instead of Ghost 3.


Google has free micro instances:

“All customers get a general purpose machine (e2-micro instance) per month for free, not charged against your credits.”

This is what I selected:

Zone: europe-west4-a
Machine type: e2-micro x86/64
Image: debian-11-bullseye-v20220920
Disk: Standard persistent disk
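
The same VM can also be created from the command line. A sketch, assuming the gcloud CLI is installed and authenticated; the instance name blog is just an example:

gcloud compute instances create blog \
  --zone=europe-west4-a \
  --machine-type=e2-micro \
  --image=debian-11-bullseye-v20220920 \
  --image-project=debian-cloud \
  --boot-disk-type=pd-standard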

For reference, the old instance ran in a US zone.

Don't forget to remove RDP and SSH from the default firewall rules.

After creation, connect with the web SSH terminal.
If you want to connect with a local SSH client and/or WinSCP, import the key.

Update OS


sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoclean
sudo apt-get clean
sudo apt-get autoremove
history -c


Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

[this can take some time]

sudo usermod -aG docker ${USER}

Install Docker Compose

sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Now disconnect and reconnect to activate the new group membership.

Caddy and Ghost installation

I chose /home/${USER}/docker as Docker's base directory.

mkdir -p ~/docker/caddy/data
mkdir -p ~/docker/caddy/config
cd ~/docker

vi caddy/Caddyfile

    {
        # Global options block. Entirely optional, https is on by default
        # Optional email key for lets encrypt
        email your@email.address
        # Optional staging lets encrypt for testing. Comment out for production.
        # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
    }

    # Site block: replace your.domain.example with your blog's domain
    your.domain.example {
        reverse_proxy ghost:2368
    }
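
Before starting anything, the Caddyfile can be syntax-checked with the same image. A quick sanity check, assuming you are in ~/docker:

docker run --rm \
  -v "$PWD/caddy/Caddyfile:/etc/caddy/Caddyfile" \
  caddy:2-alpine \
  caddy validate --config /etc/caddy/Caddyfile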

vi docker-compose.yaml

    services:
        caddy:
            image: caddy:2-alpine
            restart: always
            container_name: caddy
            ports:
                - 80:80 # needed for the Let's Encrypt HTTP challenge and HTTPS redirect
                - 443:443
            volumes:
                - ./caddy/Caddyfile:/etc/caddy/Caddyfile
                - ./caddy/data:/data
                - ./caddy/config:/config
            labels:
                - com.centurylinklabs.watchtower.enable=true

        ghost:
            image: ghost:5-alpine
            restart: always
            container_name: ghost
            ports:
                - 2368:2368
            volumes:
                - ./ghost/data:/var/lib/ghost/content
            environment:
                - NODE_ENV=production
                - url=https://your.domain.example # your blog's public URL
                - database__client=sqlite3
                - database__connection__filename=content/data/ghost.db
                - database__useNullAsDefault=true
                - database__debug=false
            labels:
                - com.centurylinklabs.watchtower.enable=true

        watchtower:
            image: v2tec/watchtower
            restart: always
            container_name: watchtower
            volumes:
                - /var/run/docker.sock:/var/run/docker.sock
            labels:
                - com.centurylinklabs.watchtower.enable=true
            # check for image updates daily at 04:00
            command: --schedule "0 0 4 * * *" --cleanup --label-enable
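
docker-compose can validate the file before anything is started; run this from ~/docker:

docker-compose config --quiet && echo "compose file OK"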

As you can see, I also install and run Watchtower to keep the images up to date.

Now you can start everything with:

docker-compose up -d
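
To verify everything came up, check the container status and tail the logs (container names as set in the compose file):

docker-compose ps
docker-compose logs --tail=50 caddy
docker-compose logs --tail=50 ghost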


Backup

A small script (assumed to live in /home/ron, next to the backup directory) archives every container's volumes using the piscue/docker-backup image:

#!/bin/bash
# Where the per-container archives are collected (matches the excludes below).
backup_path=/home/ron/backup
tar_opts="--exclude='/var/run/*' --exclude='/home/ron/backup/*' --exclude='/home/ron/backup-server.tar.gz' --exclude='/home/ron/docker/ghost/data/logs/*'"

# Run from the script's own directory.
cd "${BASH_SOURCE%/*}" || exit
rm -rf "$backup_path"
mkdir -p "$backup_path"

# Back up the volumes of every container (running or stopped).
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -qa) | cut -f2 -d/); do
  mkdir -p "$backup_path/$container_name"
  echo -n "$container_name - "
  docker run --rm \
    --volumes-from "$container_name" \
    -v "$backup_path":/backup \
    -e TAR_OPTS="$tar_opts" \
    piscue/docker-backup \
    backup "$container_name/$container_name-volume.tar.xz"
  echo "OK"
done

# Bundle everything into a single archive, then clean up.
tar -czvf ./backup-server.tar.gz --exclude=".[^/]*" ./backup
rm -rf "$backup_path"
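
The TAR_OPTS excludes rely on GNU tar's pattern matching. A throwaway demo (the /tmp paths are only for illustration) shows how an --exclude pattern drops log files while keeping the rest, same idea as the ghost/data/logs/* exclude above:

# Build a scratch tree with one file we want and one we don't.
mkdir -p /tmp/tar-demo/logs
echo data  > /tmp/tar-demo/keep.txt
echo noisy > /tmp/tar-demo/logs/ghost.log

# Archive it, excluding anything under logs/.
tar -czf /tmp/tar-demo.tar.gz --exclude='logs/*' -C /tmp tar-demo

# List the archive: keep.txt survives, ghost.log does not.
tar -tzf /tmp/tar-demo.tar.gz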