I have been working on a way to rapidly deploy Rails apps to DigitalOcean without worrying about data loss. I have found that a good CI/CD pipeline speeds up development enormously, even if it takes a bit of time to set up. My approach is as follows:
- keep all code in a self-hosted Gitea instance, one repo per project
- use cron to download all repos, compress, encrypt, and back them up offsite
- when I want to spin up a DigitalOcean droplet, use the bash script shown below
- use Gitea Actions to push code updates
- new features get merged into a dev branch
- the dev branch is automatically deployed to a local VM
- when features are ready for production, dev gets merged into main
- Gitea Actions push the code to the droplet
- on the droplet, use cron to compress, encrypt, and back up the SQLite production database file
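The Actions side of this pipeline can be sketched as a single workflow. Gitea Actions uses GitHub-compatible syntax under .gitea/workflows/; the runner label, secret names, and deploy path below are assumptions, not what I actually run:

```yaml
# .gitea/workflows/deploy.yml (sketch; secret names and paths are assumptions)
name: deploy
on:
  push:
    branches: [main]   # the dev branch gets a similar workflow targeting the local VM
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Push code to the droplet
        run: |
          echo "${{ secrets.DEPLOY_SSH_KEY }}" > deploy-key
          chmod 600 deploy-key
          rsync -az -e "ssh -i deploy-key -o StrictHostKeyChecking=accept-new" \
            ./ "${{ secrets.DEPLOY_USER }}@${{ secrets.DROPLET_IP }}:/srv/app/"
```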
This approach is simple and fast. It allows for rapid development and deployment, which is exactly what I'm optimizing for in my quest to release a bunch of Rails applications in a short time. If I ever want to deploy to a different cloud provider, the bash script below can be turned into a Go tool with a cloud provider interface and concrete implementations for each provider. Here is the script as well as a cloud-init file I use:
provision.sh
#!/bin/bash
# Parse arguments
cleanup=0
website_name=""
while (( "$#" )); do
  case "$1" in
    -c|--cleanup)
      cleanup=1
      shift
      ;;
    -n|--name)
      website_name="$2"
      if [ -z "$website_name" ]; then
        echo "Error: website name must be provided using -n flag"
        exit 1
      fi
      shift 2
      ;;
    -*|--*)
      echo "Unsupported flag $1" >&2
      exit 1
      ;;
    *)
      # Without this default case, an unrecognized positional
      # argument would never be shifted and the loop would spin forever
      echo "Unsupported argument $1" >&2
      exit 1
      ;;
  esac
done
if [ -z "$website_name" ]; then
  echo "Error: website name must be provided using -n flag"
  exit 1
fi
if [ "$cleanup" = 1 ]; then
  if [ ! -f "$website_name-key" ]; then
    echo "Error: $website_name-key does not exist"
    exit 1
  fi
  if ! rm -f "$website_name-key"; then
    echo "Error: failed to remove $website_name-key"
    exit 1
  fi
  if [ ! -f "$website_name-key.pub" ]; then
    echo "Error: $website_name-key.pub does not exist"
    exit 1
  fi
  if ! rm -f "$website_name-key.pub"; then
    echo "Error: failed to remove $website_name-key.pub"
    exit 1
  fi
  if [ ! -f "$website_name-config.yml" ]; then
    echo "Error: $website_name-config.yml does not exist"
    exit 1
  fi
  if ! rm -f "$website_name-config.yml"; then
    echo "Error: failed to remove $website_name-config.yml"
    exit 1
  fi
  doctl compute droplet delete "$website_name"
  exit
fi
# Generate ssh keys to use for the droplet
if [ -f "./$website_name-key" ]; then
  echo "Error: $website_name-key already exists"
  exit 1
fi
if ! ssh-keygen -t ed25519 -f "./$website_name-key" -q -N ""; then
  echo "Error: failed to create new ssh key"
  exit 1
fi
if [ ! -f "base-config.yml" ]; then
  echo "Error: base-config.yml does not exist"
  exit 1
fi
public_key=$(cat "$website_name-key.pub")
# Use '|' as the sed delimiter: base64 key material can contain '/'
if ! sed -e "s|website-name|$website_name|" -e "s|website-public-key|$public_key|" <base-config.yml >"$website_name-config.yml"; then
  echo "Error: failed to create $website_name-config.yml"
  exit 1
fi
# Add the ssh key to the DigitalOcean team account (in case we need to recover as root)
if ! doctl compute ssh-key import "$website_name-key" --format ID --public-key-file "./$website_name-key.pub"; then
  echo "Error: failed to import ssh key"
  exit 1
fi
# Create a new droplet in the sfo2 region with the name of the website.
if ! doctl compute droplet create "$website_name" --region sfo2 --image ubuntu-22-04-x64 --size s-1vcpu-1gb --user-data-file "$website_name-config.yml"; then
  echo "Error: failed to create droplet"
  exit 1
fi
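One subtle point in the script above is the sed templating step: an ed25519 public key is base64-encoded, which can include '/', so the choice of sed delimiter matters. A runnable sketch of that step with purely illustrative values (the fake key, the /tmp paths, and the name "myblog" are all made up; the real template is base-config.yml):

```shell
#!/bin/bash
# Demonstrate the placeholder substitution used by provision.sh.
# "website-name" / "website-public-key" are the tokens from base-config.yml;
# the key below is a fake example value containing '/' and '+'.
website_name="myblog"
public_key="ssh-ed25519 AAAA/fake+key/value myblog-key"

# Build a tiny stand-in template (the real one is base-config.yml)
cat > /tmp/demo-template.yml <<'EOF'
users:
  - name: website-name
    ssh_authorized_keys:
      - website-public-key
EOF

# Use '|' as the sed delimiter: a '/' delimiter would break on this key
sed -e "s|website-name|$website_name|" \
    -e "s|website-public-key|$public_key|" \
    /tmp/demo-template.yml > /tmp/demo-config.yml

cat /tmp/demo-config.yml
```

The same substitution with '/' as the delimiter would fail as soon as a generated key happens to contain a slash, which is why the '|' form is safer here.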
base-config.yml
#cloud-config
users:
  - default
  - name: website-name
    shell: /bin/bash
    lock_passwd: true
    groups:
      - sudo
      - docker
    sudo:
      - ALL=(ALL:ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - website-public-key
apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
packages:
  - docker-ce
  - docker-ce-cli
  - containerd.io
  - docker-buildx-plugin
  - docker-compose-plugin
  - ca-certificates
  - curl
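The last piece of the pipeline is the droplet-side cron job that compresses, encrypts, and ships the SQLite database offsite; the repo backup on the Gitea box follows the same pattern. A minimal sketch, assuming openssl for symmetric encryption, an exported BACKUP_PASSPHRASE, and a placeholder offsite host (none of these names come from my actual setup):

```shell
#!/bin/bash
# Sketch of the nightly database backup job run from cron on the droplet.
# Assumptions: BACKUP_PASSPHRASE is exported; offsite-host is a placeholder.
set -euo pipefail

backup_db() {
  local db_file="$1"   # path to the SQLite database file
  local out_dir="$2"   # where to write the encrypted archive
  local stamp archive
  stamp="$(date +%Y-%m-%d)"
  archive="$out_dir/db-$stamp.tar.gz"

  # For a live database, take a consistent snapshot first, e.g.:
  #   sqlite3 "$db_file" ".backup '$out_dir/snapshot.db'"
  # and archive the snapshot instead of the raw file.

  # Compress, then encrypt with a passphrase taken from the environment
  tar -czf "$archive" -C "$(dirname "$db_file")" "$(basename "$db_file")"
  openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass env:BACKUP_PASSPHRASE -in "$archive" -out "$archive.enc"
  rm -f "$archive"

  # Ship the encrypted archive offsite (placeholder host), e.g.:
  #   scp "$archive.enc" backup@offsite-host:/backups/
  echo "$archive.enc"
}

# If saved as a script, an example crontab entry for 03:00 nightly:
# 0 3 * * * BACKUP_PASSPHRASE=... /usr/local/bin/backup-db.sh
```

Restoring is the same steps in reverse: decrypt with `openssl enc -d`, then untar; testing that round trip periodically is the only way to know the backups are actually usable.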