My wonderful wife has been trying to convince me to start a blog for years. I've finally caved, mostly because I need a place for notes, as I have a tendency to forget things. This first one is just my notes on my setup: the “what I use and how”.
The What
I use Hugo for site generation, templating, and serving the content. Yeah, I know it's a static site generator and you're meant to host the output on something like nginx, but…
To handle the deploy and run, Docker to the rescue. With Docker, I set up a reverse proxy with a wonderful image from @jwilder alongside an image from @JrCs that gives me HTTPS via Let's Encrypt. I've had several toys running on a variety of servers in this manner.
Hosting
I use Linode for my hosting. They exist in that sweet spot of cost vs size.
The server is a small CoreOS instance. I personally fell in love with this distro several years back. The idea that everything has to be run in a container… *gush* perfect.
Docker Setup
I want the ability to run other servers from the same box, so we have to use a reverse proxy. In my not-at-all-humble (naah) opinion, the reverse proxy shouldn't have to restart every time a new container that produces a vhost config is deployed. Not sure if this is still an issue with things like Apache or Nginx, but in the past, any new vhost being added or modified required a restart of the reverse proxy application. This is where the two previously mentioned images come in. They detect a new container being run and look for specific environment variables to do their work: VIRTUAL_HOST and VIRTUAL_PORT for the nginx-proxy container, and LETSENCRYPT_HOST and LETSENCRYPT_EMAIL for the -companion image. For the email property, be sure to set a valid one. They do send you relevant notifications.
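For example, once the proxy pair is up, any container started with those variables set should get picked up automatically. A minimal sketch, assuming a hypothetical web image listening on port 8080 (the hostnames and image name here are placeholders):

docker run -d \
    --network bridge \
    -e VIRTUAL_HOST=toy.example.com \
    -e VIRTUAL_PORT=8080 \
    -e LETSENCRYPT_HOST=toy.example.com \
    -e LETSENCRYPT_EMAIL=admin@example.com \
    some/web-image    # hypothetical; anything serving HTTP on 8080 works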
Since they work together, I use docker-compose to stand these up.
NGINX
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge

volumes:
  conf:
  vhost:
  html:
  dhparam:
  certs:
NOTE: This does require you to mount your docker.sock into the container. I'm typically not a huge fan of this, but this particular setup hasn't been an issue. Plus, this is how they detect the start of other containers to trigger the appropriate processes.
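Standing the pair up is a single command, assuming the file above is saved as docker-compose.yml in the current directory:

# start the proxy and companion in the background
docker-compose up -d

# tail the proxy logs to watch it react to new containers
docker logs -f nginx-proxy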
Next, you'll see the prod and local Hugo instances. They're mostly the same, with the command being the only real difference. For the Docker image, I could easily bake a CMD or ENTRYPOINT into the image, but I wanted the ability to run the container with bash and control the startup commands myself. Yes, I use docker-compose for these as well, because I'll never remember all the flags. I also alias docker-compose to compose just to make my life simpler.
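The alias itself is just a one-liner in the shell profile (~/.bashrc in my case; adjust to taste):

alias compose='docker-compose'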
PROD SERVER
version: '3'
services:
  tkwcafe:
    restart: always
    image: blog.tkwcafe.com:latest
    container_name: blog
    network_mode: bridge
    volumes:
      - ./content:/home/hugo/blog.tkwcafe.com/content
      - ./config:/home/hugo/blog.tkwcafe.com/config
    expose:
      - "xxxx"
    environment:
      - VIRTUAL_PORT=xxxx
      - VIRTUAL_HOST=blog.tkwcafe.com
      - LETSENCRYPT_HOST=blog.tkwcafe.com
      - LETSENCRYPT_EMAIL=xxxxxxxxxxxxxxxxxxx
    command: [
      "hugo",
      "serve",
      "--environment=production",
      "--bind=0.0.0.0",
      "--appendPort=false",
      "--baseURL=//blog.tkwcafe.com/",
      "--theme=code-editor",
      "--disableLiveReload",
      "--configDir=config/",
      "--config=config/production.toml",
      "--verbose"
    ]
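With that in place, deploying looks something like this (using the compose alias):

# bring the blog up in the background
compose up -d

# or, since no CMD / ENTRYPOINT is baked in, drop into the
# container with bash and drive the startup by hand
compose run --rm tkwcafe bash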
LOCAL SERVER
version: '3'
services:
  tkwcafe:
    restart: always
    image: blog.tkwcafe.com:latest
    container_name: blog
    network_mode: bridge
    volumes:
      - ./content:/home/hugo/blog.tkwcafe.com/content
      - ./config:/home/hugo/blog.tkwcafe.com/config
    ports:
      - "8080:8080"
    command: [
      "hugo",
      "serve",
      "--buildDrafts",
      "--bind=0.0.0.0",
      "--port=8080",
      "--configDir=config/"
    ]
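Same dance locally, plus a quick sanity check that the dev server is answering:

compose up -d
curl -I http://localhost:8080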
I'm mounting the config/ and content/ directories from the host to allow for updates without a rebuild of the Docker image. For content/ specifically, I wanted a workflow where I run a local instance and deploy to the server with rsync. So, I did. Super simple blog-sync script:
#!/bin/bash
# mirror the local content/ dir to the server, deleting any remote
# files that no longer exist locally
rsync --delete \
    -r ${HOME}/code/docker/blog.tkwcafe.com/content \
    tkwcafe.com:~/code/docker/blog.tkwcafe.com/
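Since hugo serve watches the mounted content/ for changes by default, publishing a post is just running the script (after a one-time chmod):

chmod +x blog-sync    # one-time
./blog-sync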
Configuration
In order to have certain things in PROD (e.g. Google Analytics) that shouldn't run locally, the config/ dir looks a little something like:
$ tree config
config
├── development
│   └── config.toml
└── production.toml
1 directory, 2 files
For the config TOML files, the only difference between production.toml and development/config.toml is the googleAnalytics code. No need to track myself in development.
baseURL = "https://blog.tkwcafe.com/"
languageCode = "en-us"
theme = "code-editor"
title = "TKWCafe"
googleAnalytics = "UA-xxxxxxxxx-x"
[taxonomies]
tag = "tags"
category = "categories"
[params]
AuthorName = "Elliott Polk"
GitHubUser = "elliottpolk"
TwitterUser = "ElliottPolk"
enableemoji = true
watch = true
locale = "en-US"
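For completeness, a sketch of what development/config.toml looks like under that rule, i.e. the same file minus the analytics key:

# development/config.toml: identical to production.toml,
# just without the googleAnalytics line
baseURL = "https://blog.tkwcafe.com/"
languageCode = "en-us"
theme = "code-editor"
title = "TKWCafe"
# [taxonomies] and [params] sections identical to production.toml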
Wrap Up
And that's really it. This isn't anything super out there, and I suspect there are better options available. I've just always wanted to set up this particular configuration of things. I'll keep tweaking and making changes in the coming days.