Easy Self Host Icon

My Personal Self-Hosting Workflow in 2024

Video

Transcript

Hello, this is the Easy Self Host channel.

In this video, I'm going to show you my personal self-hosting workflow.

This video is about how I personally run my self-hosted applications, so it is not a tutorial.

First, I want to introduce the hardware and the software environments I use for self-hosting.

My server is a mini PC with a 12th-gen Intel Pentium chip. It is a lower-end chip, but it's already overkill for all the applications I self-host.

I run Proxmox Virtual Environment on the server because I need separate operating systems: one for my home server, and one for experimenting with ideas for my YouTube videos.

Next are the software tools I use for self-hosting. Like in all my tutorial videos, I'm using Docker and Docker Compose.

Besides these, I'm using a tool I developed myself. It's a command-line tool that wraps a set of Docker commands to work with the config structure I personally use.

The tool is written in Node.js and is published on npm for download. It's mostly used just by me, so don't treat it as stable software.

Now I'm going to show you how I use the tool with the configuration files I have.

Personally, I like applying the idea of infrastructure as code to self-hosted applications, so I have all the configuration written in files managed by a single Git repository.

Each app in this repository has its own directory.

In each directory, there is a Docker Compose file along with some other configuration files.
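To make that concrete, the layout looks roughly like this (the app directories shown are just examples):

    repo/
    ├── server.yaml
    ├── filebrowser/
    │   └── docker-compose.yml
    ├── vaultwarden/
    │   └── docker-compose.yml
    └── ...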

I prefer grouping an application with its dependencies in a separate Docker Compose file, because that lets me manage each app individually. But it can be cumbersome to run the docker compose command in every directory, which is why I developed a tool to automate the process.
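As a sketch of what one of those per-app files contains, here is a minimal Compose file for a hypothetical app bundled with its database dependency; the image names and the volume are placeholders, not my actual config:

    # docker-compose.yml for one app and its dependency
    services:
      app:
        image: example/app:latest    # placeholder image
        depends_on:
          - db
      db:
        image: postgres:16
        volumes:
          - app_db_data:/var/lib/postgresql/data

    volumes:
      app_db_data: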

You will see there are some other configuration files that are not part of the Docker Compose standard. They are managed by my esh-scripts tool.

For example, the server.yaml file defines some global settings. There are settings for secret management, and there are global Docker resources like volumes and networks.
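The exact schema is specific to my tool, but conceptually server.yaml declares something like this (all names here are illustrative):

    # server.yaml: global Docker resources shared by the apps
    volumes:
      - filebrowser_data
      - vaultwarden_data
    networks:
      - ingress
    # plus the secret-management settings described below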

To demonstrate how this works, I can dry-run the esh-scripts up command, which prints out the Docker commands the script is going to run.

At the beginning of the output, we can see the script trying to create the volumes and networks that the configuration specifies. Scrolling down, we can see the tool running the docker compose up command in each directory to bring up the applications.
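Concretely, the printed commands are plain Docker CLI invocations, roughly of this shape (the resource and directory names are illustrative):

    docker volume create filebrowser_data
    docker network create ingress
    docker compose -f ./filebrowser/docker-compose.yml up -d
    docker compose -f ./vaultwarden/docker-compose.yml up -d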

The other part of the configuration is for secret management. Personally, I store all my credentials, like API keys, in the repository in encrypted form.

If the encryption key is present, I can decrypt the credentials using this command. After that, we can see a secrets.yaml file appear in the repo.

In this file, secrets are specified as environment variables for each app, and they are injected into the app when the up command runs.
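The decrypted file is shaped roughly like this, with one block of environment variables per app (the filebrowser variable is a placeholder; ADMIN_TOKEN is a real Vaultwarden setting):

    # secrets.yaml (decrypted, never committed)
    vaultwarden:
      ADMIN_TOKEN: "<redacted>"
    filebrowser:
      SOME_API_KEY: "<redacted>"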

I also keep the unencrypted secrets file and the key file in .gitignore, so I won't accidentally commit them to the repo.
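So the relevant .gitignore entries amount to something like this (the key file name is a placeholder):

    # .gitignore
    secrets.yaml
    keys.json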

The key file holds a pair of encryption keys in JSON format. They can be generated using the secrets genkey command.
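The generated key file is a small JSON document holding the key pair; the exact field names are internal to the tool, but it's along these lines:

    {
      "publicKey": "<encoded public key>",
      "privateKey": "<encoded private key>"
    }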

In the server.yaml file, a list of possible paths for the key file is specified, and the tool tries each location in that order.

On my personal computer, I just keep the key file in the repo directory. On my home server, I keep it in another directory under $HOME.
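In server.yaml, that search order reads as an ordered list of candidate paths, conceptually like this (the field name and paths are illustrative):

    # part of server.yaml: where to look for the key file, in order
    keyFilePaths:
      - ./keys.json          # repo directory (my personal computer)
      - ~/esh/keys.json      # a directory under $HOME (my home server)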

I keep all these configurations in a private GitHub repository like this. As you can see, the plain-text secrets are not checked into the repo; only the encrypted version is.

Now let's go to my server and see how I bring up all these applications.

First, I need to clone the repository from GitHub. Then I'll go into the repo's directory, where we can see all the configurations.

From here, all I need to do is run the up command. This will decrypt my secrets, set up all the global resources, and bring up the applications using docker compose commands.
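So the whole bring-up on a fresh machine boils down to a few commands like these (the repository URL is a placeholder, and I'm assuming the npm package exposes an esh-scripts binary runnable via npx):

    git clone git@github.com:<user>/<config-repo>.git
    cd <config-repo>
    npx esh-scripts up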

Next, I'll test one of the applications I just deployed.

For example, the File Browser app.

And it looks like everything is working.

Another thing I want to show you is how I back up the data. I'm using a self-hosted backup solution called Kopia. It has a web UI where you can configure everything.

Here are the backup rules for two of my apps. This is the backup policy for Vaultwarden: it is configured to back up this directory, which is actually the Docker volume for the Vaultwarden container.

The backup is scheduled to run every day at 12:00. Before each backup, Kopia runs a script to stop the Vaultwarden container, so no partially written data ends up in the backup.

To make this work, I need to mount the Docker socket into the Kopia container; the script just sends start or stop requests to that socket.
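As a sketch, a before-snapshot script like this stops the container through the Docker Engine API on the mounted socket (the container name is a placeholder; a matching after-snapshot script would hit the start endpoint instead):

    #!/bin/sh
    # Runs inside the Kopia container before the snapshot is taken.
    # /var/run/docker.sock is the host's Docker socket, mounted in.
    curl --silent --unix-socket /var/run/docker.sock \
        -X POST http://localhost/containers/vaultwarden/stop

Kopia calls these hooks snapshot actions, and they have to be enabled explicitly.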

I'll post another video about Kopia later to show you how all of this is set up.

That's all for this video.

Please consider subscribing for content like this.

You can find the configurations on GitHub if you want to give it a try.

The link is in the description below.

Thank you for watching.

Resources