Practice: Backup Docker container data: Volumes / Bind Mounts


In my article "All Docker containers: Moving hosts, theory and practice", I already touched on the topic of backups. If you followed that article, you know that I keep the data of my Docker containers outside the containers via bind mounts and back it up with rsync. The backup job is started via crontab. But first, I thought about what actually needs to be backed up when using Docker in a single-server setup.

Docker container backup?

When using volumes or bind mounts, the container itself does not need to be backed up. However, it must be possible to recreate the container with its original options, or with the original Docker Compose file, whenever it is replaced or recreated. For this reason, I recommend using Docker Compose: the docker-compose.yml file can easily combine several containers into one service, stores all the options for creating the containers in one file, and thus documents them. A simple "docker compose up" starts the stored containers with the specified options, even if the containers do not exist yet or no longer exist. Data that must not be lost in the process can be moved out to volumes or bind mounts, i.e. specific folders outside the container. So the most important parts of a container are the options used to create it - most simply in the form of a docker-compose.yml file - and the data in the volumes or bind mounts it uses. For backing up Docker volumes, the official way looks like this:

Backup / Restore: Docker Volumes

Docker's official documentation recommends starting a dedicated container for the backup or restore and transferring the volume data to the host via a bind mount:

Backup Volume:

docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

Restore Volume:

docker run --rm --volumes-from datavolume-name -v $(pwd):/backup image-name bash -c "cd /path-to-datavolume && tar xvf /backup/backup.tar --strip 1"

Source, see: https://docs.docker.com/storage/volumes/ (retrieval date: 09/06/2022)

I am aware that Docker volumes are the preferred option for storing Docker data; I still decided against using volumes in my setup. The reason: plain folders (bind mounts) are much easier to back up from a single server. See also: Docker Volumes.

Backing up bind mounts

When using bind mounts, specific folders of Docker containers can be mapped to a defined folder on the host. The folders contain all relevant data of the containers, which also limits the backup to these folders. Special attention should be paid to the databases used:

Copy MySQL database during operation

If a database is copied while it is running, there is no guarantee that the copy at the destination is consistent. Even though I have copied running MySQL databases to my development environment in the past, I never had a problem with any of them. Whether a problem occurs depends on the data and mainly on how much data changes during the copy process. The fact remains that copying a running database is not completely clean and could cause problems.

Solution for MySQL: mysqldump

If you want to create a dump of a running MySQL database, you should use mysqldump. With mysqldump, the database can be exported and imported on the target system, see: MySQL Commands in Linux: Connection, Database, Backup. Running mysqldump in preparation for a backup allows the created dump of the databases to be included in the backup. Since the dumps contain all the data, the actual database files of the container do not need to be copied in addition.

I use mysqldump in addition to the backup.
The generated database dumps can be easily added to a copy job.

Here is the command I run from the remote computer to create the backup. It connects to the server via SSH and uses mysqldump inside the database container to create a dump, which is written to a file:

ssh root@server.domain.tld "docker exec mysqlcontainer mysqldump --user=root --password=??? mysqlusername | gzip -c > /backups/db/mysqlcontainerdump.gz"
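The article only shows the export direction; the reverse - importing such a dump back into the database container - could be sketched like this. This is a hedged sketch: the helper name, container name, password, database name, and file path are placeholders, not taken from the article.

```shell
# Hypothetical helper: import a gzipped mysqldump back into a running
# database container. All names below are illustrative placeholders.
restore_db() {  # restore_db <container> <root-password> <database> <dump.gz>
    gunzip -c "$4" | docker exec -i "$1" mysql --user=root --password="$2" "$3"
}
# Usage: restore_db mysqlcontainer secret exampledb /backups/db/dump.gz
```

The `-i` flag on `docker exec` is important here, as it keeps stdin open so the decompressed dump can be piped into the mysql client inside the container.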

Stop container

Alternatively, the database container could be stopped, the files backed up, and the database started again. Since this variant involves a short downtime, I would not recommend it for a regular backup.

# first pass while the database is still running: transfers the bulk of the data
rsync -rltD --delete /var/web/libe.net/db/ /backup/dbcopy
# stop the container so the files are in a consistent state
docker stop libenet_libenet-mysql.1.csubj0bcuu8ssd9xyg6rbyve5
# second pass copies only the changes since the first pass: keeps the downtime short
rsync -rltD --delete /var/web/libe.net/db/ /backup/dbcopy
docker start libenet_libenet-mysql.1.csubj0bcuu8ssd9xyg6rbyve5

I last tried this variant when moving my server, see: Moving a Web Server with Docker Containers, Theory and Practice.

In practice: how I back up my container data

I use Docker on a V-Server on the internet and at home on my NAS. With rsync, I transfer the Docker container data from the rented V-Server to my NAS daily, and the data from the NAS to an external hard drive and additionally to my Linux receiver, see also: www.script-example.com/rsync-bash-funktion.

As preparation for a backup or for moving containers, I use one folder per application. Inside the folder, I store the options for creating the containers in the form of a docker-compose.yml file and create the bind mounts used in the docker-compose.yml file as subfolders. As an example, I organize all relevant data of a Docker service roughly as follows:

  • ./dockerService1/docker-compose.yml
  • ./dockerService1/db/* ... Database files
  • ./dockerService1/conf/* ... persistent config files

In the docker-compose.yml files, the folders can be specified relative to the folder where the docker-compose.yml file resides.

version: '3.1'

services:
  service1:
    image: test
    ...
    volumes:
      - ./www:/var/www

See also: docker data store: Docker volumes vs. host folders

When using databases, another subfolder could also be created for the database dumps:

  • ./dockerService1/dbdumps/* ... Database dumps

By backing up the database contents to the dbdumps folder using mysqldump, I would not need to back up the actual database files as well and could store them in a different location. Using volumes for the database files would also be conceivable at this point. In the event of an error, the dumps can be imported and the database thus restored.

To backup the dockerService1 described here, it is sufficient to backup all files and folders below dockerService1:

  • The docker-compose.yml file can rebuild the Docker container in case of failure or server change, and
  • all persistent user data is mapped into the container via bind mounts.
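The rebuild step mentioned above can be sketched as a small helper. This is a sketch under the folder layout described here; the function name and example path are mine, not from the article.

```shell
# Hypothetical helper: recreate all containers of a service from its
# docker-compose.yml after a restore or server move. Data in the
# bind-mount subfolders next to the file is picked up again automatically.
recreate_service() {  # recreate_service <service-folder>
    cd "$1" || return 1   # the folder containing docker-compose.yml
    docker compose up -d  # (re)creates the containers with the stored options
}
# Usage: recreate_service /var/web/dockerService1
```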

For the database containers, I added the mysql-dump line described here to the bash file from script-example.com.

#!/bin/bash
LOGFILE="/var/log/rsync.txt"
# ... variables and backup function from https://www.script-example.com/en-rsync-bash-function
# ...
ssh root@domain.tld "docker exec mysqlcontainername mysqldump --user=root --password=dbpassword dbname | gzip -c > /var/web/webservicename/dbdumps/dump.gz"
backup '-e ssh root@domaintld:/var/web' /backup/cloud
# ...
#Finish Script: Runtime-Information and Final-Summary: see https://www.script-example.com/en-rsync-bash-function

The mysqldump is stored on the web server (ssh root@ ...) and then transferred with rsync (backup '-e ssh ...').

To make the backup job run every day, I added the bash script to Crontab:

22 0 * * * sudo /scripts/rsync-backup.sh > /dev/null 2>&1

see also: Linux CronJobs - scheduled tasks [explained]

Conclusion

With rsync, optionally mysqldump, and the right folder layout, it is very easy to transfer all Docker container data of a single-server setup to another host and thus create a backup. To keep multiple versions of the data, filesystem snapshots could be used on the backup host, see also: ZFS vs BTRFS - Filesystem | Deduplication and Snapshots. Since rsync can store the data on the backup host in the same form as on the source host, it would also be conceivable to start the containers on the backup host in the event of an error and use it as a standby server. Of course, this requires suitable hardware and a suitable network connection.
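The snapshot idea could be sketched like this, assuming the backup target is a Btrfs subvolume. The helper name and paths are illustrative, not taken from the article.

```shell
# Hypothetical sketch: create one read-only snapshot of the backup per day,
# so older states of the data remain available after each rsync run.
snapshot_backup() {  # snapshot_backup <subvolume>, e.g. /backup
    mkdir -p "$1/snapshots"
    btrfs subvolume snapshot -r "$1" "$1/snapshots/$(date +%F)"
}
# Usage (after the rsync job has finished): snapshot_backup /backup
```

A call like this could simply be appended to the backup script, so each nightly cron run leaves behind a dated, read-only copy.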

 


Updated: 2022-12-26 by Bernhard


