
docker-volume-backup's Introduction

offen.software logo

docker-volume-backup

Backup Docker volumes locally or to any S3, WebDAV, Azure Blob Storage, Dropbox or SSH compatible storage.

The offen/docker-volume-backup Docker image can be used as a lightweight (below 15MB) companion container to an existing Docker setup. It handles recurring or one-off backups of Docker volumes to a local directory, any S3, WebDAV, Azure Blob Storage, Dropbox or SSH compatible storage (or any combination thereof) and rotates away old backups if configured. It also supports encrypting your backups using GPG and sending notifications for (failed) backup runs.

Documentation can be found at https://offen.github.io/docker-volume-backup


Quickstart

Recurring backups in a compose setup

Add a backup service to your compose setup and mount the volumes you would like to see backed up:

version: '3'

services:
  volume-consumer:
    build:
      context: ./my-app
    volumes:
      - data:/var/my-app
    labels:
      # This means the container will be stopped during backup to ensure
      # backup integrity. You can omit this label if stopping during backup
      # is not required.
      - docker-volume-backup.stop-during-backup=true

  backup:
    # In production, it is advised to lock your image tag to a proper
    # release version instead of using `latest`.
    # Check https://github.com/offen/docker-volume-backup/releases
    # for a list of available releases.
    image: offen/docker-volume-backup:latest
    restart: always
    env_file: ./backup.env # see below for configuration reference
    volumes:
      - data:/backup/my-app-backup:ro
      # Mounting the Docker socket allows the script to stop and restart
      # the container during backup. You can omit this if you don't want
      # to stop the container. In case you need to proxy the socket, you can
      # also provide a location by setting `DOCKER_HOST` in the container
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # If you mount a local directory or volume to `/archive` a local
      # copy of the backup will be stored there. You can override the
      # location inside of the container by setting `BACKUP_ARCHIVE`.
      # You can omit this if you do not want to keep local backups.
      - /path/to/local_backups:/archive
volumes:
  data:

One-off backups using Docker CLI

To run a one time backup, mount the volume you would like to see backed up into a container and run the backup command:

docker run --rm \
  -v data:/backup/data \
  --env AWS_ACCESS_KEY_ID="<xxx>" \
  --env AWS_SECRET_ACCESS_KEY="<xxx>" \
  --env AWS_S3_BUCKET_NAME="<xxx>" \
  --entrypoint backup \
  offen/docker-volume-backup:v2

Alternatively, pass a --env-file in order to use a full config as described below.
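For example, the same one-off invocation with a full configuration file could look like this (backup.env is a placeholder for your own file):

docker run --rm \
  -v data:/backup/data \
  --env-file ./backup.env \
  --entrypoint backup \
  offen/docker-volume-backup:v2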


Copyright © 2024 offen.software and contributors. Distributed under the MPL-2.0 License.


docker-volume-backup's Issues

Email is not sent

  • I'm submitting a ...

    • support request
  • Please tell us about your environment:

    • Image version: 2.7.2
    • Docker version: 20.10.11
    • docker-compose version: v2

Hello,
can you please help me with email notifications,
Everything is working great, but email is not sent.
I triggered a manual backup with docker exec <container_id> backup; the containers are all down and the tar was created, but the email is not sent.

Here is the compose part for backup

backup:
  restart: always
  profiles: ["backup"]
  image: offen/docker-volume-backup:v2.7.2
  hostname: dzgz
  environment:
    - BACKUP_FILENAME=backup-%Y-%m-%dT%H-%M-%S.tar.gz
    - BACKUP_LATEST_SYMLINK=backup.latest.tar.gz.gpg
    - BACKUP_RETENTION_DAYS=2
    - BACKUP_PRUNING_LEEWAY=5m
    - BACKUP_PRUNING_PREFIX=backup-
    - BACKUP_CRON_EXPRESSION=0 19 * * 1-5
    - NOTIFICATION_URLS=smtp://prefix%[email protected]:passwd@domainurl:25/?fromAddress=[email protected]&toAddresses=[email protected]
    - NOTIFICATION_LEVEL=info
  volumes:
    - solar-data:/backup/solar-data:ro
    - solar-solrhome:/backup/solar-solrhome:ro
    - alf-data:/backup/alf-data:ro
    - postgres-data:/backup/postgres-data:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./backups:/archive
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro

* The %5C in the URL refers to the \ character.

The SMTP server is working well; I can send email via telnet like this:

C: MAIL FROM:[email protected]
C: RCPT TO:[email protected]
C: DATA
C: From: "Pisarnica" [email protected]
C: To: "Test1" [email protected]
C: Date: Wed, 05 Jan 2022 14:40:00 +0100
C: Subject: Test message
C: Hello Alice.
C: This is a test message with 5 header fields and 4 lines in the message body.
C: Your friend,
C: Bob
C: .

In this telnet example there is no username and password, so I removed that part from the URL:
- NOTIFICATION_URLS=smtp://domainurl:25/?fromAddress=[email protected]&toAddresses=[email protected]
There were no errors, but the email was still not sent.

I could not do any troubleshooting because the container writes no output about emails to the console.

Do you have any clue what is going on here?
Thank you very much!
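One way to narrow this down could be to test the exact same NOTIFICATION_URLS value with the standalone shoutrrr CLI (the notification library this image uses). This is only a debugging sketch and assumes the shoutrrr CLI is available on the host:

# send a test message through the same smtp:// URL outside the container
shoutrrr send \
  --url 'smtp://prefix%[email protected]:passwd@domainurl:25/?fromAddress=[email protected]&toAddresses=[email protected]' \
  --message 'test message from shoutrrr'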

Support scp backed storage

Given the current architecture of this script, it would be relatively easy to add support for storing backups on remote systems that are accessed by SSH / scp.

Open questions:

  • how does pruning of old backups work in this scenario? scp itself only copies files. Is there a suitable Go library for syncing files over the wire?
  • how does authentication work? Do users mount authorized SSH keys into the container? Are there any security issues I am not aware of?
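For reference, a rough sketch of what this could look like outside of the tool, assuming key-based authentication and a remote /backups directory (all paths are examples); pruning here is just a remote find, since scp itself only copies:

# copy the archive to the remote host
scp -i /root/.ssh/id_ed25519 /tmp/backup-2021-08-01.tar.gz user@remote-host:/backups/
# prune remote backups older than 7 days
ssh -i /root/.ssh/id_ed25519 user@remote-host 'find /backups -name "backup-*.tar.gz" -mtime +7 -delete'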

[Feature Request] Support backup targets like Box / Dropbox

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?
    Backups can be made to a variety of S3-like targets, but not storage services like Dropbox, Box, etc.

  • What is the motivation / use case for changing the behavior?
    These are very common in business environments.

I'm not a go dev, but maybe someone else knows how to do this... Thanks!

Reduce storage footprint when using GPG encryption

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?

When using GPG_PASSPHRASE this tool will create two files in the /tmp directory:

  1. The backup-*.tar.gz file
  2. An encrypted backup-*.tar.gz.gpg
  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.

It's not a bug, but it leads to high storage usage. This is problematic on restricted servers, for example, where storage space is expensive.
Example: On one of my servers I've got 90 GiB storage. With the current implementation the server application itself must not exceed 30 GiB or the server would encounter an out of space error when executing the backup.

  • What is the expected behavior?

Not expected behavior, but an idea: when using GPG_PASSPHRASE, the output of the tar command could (probably) be piped directly to gpg, so that only the *.gpg file is ever written to storage (a sketch follows at the end of this issue).

  • What is the motivation / use case for changing the behavior?

  • Please tell us about your environment:

    • Image version: v2.15.2
    • Docker version:
    • docker-compose version:
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)
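A sketch of the piping idea mentioned above, outside of the tool itself (the paths, the /archive target and the passphrase handling are illustrative only, and --pinentry-mode may not be needed on older GnuPG versions):

# stream the tar output straight into gpg so no unencrypted archive touches the disk
tar -czf - -C / backup | gpg --batch --symmetric --pinentry-mode loopback --passphrase "$GPG_PASSPHRASE" -o /archive/backup.tar.gz.gpg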

panic: unable to acquire file lock

Docker compose v3.9

backup:
    image: offen/docker-volume-backup:v2.5.0
    logging:
      driver: "json-file"
      options:
        max-file: 6
        max-size: 1024m
    user: root
    restart: always
    mem_limit: 512m
    mem_reservation: 100m
    container_name: backup_new
    environment:         
        - BACKUP_CRON_EXPRESSION=$BACKUP_CRON_EXPRESSION
        - BACKUP_FILENAME=$BACKUP_FILENAME
        - BACKUP_PRUNING_PREFIX=$BACKUP_PRUNING_PREFIX
        - BACKUP_RETENTION_DAYS=$BACKUP_RETENTION_DAYS
        - BACKUP_FROM_SNAPSHOT=$BACKUP_FROM_SNAPSHOT
    volumes:
        - mongo_archive:/backup/mongo-archive:ro
        - timescale_data:/backup/timescale-data:ro         
        - ${HOME}:/archive 

BACKUP_CRON_EXPRESSION='00 */12 * * *'
BACKUP_FILENAME='backup-%Y-%m-%dT%H-%M-%S.tar.gz'
BACKUP_PRUNING_PREFIX='backup-'
BACKUP_RETENTION_DAYS=7
BACKUP_FROM_SNAPSHOT='true'

HOME='C:\BackupVolume'

time="2021-12-02T15:34:09Z" level=info msg="Created snapshot of `/backup` at `/tmp/backup`."

panic: unable to acquire file lock

goroutine 1 [running]:
main.lock({0xa1950b, 0x21})
        /app/cmd/backup/main.go:677 +0xce
main.main()
        /app/cmd/backup/main.go:35 +0x45

time="2021-12-02T15:38:29Z" level=info msg="Created backup of `/tmp/backup` at `/tmp/backup-2021-12-02T15-34-00.tar.gz`."
time="2021-12-02T15:38:56Z" level=info msg="Stored copy of backup `/tmp/backup-2021-12-02T15-34-00.tar.gz` in local archive `/archive`."
time="2021-12-02T15:38:56Z" level=info msg="Sleeping for 1m0s before pruning backups."
time="2021-12-02T15:39:56Z" level=info msg="None of 6 local backup(s) were pruned."
time="2021-12-02T15:39:56Z" level=info msg="Removed snapshot `/tmp/backup`."
time="2021-12-02T15:39:56Z" level=info msg="Removed tar file `/tmp/backup-2021-12-02T15-34-00.tar.gz`."
time="2021-12-02T15:39:56Z" level=info msg="Finished running backup tasks."

Support Nextcloud / WebDav

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?
    I see no ability to sync my backups to Nextcloud (Webdav).

  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.

  • What is the expected behavior?
    I would like to sync the backups to a specific folder inside my Nextcloud (I guess webdav would be the protocol?!).

  • What is the motivation / use case for changing the behavior?
    I could use a tool like rclone, but it would be way easier to set some env variables (e.g. source_folder, dest_folder, username, password/token), just like it's possible to sync to S3.

  • Please tell us about your environment:

    • Image version: v2.7.2
    • Docker version: Docker version 20.10.12, build e91ed57
    • docker-compose version: ---
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

Confusing README.md "Restoring..." section example

Hi Mr Offen,

this is not so much an issue but more of a suggestion so as to avoid confusion when reading through the example part of the restore section:

My suggestion: do not use the same name for the container and the mounted restore volume. Both are currently "backup_restore", and although I understood it, I found it quite confusing at first.

docker run -d --name temp_restore_container -v data:/backup_restore alpine
docker cp /tmp/backup/data-backup temp_restore_container:/backup_restore
docker stop temp_restore_container
docker rm temp_restore_container

Updated email notifications

I tried googling this one but came up short. I have a special character in my password (#) and it's not playing well with the docker compose file. With the previous email notification mechanism I could enclose each entry in quotes, but it does not seem like I can do that here. Any idea how I could use this method with a password that contains a special character?

smtp://username:password@host:port/?fromAddress=fromAddress&toAddresses=recipient1
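For example, if the password were p#ssword (a made-up value), the # could be percent-encoded as %23 so the URL stays parseable, e.g. in the env file:

NOTIFICATION_URLS=smtp://username:p%23ssword@host:port/?fromAddress=fromAddress&toAddresses=recipient1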

Thank you

Rest API Support

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?
    The current way of adding a configuration is by editing a couple of env files, which is a really good way of managing backups for a
    small number of containers. The drawback of this approach appears when you have multiple schedules and want to be able to edit them from the app container, which makes editing multiple env files really messy.

  • What is the new feature behavior?
    It would be really nice to support REST APIs. For example, having an API for changing the schedules of backup containers makes deployment easier and faster.

Memory leak

Hello,

it's me again. I just noticed what seems to be a memory leak in the container. It has been running for 4 days, and the memory used is at more than 1 GB:

[attached screenshot: leak_backup]

I don't know what to provide to help fix this; any ideas?

I use the image in a docker-compose environment (like in the demo), and my backup size is around 1 GB too.

thanks a lot

Saving a backup to a subdirectory of a AWS-S3-Bucket

First of all, I would like to say that this is a great image. However, I want to save my backup in a subdirectory, but I can't find any configuration option. Could someone help me? Maybe I am missing something.
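The AWS_S3_PATH variable that shows up in other issues in this tracker may be what you are looking for (availability depends on your image version); a one-off sketch with placeholder values:

docker run --rm \
  -v data:/backup/data \
  --env AWS_ACCESS_KEY_ID="<xxx>" \
  --env AWS_SECRET_ACCESS_KEY="<xxx>" \
  --env AWS_S3_BUCKET_NAME="<xxx>" \
  --env AWS_S3_PATH="my/sub/directory" \
  --entrypoint backup \
  offen/docker-volume-backup:v2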

Restore PG/TimescaleDB

I'm trying to restore my data into a TimescaleDB database (a PostgreSQL extension). The backup is performed perfectly, and so is the restore; I can access the backed-up data. But when I try to migrate in Django, this is the error that comes back to me:

psycopg2.errors.DataCorrupted: could not read block 4 in file "base/17542/2604": read only 0 of 8192 bytes

Inspecting the block with this query: SELECT * FROM pg_class WHERE oid = 2604, it turns out that it belongs to pg_attrdef.

For the restore I tried both your recommended commands (the ones on GitHub) and those recommended by Docker, i.e. docker run --rm --volumes-from timescale -v C:\BackupVolume:/backup ubuntu bash -c "cd /var/lib/postgresql/data/ && tar xvf /backup/last-backup.tar.gz --strip 1", but with both the result is the same.

The mapping between the volume and the data folder is correct. I used this setup to make backups and it never gave me problems, apart from the fact that it used to use 15 GB of RAM for reasons I don't know.

Small note: the DB I am backing up from is on Linux (Debian 10), while the machine I am restoring to is Windows 10.

Traceback:

(facco_env) PS C:\Users\l.logiudice\Analysis Detection\costal.dataanalyser\Analyser> python.exe .\manage.py migrate
AutoDiscover task...
Operations to perform:
Apply all migrations: admin, admin_interface, auth, contenttypes, django_celery_beat, django_celery_results, main, mongodb, rabbitmqservice, sessions, statistic, variables
Running migrations:
Traceback (most recent call last):
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\utils.py", line 82, in _execute
return self.cursor.execute(sql)
psycopg2.errors.DataCorrupted: could not read block 4 in file "base/17542/2604": read only 0 of 8192 bytes

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\migrations\recorder.py", line 68, in ensure_schema
editor.create_model(self.Migration)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\base\schema.py", line 345, in create_model
self.execute(sql, params or None)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\base\schema.py", line 145, in execute
cursor.execute(sql, params)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\utils.py", line 98, in execute
return super().execute(sql, params)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\backends\utils.py", line 82, in _execute
return self.cursor.execute(sql)
django.db.utils.InternalError: could not read block 4 in file "base/17542/2604": read only 0 of 8192 bytes

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File ".\manage.py", line 23, in <module>
main()
File ".\manage.py", line 19, in main
execute_from_command_line(sys.argv)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\core\management\__init__.py", line 419, in execute_from_command_line
utility.execute()
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\core\management\__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\core\management\base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\core\management\base.py", line 398, in execute
output = self.handle(*args, **options)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\core\management\base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\core\management\commands\migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\migrations\executor.py", line 91, in migrate
self.recorder.ensure_schema()
File "C:\Users\l.logiudice\miniconda3\envs\facco_env\lib\site-packages\django\db\migrations\recorder.py", line 70, in ensure_schema
raise MigrationSchemaMissing("Unable to create the django_migrations table (%s)" % exc)
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (could not read block 4 in file "base/17542/2604": read only 0 of 8192 bytes
)

Is it available for Docker Swarm?

Hello,

first of all, thanks a lot for your work, seems very nice!
I would like to know if your image could work in a Docker Swarm architecture.

In my case, I have only one node (one VPS) and only one replica of each container (php, mysql, ...), but instead of starting/stopping independent containers I need to start/stop a service.
I tried manually stopping one container (docker stop), but the service restarts it automatically right after, so I can't back up with your solution right now.
Do I need to change something in my docker compose file, or is it possible to modify your script to stop and start a whole service?

Thanks a lot
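As a manual workaround outside of the tool, a swarm service can be scaled to zero around the backup; the service and container names below are examples only:

# scale the service down so its volume is quiescent
docker service scale mystack_mysql=0
# run the one-off backup
docker exec <backup-container-id> backup
# bring the service back
docker service scale mystack_mysql=1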

Case sensitivity of "docker-volume-backup.stop-during-backup" value causing compose v3 labels to not stop containers ("True" vs "true")

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?
    Despite the "docker-volume-backup.stop-during-backup" label being set to "True", multiple docker containers are not stopped during the backup process.

  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.
    When a Portainer stack uses docker compose version 3 in the YAML file, the value of the "docker-volume-backup.stop-during-backup" label is set to "True" (upper case "T") even though it was written as follows (notice the lower case "t" used for "true"):

labels:
  docker-volume-backup.stop-during-backup: true

This causes the docker-volume-backup container to not stop the desired containers.

  • What is the expected behavior?
    The correct stop behaviour regardless of the docker compose version used.

  • What is the motivation / use case for changing the behavior?
    Ease of use and to fix a bug.

  • Please tell us about your environment:

    • Image version: latest (as of 5th Feb 2022)
    • Docker version: 20.10.12, build e91ed57
    • docker-compose version: 1.29.2, build 5becea4c
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

The problem can be solved by using the compose v2 label list format even if the stack uses v3:

    labels:
      - docker-volume-backup.stop-during-backup=true

Ability to create multiple backup.tar.gz based on the input array of directories

As far as I can tell, it works by taking everything in /backup/* and adding that to a single tar.gz file.
My use case is as follows:
I run this in docker-compose separately from the running containers.
All my containers' data dirs are located in, let's say, /docker-volumes/app1, /docker-volumes/app2, etc.
I have grouped them by schedule for backups, eg. nightly, weekly etc. and have an instance of docker-volume-backup running per schedule, mapping the relevant volumes to them.
I've gotten the label feature to work brilliantly stopping all the "nightly" ones at night, backing them up, then starting them back up.
I can't seem to figure out an easy way to have this do the backups "one by one".
What I would like to see is an option to have every dir in /backup/* be treated as its own backup, so that I could have
App1, App2, App3 in separate tar.gz files instead of a single tar.gz with App1, App2, App3 inside.
Is the only alternative to run multiple of these containers?

Thanks for a great little container!

Is there a way to exclude folders or files?

Is there a way for exclusions?

For example, I am backing up my Plex library but I'd like to exclude the cache folder because it's large and not needed for a restore.

Volume being backed up = dockerapps/plex/:/backup/plex:ro
Exclude = dockerapps/plex/config/Library/Application Support/Plex Media Server/Cache
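One possible workaround (a sketch only, with illustrative paths) is to shadow the cache directory inside the backup container with an empty read-only bind mount so it never ends up in the tar file:

mkdir -p /tmp/empty
docker run --rm \
  -v /path/to/dockerapps/plex:/backup/plex:ro \
  -v "/tmp/empty:/backup/plex/config/Library/Application Support/Plex Media Server/Cache:ro" \
  -v /path/to/local_backups:/archive \
  --entrypoint backup \
  offen/docker-volume-backup:v2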

Thank you

mc return: The location constraint is incompatible for the region specific endpoint this request was sent to.

Hi!
First, thank you very much for the developed solution!

I'm implementing this solution in a project with Swarm. I need to run daily backups for an S3 AWS account.

It happens that when I run the backup:
docker exec <CONTAINER-ID-OF-BACKUP> backup

This generates the zipped file but does not send it to S3. The return is:

[INFO] Uploading backup to remote storage

Will upload to bucket "bucketname".
`backupfile-2021-08-18T01-59-26.tar.gz` -> `backup-target/bucketname/backupfile-2021-08-18T01-59-26.tar.gz`
Killed
Upload finished.

[INFO] Cleaning up

removed 'backupfile-2021-08-18T01-59-26.tar.gz'

[INFO] Backup finished

I investigated more details with the command:
docker exec <CONTAINER-ID-OF-BACKUP> mc admin info backup-target --debug

return:

mc: <DEBUG> GET /minio/admin/v3/info HTTP/1.1
Host: s3.amazonaws.com
User-Agent: MinIO (linux; amd64) madmin-go/0.0.1 mc/RELEASE.2021-06-13T17-48-22Z
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20210818//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210818T020858Z
Accept-Encoding: gzip

mc: <DEBUG> HTTP/1.1 400 Bad Request
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 18 Aug 2021 02:08:57 GMT
Server: AmazonS3
X-Amz-Id-2: BRdh2GZIfipTHTNZWuYDJHXmX55r9uv3qa3X78IECCFGqWaYsticjdvN1j/UzWvWijqGeZl4ePk=
X-Amz-Request-Id: FW6W9E8DFSV2APBG

169
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>IllegalLocationConstraintException</Code><Message>The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.</Message><RequestId>FW6W9E8DFSV2APBG</RequestId><HostId>BRdh2GZIfipTHTNZWuYDJHXmX55r9uv3qa3X78IECCFGqWaYsticjdvN1j/UzWvWijqGeZl4ePk=</HostId></Error>
0

mc: <DEBUG> TLS Certificate found: 
mc: <DEBUG>  >> Country: US
mc: <DEBUG>  >> Organization: DigiCert Inc
mc: <DEBUG>  >> Expires: 2022-07-24 23:59:59 +0000 UTC
mc: <DEBUG> TLS Certificate found: 
mc: <DEBUG>  >> Country: IE
mc: <DEBUG>  >> Organization: Baltimore
mc: <DEBUG>  >> Expires: 2025-05-10 12:00:00 +0000 UTC
mc: <DEBUG> Response Time:  304.074865ms

mc: <ERROR> Unable to get service status. Failed to parse server response: invalid character '<' looking for beginning of value.
 (0) admin-info.go:83 cmd.clusterStruct.String(..)
 Release-Tag:RELEASE.2021-06-13T17-48-22Z | Commit:DEVELOPMENT. | Host:5de75a4eeeec | OS:linux | Arch:amd64 | Lang:go1.16.5 | Mem:2.8 MB/74 MB | Heap:2.8 MB/67 MB.

Well, I did a lot of testing, assigning the variables in many ways (AWS_ENDPOINT, AWS_ENDPOINT_PROTO, AWS_DEFAULT_REGION), but nothing resolved it.

This is the backup definition in docker-compose:

  backup:
    image: offen/docker-volume-backup:latest
    env_file:
      - ./.env.backup
    volumes:
      # Mounting the Docker socket allows the script to stop and restart
      # the container during backup. You can omit this if you don't want
      # to stop the container
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - postgres_data:/backup/postgres-data-bkp:ro
      - media_volume:/backup/media-volume-bkp:ro
    deploy:
      restart_policy:
        condition: on-failure
      resources:
        limits:
          memory: 25M

In the .env.backup file I left the basic settings:

BACKUP_CRON_EXPRESSION=0 3 * * *
BACKUP_FILENAME="backupfile-%Y-%m-%dT%H-%M-%S.tar.gz"
AWS_ACCESS_KEY_ID=xxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxxx
AWS_S3_BUCKET_NAME=bucketname

A note: Although the error message refers to the me-south-1 region, the AWS region is US West (Northern California) us-west-1

Thank you in advance for your help

v3 breaking changes

This issue exists to track possible breaking changes that would make sense in a v3 (so I don't forget about them). This does not mean a v3 is going to happen soon. It could even be it never happens.


Remove EMAIL_* config

As all notifications are now sent using shoutrrr, the EMAIL_* interface is obsolete and should be removed.

  • Remove configuration shim
  • Remove documentation

Remove BACKUP_FROM_SNAPSHOT

The functionality of BACKUP_FROM_SNAPSHOT can be achieved by using an exec-pre and exec-post command.

  • Remove feature
  • Remove documentation

Default to expanding variables in BACKUP_FILENAME

The option to do this has been introduced later on and false was chosen as a default in order to not introduce a breaking change. true would be the better and more helpful default.

  • Set default to true
  • Update documentation

Also consider doing this for all other configuration values.

Rename BACKUP_SOURCES

BACKUP_SOURCES points to a single location. It should be called BACKUP_SOURCE (just like BACKUP_ARCHIVE).

  • Rename to BACKUP_SOURCE
  • Update documentation

Remove exec-[pre|post] labels

The labels have more granularity now.

  • Remove all handling of exec-pre and exec-post
  • Update documentation

Remove BACKUP_STOP_CONTAINER_LABEL

The BACKUP_STOP_CONTAINER_LABEL setting has been renamed: https://offen.github.io/docker-volume-backup/how-tos/replace-deprecated-backup-stop-container-label.html

  • Remove setting
  • Remove documentation

Using labels and yaml file

Two questions

First question: is there a way to use labels to determine which container to back up? Right now, all of my container data is in /home/username/docker/AppName. Rather than defining /home/username/docker:ro in my backup app, to narrow down which apps I want to back up, I am only defining volumes of the apps I want to back up. E.g.

  • /home/username/docker/App1
  • /home/username/docker/App2
  • /home/username/docker/App3
    Rather, is it possible to define the root docker folder "/home/username/docker:ro" but then use a label to tell the backup tool which apps to back up or not to back up?

Second question: is there a way to include the (docker compose) YAML and env files in the same tar.gz?

Thank you

archive/tar: sockets not supported

  • I'm submitting a ...

    • [X ] bug report
    • feature request
    • support request
  • What is the current behavior?
    When trying to backup a gitlab docker volume I get an error and no tar file:

level=error msg="Fatal error running backup: takeBackup: error compressing backup folder: archive/tar: sockets not supported"
  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.
version: '3.6'
services:
  nginx:
    image: nginxproxy/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    labels:
      - docker-volume-backup.stop-during-backup=true
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./data/nginx/certs:/etc/nginx/certs:ro
      - ./data/nginx/vhosts:/etc/nginx/vhost.d
      - ./data/nginx/html:/usr/share/nginx/html
    networks:
      - proxy

  proxy-companion:
    restart: always
    image: sebastienheyd/self-signed-proxy-companion
    labels:
      - docker-volume-backup.stop-during-backup=true
    volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
        - ./data/nginx/certs:/etc/nginx/certs:rw
    depends_on:
      - nginx

  gitlab:
    image: 'gitlab/gitlab-ce:14.7.0-ce.0'
    restart: always
    hostname: 'git.localhost'
    shm_size: '256m'
    volumes:
      - ./data/gitlab/config:/etc/gitlab
      - ./data/gitlab/logs:/var/log/gitlab
      - ./data/gitlab/data:/var/opt/gitlab
    environment:
      VIRTUAL_HOST: git.localhost
      VIRTUAL_PORT: 80
      SELF_SIGNED_HOST: git.localhost
      GITLAB_OMNIBUS_CONFIG: |
         external_url 'https://git.localhost'
         nginx['listen_port'] = 80
         nginx['listen_https'] = false
    labels:
      - docker-volume-backup.stop-during-backup=true
    ports:
      - "10022:22"
    networks:
      - proxy

  backup:
    image: offen/docker-volume-backup:v2.10.0
    environment:
      # A backup is taken each day at 3AM
      BACKUP_CRON_EXPRESSION: "0 3 * * *"
      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
      BACKUP_RETENTION_DAYS: 7
      GPG_PASSPHRASE: secretkey
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./backups:/archive
      - ./:/backup/docker-compose:ro
      - ./data/gitlab/data:/backup/gitlab/data:ro
      - ./data/gitlab/logs:/backup/gitlab/logs:ro
      - ./data/gitlab/config:/backup/gitlab/config:ro

Do:
docker-compose up -d
docker exec backup backup
level=error msg="Fatal error running backup: takeBackup: error compressing backup folder: archive/tar: sockets not supported"

  • What is the expected behavior?
    Ignore sockets and create the tar file, maybe with an info message for the user in the log.

  • What is the motivation / use case for changing the behavior?
    Increases compatibility.

  • Please tell us about your environment:

    • Image version: 2.10.0
    • Docker version: 20.10.12
    • docker-compose version: 2.2.3

Notifications on failed backups

On failed backups, the image could send notifications so consumers do know about their backups being potentially unavailable.

Easy options would be:

  • Email (using SMTP credentials provided through configuration)
  • Pushover
  • Calling arbitrary HTTP endpoints sending a defined payload

What exactly happen when BACKUP_FROM_SNAPSHOT=true?

Hello!
Can anyone explain what exactly happens when I use the variable BACKUP_FROM_SNAPSHOT=true?

I tested backing up the Prometheus DB without stopping it, but with BACKUP_FROM_SNAPSHOT, and I receive an error:

Failure running docker-volume-backup at 2022-01-18T00:00:00+03:00
Running docker-volume-backup failed with error: takeBackup: error creating snapshot: open /backup/prometheus_data/01FSM8TPFEXQ0QC28H11PMQZ0R: no such file or directory
Log output of the failed run was:
time="2022-01-18T00:04:00+03:00" level=error msg="Fatal error running backup: takeBackup: error creating snapshot: open /backup/prometheus_data/01FSM8TPFEXQ0QC28H11PMQZ0R: no such file or directory"
time="2022-01-18T00:04:03+03:00" level=info msg="Removed snapshot /tmp/backup."

If I use the label which stops the container before the backup, everything is OK.

Here is the relevant part of the compose file:

  backup:
    container_name: prometheus_backup
    image: offen/docker-volume-backup:v2.7.2
    restart: always
    environment:
      BACKUP_FILENAME: backup-{{ prometheus_name }}-%Y-%m-%dT%H-%M-%S.tar.gz
      BACKUP_FROM_SNAPSHOT: "true"
      BACKUP_RETENTION_DAYS: "7"
      NOTIFICATION_LEVEL: "info"
      NOTIFICATION_URLS: "{{ slack_hook}}"
      # BACKUP_STOP_CONTAINER_LABEL: {{ prometheus_name }}
    volumes:
      - "{{ prometheus_volume_config }}:/backup/{{ prometheus_volume_config }}:ro"
      - "{{ prometheus_volume_data }}:/backup/{{ prometheus_volume_data }}:ro"
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - "/hdd/backup/{{ prometheus_name }}:/archive"
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - docker_office

networks: 
  docker_office:
    external: true
  
volumes:
  {{ prometheus_volume_config }}:
    external: true  
  {{ prometheus_volume_data }}:
    external: true

Email and S3 storage broken, URL errors

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?

Email: Might be related to this issue - golang/go#19297

panic: newScript: error creating sender: error initializing router services: parse "\"smtp://user:password@mail_host:465/[email protected]&[email protected]\"": first path segment in URL cannot contain colon

S3: The bucket name only contains lower case letters, numbers, and hyphens. Might it be related to the key secret containing a \ character? I tried re-generating the key and they all have it.

level=error msg="Fatal error running backup: copyBackup: error uploading backup to remote storage: Bucket name contains invalid characters"
  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.

Email:

environment:
  - NOTIFICATION_URLS="smtp://user:password@mail_host:465/[email protected]&[email protected]"

S3:

environment:
  - AWS_S3_BUCKET_NAME="bucket_name"
  - AWS_ACCESS_KEY_ID="key_id"
  - AWS_SECRET_ACCESS_KEY="key_secret"
  • What is the expected behavior?
  • Email notifications work
  • S3 storage works
  • What is the motivation / use case for changing the behavior?

N/A

  • Please tell us about your environment:

    • Image version: latest (sha256:a62560a37daad7ff7d5f5c2fd17c381e84e315cacbf06061bc1e87aaf69c9f12)
    • Docker version: 20.10.15
    • docker-compose version: 1.29.2
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

400 Bad Request when uploading to WebDAV Server

  • I'm submitting a ...

    • bug report
    • feature request
    • [ x] support request
  • What is the current behavior?
    When the backup tasks reaches the point to upload the archive to the WebDAV server I get a 400 return code.
    What I tried:
    HTTPS (self signed cert) backend
    HTTP backend
    slash variations at the end and the beginning of URL and PATH (although the gowebdav lib seems to correct them anyway)

The WebDAV server is a Synology (DSM 7). I think there is a protocol issue between the gowebdav client and the Synology WebDAV server. But I am not sure. So I am asking here at first :)

The Synology log tells me nothing important here.

This is a sample:
~ # echo $WEBDAV_PATH
/small_written_test/
~ # echo $WEBDAV_URL
http://192.168.2.101:5005/
~ # backup
time="2022-04-20T06:42:55Z" level=info msg="Stopping 2 container(s) labeled `docker-volume-backup.stop-during-backup=true` out of 4 running container(s)."
time="2022-04-20T06:44:02Z" level=info msg="Created backup of `/backup` at `/tmp/nextcloud-2022-04-20T06-42-55.tar.gz`."
time="2022-04-20T06:44:03Z" level=info msg="Restarted 2 container(s) and the matching service(s)."
time="2022-04-20T06:44:04Z" level=error msg="Fatal error running backup: copyBackup: error creating directory '/small_written_test/' on WebDAV server: MkdirAll /small_written_test/: 400"
time="2022-04-20T06:44:04Z" level=info msg="Removed tar file `/tmp/nextcloud-2022-04-20T06-42-55.tar.gz`."

Do you have any idea what I might test?

Support for filebase.com S3

Can you add support for filebase.com S3 bucket access? When I tried:

AWS_ENDPOINT: "s3://us-east-1.s3.filebase.com"

I got the startup error:
backup_1 | mc: <ERROR> Invalid URL. URL `https://s3://us-east-1.s3.filebase.com` for MinIO Client should be of the form scheme://host[:port]/ without resource component.
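Judging from that error message, AWS_ENDPOINT wants a bare host rather than a URL, so something along these lines might work (the exact Filebase endpoint host is an assumption and should be checked against their docs):

AWS_ENDPOINT=s3.filebase.com
AWS_S3_BUCKET_NAME=<bucket>
AWS_ACCESS_KEY_ID=<xxx>
AWS_SECRET_ACCESS_KEY=<xxx>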

Backup of overseerr:latest data volume causes backup to fail

Log entry for failed backup:
"level=fatal msg="Fatal error running backup: takeBackup: error compressing backup folder: archive/tar: write too long"

The overseerr container has the same config necessary to stop (before the backup) and start (afterwards) as every other container I have in my automated backup:

labels:
  - docker-volume-backup.stop-during-backup=true

I unfortunately do not know if there are any other logs that I can have a look at to get more info.

The backup of the overseerr container's data volume had been working up until recently and suddenly didn't anymore after a recent update of the offen/docker-volume-backup image.

Thanks in advance for any help. :)

EDIT: Just a shot in the dark... might have to do with this commit: 70daa03

Restore instructions are unnecessarily complicated

May I suggest an easier instruction to restore on the README?
It just seems unnecessarily complicated. Also, it doesn't say to remove the volume first, which can be a problem if the volume already exists.

This is what I propose:

docker volume rm volume_name # this removes volume if it exists

docker run --rm -it -v volume_name:/backup/my-app-backup -v /path/to/local_backups:/archive:ro alpine tar -xvzf /archive/full_filename.tar.gz # this creates volume first

Originally posted by @kzshantonu in #71 (comment)

Best way to restore volume: swap-in volume for existing image/container?

From your how-to (https://dev.to/hendr_ik/automate-backing-up-your-docker-volumes-3gdk?signin=true):

The volume is now ready to use in other containers. Alternatively, you can use a one-off volume created beforehand.

so I managed to cp the restore file to the offen_data volume:

docker run -d --name backup_restore -v offen_data:/backup_restore alpine
docker cp ~/Downloads/backup/faithful-word-database-backup backup_restore:/backup_restore

which copies the backup data into the offen_data volume.

I have an existing volume attached to a running container that I want to replace with offen_data:

faithful-word_database-storage

How do I swap-in the restored volume to the existing container?

I tried:
docker-compose down
docker volume rm faithful-word_database-storage
docker volume create --name faithful-word_database-storage
docker run --rm -it -v offen_data:/from -v faithful-word_database-storage:/to alpine ash -c "cd /from ; cp -av . /to"

but when I do docker-compose up the running container is not associated with the new volume containing restored data.

How do I associate the new volume with the image/container? Or perhaps this is not the "right" way to do this.

Allow specification of AWS region

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?

Currently it appears that the destination S3 bucket is assumed to be in the default us-east-1 region. I've not found a way to specify otherwise (apologies if I've missed something).

  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.

Setting AWS_S3_BUCKET_NAME to a bucket in a region other than us-east-1 results in an error:

Fatal error running backup: copyBackup: error uploading backup to remote storage: Put \"https://<AWS_S3_BUCKET_NAME>.s3.dualstack.us-east-1.amazonaws.com/<BACKUP_FILENAME>\": 301 response missing Location header"                                                                                                          
  • What is the expected behavior?

An AWS region can be specified (probably via AWS_S3_BUCKET_REGION or similarly-named env var) and the backup is uploaded to the bucket with AWS_S3_BUCKET_NAME in that region.

  • What is the motivation / use case for changing the behavior?

Using this backup tool in regions other than us-east-1

Thanks for your work on what is a very comprehensive backup solution for docker volumes! 😃
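Until a dedicated region setting exists, one workaround that might be worth trying is pointing the existing AWS_ENDPOINT option at the regional S3 endpoint host (sketch only; whether the client then signs requests for the right region would need verifying):

AWS_ENDPOINT=s3.us-west-1.amazonaws.com
AWS_S3_BUCKET_NAME=<AWS_S3_BUCKET_NAME>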

Custom notification messages

  • I'm submitting a ...
    • bug report
    • feature request
    • support request

Right now there is no way of customizing error and success messages for notifications. As I have multiple backup containers, I struggle to identify at a glance which backup has run, since I have to read the log. Also I don't need the full log on success notifications.

It would be nice having the ability to customize these messages, while injecting useful information. Something like this:

SUCCESS_NOTIFICATION_MESSAGE={CONTAINER_LABEL} backup succeeded at {BACKUP_TIME}

Thoughts? If you're interested in implementing this feature I can produce a list of useful variables to be injected.

Open questions:

  1. Should the same feature be available for the notification title?
  2. Should it be possible to provide different messages for different shoutrrr URLs? (For example, send the full log via email but only a short message in Telegram.) For this one I'd argue no, since it would become too complex to configure by environment variables, but I just thought I'd put it out here.

Prune backups doesn't seem to acknowledge AWS_S3_PATH

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?

When using docker-volume-backup with Backblaze b2 (Region: eu-central; following the link you can create a free 10 GiB bucket) the backup retention logic doesn't seem to acknowledge the AWS_S3_PATH variable. I saw this message:

time="2022-04-13T11:36:38Z" level=warning msg="The current configuration would delete all 4 existing remote backup(s)."
time="2022-04-13T11:36:38Z" level=warning msg="Refusing to do so, please check your configuration."

This indicates that it wanted to delete all 4 "directories" in the bucket. But it should have looked into the directory specified with AWS_S3_PATH, where there is just 1 file.

  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.

docker-compose.yml:

version: '3.8'
services:
  backup:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_RETENTION_DAYS: 7
      AWS_ENDPOINT: s3.eu-central-003.backblazeb2.com
      AWS_S3_BUCKET_NAME: <bucketName>
      AWS_ACCESS_KEY_ID: <keyId>
      AWS_SECRET_ACCESS_KEY: <applicationKey>
      AWS_S3_PATH: mypath
    volumes:
      - testdata:/backup/testdata-backup:ro
volumes:
 testdata:
  • What is the expected behavior?

Executing docker-compose up -d and then docker-compose exec backup backup, I would expect the following structure on an empty bucket. This works as expected:

mypath/backup-2022-04-13T12-02-46.tar.gz

And I would expect the output of the command to be the following. This doesn't work as expected:

[...]
time="2022-04-13T11:52:18Z" level=info msg="None of 0 existing remote backup(s) were pruned."
[...]

Instead it will show this:

[...]
time="2022-04-13T11:48:45Z" level=warning msg="The current configuration would delete all 1 existing remote backup(s)."
time="2022-04-13T11:48:45Z" level=warning msg="Refusing to do so, please check your configuration."
[...]

It seems like the prune backups logic doesn't look in the path specified with AWS_S3_PATH but looks in the root of the bucket (if you create new folders or files at the root level the number in the message will increase).

  • What is the motivation / use case for changing the behavior?

This behavior is of course incorrect and potentially dangerous (the only reason why my bucket wasn't cleared is that there is a special check in the software that prevents deleting all files).

  • Please tell us about your environment:

    • Image version: latest (80635b23cb46)
    • Docker version: Docker version 20.10.14, build a224086
    • docker-compose version: docker-compose version 1.29.2, build 5becea4c
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

Maybe this is a special case with the Backblaze S3 API implementation. I haven't checked with AWS etc.

Setting PUID/PGID?

Would it be possible to add the ability to change PUID and PGID?

I'd like to put my local archive on a NAS, but my file permissions don't allow root to put the files there.

Insufficient permissions to access this file

Hi when I run

docker exec <container-id> backup (I haven't tried a cron backup yet) I see:

[INFO] Uploading backup to remote storage

Will upload to bucket "secret-bucket-name".
`offen-db-2021-08-19T04-02-01.tar.gz` -> `backup-target/secret-bucket-name/offen-db-2021-08-19T04-02-01.tar.gz`
mc: <ERROR> Failed to copy `offen-db-2021-08-19T04-02-01.tar.gz`. Insufficient permissions to access this file `https://s3.amazonaws.com/secret-bucket-name/offen-db-2021-08-19T04-02-01.tar.gz`
Total: 0 B, Transferred: 1.28 KiB, Speed: 10.33 KiB/s
Upload finished.

I am connecting to Amazon S3 with the root keys, so I presume I am connecting with bucket owner permissions yet I get that error.

The bucket is set to "Objects can be public" (The bucket is not public but anyone with appropriate permissions can grant public access to objects).

Any suggestions?

Azure Storage Support

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the expected behavior?

Enabling docker-volume-backup to back up to Azure Storage.

  • What is the motivation / use case for changing the behavior?

Azure Storage is a great solution (especially with cold storage) to backup Docker volumes to. I don't have any experience with Go but based on the AWS format I might be able to implement support for Azure Storage. Additionally, lots of professionals working with Microsoft technology have a subscription from work that they would like to use. (e.g. via a Visual Studio subscription)

SDK Docs: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#pkg-overview

Thanks for this awesome tool!

docker permissions

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?
    There is no documentation of which docker permissions are required.

  • What is the motivation / use case for changing the behavior?
    I'm wary of giving containers unrestricted access to the docker socket and would like to expose it via docker-socket-proxy and the DOCKER_HOST environment variable with limited permissions.
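A rough sketch of that proxy setup (the image and its permission flags are those of tecnativa/docker-socket-proxy and should be verified against its documentation; stopping and restarting containers requires the POST endpoints to be allowed):

docker network create backup-net
# expose only the container endpoints, with POST allowed, through the proxy
docker run -d --name socket-proxy --network backup-net \
  -e CONTAINERS=1 -e POST=1 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  tecnativa/docker-socket-proxy
# point the backup container at the proxy instead of mounting the socket
docker run -d --network backup-net \
  --env DOCKER_HOST=tcp://socket-proxy:2375 \
  -v data:/backup/data:ro \
  offen/docker-volume-backup:v2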

Error write too long

Hey this was running fine when I started it.
After missing some backups I checked the logs and found:

time="2022-06-08T00:00:34Z" level=info msg="Created backup of `/backup` at `/tmp/homeassistant-2022-06-08T00-00-00.tar.gz`."
time="2022-06-08T00:00:35Z" level=info msg="Stored copy of backup `/tmp/homeassistant-2022-06-08T00-00-00.tar.gz` in local archive `/archive`."
time="2022-06-08T00:00:35Z" level=info msg="None of 4 existing local backup(s) were pruned."
time="2022-06-08T00:00:35Z" level=info msg="Removed tar file `/tmp/homeassistant-2022-06-08T00-00-00.tar.gz`."
time="2022-06-08T00:00:35Z" level=info msg="Finished running backup tasks."
time="2022-06-09T00:00:39Z" level=error msg="Fatal error running backup: takeBackup: error compressing backup folder: createArchive: error creating archive: compress error writing /backup/homeassistant-backup/home-assistant_v2.db to archive: writeTarGz: error copying /backup/homeassistant-backup/home-assistant_v2.db to tar writer: archive/tar: write too long"
time="2022-06-09T00:00:39Z" level=info msg="Removed tar file `/tmp/homeassistant-2022-06-09T00-00-00.tar.gz`."
time="2022-06-10T00:00:47Z" level=info msg="Created backup of `/backup` at `/tmp/homeassistant-2022-06-10T00-00-00.tar.gz`."
time="2022-06-10T00:00:47Z" level=info msg="Stored copy of backup `/tmp/homeassistant-2022-06-10T00-00-00.tar.gz` in local archive `/archive`."
time="2022-06-10T00:00:47Z" level=info msg="None of 5 existing local backup(s) were pruned."
time="2022-06-10T00:00:47Z" level=info msg="Removed tar file `/tmp/homeassistant-2022-06-10T00-00-00.tar.gz`."
time="2022-06-10T00:00:47Z" level=info msg="Finished running backup tasks."
time="2022-06-11T00:00:52Z" level=error msg="Fatal error running backup: takeBackup: error compressing backup folder: createArchive: error creating archive: compress error writing /backup/homeassistant-backup/home-assistant_v2.db to archive: writeTarGz: error copying /backup/homeassistant-backup/home-assistant_v2.db to tar writer: archive/tar: write too long"
time="2022-06-11T00:00:52Z" level=info msg="Removed tar file `/tmp/homeassistant-2022-06-11T00-00-00.tar.gz`."
time="2022-06-12T00:01:18Z" level=error msg="Fatal error running backup: takeBackup: error compressing backup folder: createArchive: error creating archive: compress error writing /backup/homeassistant-backup/home-assistant_v2.db to archive: writeTarGz: error copying /backup/homeassistant-backup/home-assistant_v2.db to tar writer: archive/tar: write too long"
time="2022-06-12T00:01:18Z" level=info msg="Removed tar file `/tmp/homeassistant-2022-06-12T00-00-00.tar.gz`."

What is that "write too long" error telling me?

Stats export

#60 introduced the collection of stats about a backup run (e.g. size, duration, number of pruned items). These stats can currently be consumed in post-backup notifications, however it should also be possible to send these to arbitrary monitoring systems (e.g. Prometheus) so that users can monitor their backups on such systems.

I don't know too much about the monitoring landscape out there, but in case someone would like to use this feature, I'd be happy to hear from you about how this could work in your setup.

Enhancements to the tar.gz

One issue and one ask.

Ask - Is there a way to reference the Docker server hostname in the backup filename? E.g.
BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"
BACKUP_FILENAME="{hostname}%Y-%m-%dT%H-%M-%S.tar.gz"
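If the filename variable expansion mentioned in the v3 notes above is available in your image version, something like the following might work; the BACKUP_FILENAME_EXPAND name is an assumption on my side and should be checked against the docs:

BACKUP_FILENAME_EXPAND=true
BACKUP_FILENAME=backup-$HOSTNAME-%Y-%m-%dT%H-%M-%S.tar.gz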

Issue - I have a volume mounted for /archive in my docker-compose file like this:
- mnt/backups:/archive
When I run a backup and check the folder on the docker server, the files look like this: _PNSM6~H. If I look at the /archive folder from within the container, the tar.gz is named appropriately. How can I access the tar.gz files outside of the container?

Thank you

Support for timezones?

I can see that it runs with UTC time by default.
This makes it difficult to time everything to specific local times (especially with daylight saving time).
I suggest adding support for setting the timezone it runs under via the environment.
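As an interim measure, the compose examples elsewhere in this tracker mount the host's timezone files into the container, which should make the cron schedule follow local time; a sketch:

docker run -d \
  -v /etc/timezone:/etc/timezone:ro \
  -v /etc/localtime:/etc/localtime:ro \
  -v data:/backup/data:ro \
  -v /path/to/local_backups:/archive \
  --env BACKUP_CRON_EXPRESSION="0 3 * * *" \
  offen/docker-volume-backup:v2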

Flag to avoid backing up long-stopped containers

It would be nice to be able to add a flag (either in the config or in the container labels) to avoid making a backup if the container is stopped AND it was last stopped before the latest backup.

My use case is this: I have a game server container that I start on demand. There may be periods on which it'll be used almost every day and periods of months when it will be permanently stopped. I would like to run backups only when there are actual changes to the state of the container. When used in conjunction with backup pruning this also avoids having N backups of the same exact thing and allows to keep the last N states.

By running docker inspect I found that there are the StartedAt and FinishedAt properties, so it should be relatively easy to pull this info up and compare it with the last backup date.
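For reference, those timestamps can be read directly from the Docker CLI (the container name is a placeholder):

docker inspect --format '{{.State.StartedAt}} {{.State.FinishedAt}}' my-game-server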

PaxHeaders.0 throughout my tar file

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?
    In every folder, there is a "PaxHeaders.0" folder that duplicates files from the original folder.
    foobar.app/foobar files
    foobar.app/PaxHeaders.0/foobar files

  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.
    The container runs fine on my Ubuntu server, but this PaxHeaders.0 issue occurs on my Synology running the same container and same env. The Synology has the latest updates. DS1821+.

  • What is the expected behavior?
    No PaxHeaders.0. Is there a way to exclude this somehow? It's in every folder that I back up.

  • What is the motivation / use case for changing the behavior?

  • Please tell us about your environment:

    • Image version: 2.15
    • Docker version: Latest supported on the synology
      docker version
      Client:
      Version: 20.10.3
      API version: 1.41
      Go version: go1.15.13
      Git commit: b455053
      Built: Thu Aug 19 07:13:24 2021
      OS/Arch: linux/amd64
      Context: default
      Experimental: true

Server:
 Engine:
  Version: 20.10.3
  API version: 1.41 (minimum version 1.12)
  Go version: go1.15.13
  Git commit: a3bc36f
  Built: Thu Aug 19 07:11:25 2021
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: v1.4.3
  GitCommit: ea3508454ff2268c32720eb4d2fc9816d6f75f88
 runc:
  Version: v1.0.0-rc93
  GitCommit: 31cc25f16f5eba4d0f53e35374532873744f4b31
 docker-init:
  Version: 0.19.0
  GitCommit: ed96d00

  • docker-compose version: Latest supported on the synology

docker-compose version
docker-compose version 1.28.5, build 324b023a
docker-py version: 4.4.4
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019

  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

Can be used for local backup?

Thanks for the development of this tool, but I wonder if there is a way to make the backup locally, like futurice/docker-volume-backup does, while keeping the file rotation?
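For reference, the quickstart at the top shows that mounting a directory to /archive keeps a local copy, and BACKUP_RETENTION_DAYS handles rotation; a minimal one-off sketch:

docker run --rm \
  -v data:/backup/data:ro \
  -v /path/to/local_backups:/archive \
  --env BACKUP_RETENTION_DAYS="7" \
  --entrypoint backup \
  offen/docker-volume-backup:v2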

Better Pruning

I like this straightforward solution to backup my Docker Volumes.

Unfortunately, "the mechanism used for pruning old backups is not very sophisticated"; some kind of policy-based pruning would be nice.

Any thoughts on whether it's possible/doable to implement something like this?

Build for ARM Architectures

I'd love to use this to back up some projects I've got running on Raspberry Pi's. Would it be possible to switch to Docker buildx and publish for arm/v7 and arm64 as well?
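For context, a multi-arch publish with buildx typically looks something like this (the tag and platform list are illustrative only):

docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t offen/docker-volume-backup:latest \
  --push .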

Support other means of providing access to the Docker Daemon

Right now, a Docker client will only be created when the socket is mounted into the container using a hard check for /var/run/docker.sock:

_, err := os.Stat("/var/run/docker.sock")
if !os.IsNotExist(err) {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return nil, fmt.Errorf("newScript: failed to create docker client")
	}
	s.cli = cli
}

Users might want to use other means of providing this (e.g. a proxied HTTP service, by setting DOCKER_HOST).

Instead of hard coding the socket location, we could:

  • research which means of providing access would be supported by calling client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
  • add all of them to the conditional

Pre/Post backup routine scripts

  • I'm submitting a ...

    • bug report
    • feature request
    • support request
  • What is the current behavior?

  • No ability to reference pre and post backup scripts

  • If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.

  • What is the expected behavior?

  • What is the motivation / use case for changing the behavior?
    Ability to reference scripts as part of the backup routine. For example, I could reference a database script to create a dump and then have the docker-volume-backup tool back up the flat file, all while leaving the database running. Pi-hole is also a current use case I have, as I don't want to take it down if I don't have to. Referencing a simple custom pre-backup script could solve this.

#!/bin/sh
# This script will create a Pi-hole backup in the directory it is run in.

# Change to mapped directory
cd /backup

# Run Backup
pihole -a -t &

# Record the process id and wait
process_id=$!
wait $process_id
echo "Backup complete with status $?"
  • Please tell us about your environment:

    • Image version:latest
    • Docker version: 20.10.12
    • docker-compose version: 1.29.2
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)
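The v3 notes earlier in this tracker mention exec-pre / exec-post labels, which suggests something like this may already be possible; a hedged sketch assuming the label name docker-volume-backup.exec-pre and that the command is executed inside the labeled container (please verify against the docs for your version):

# hypothetical: run the Pi-hole export inside the labeled container before the archive is taken
docker run -d --name pihole \
  --label docker-volume-backup.exec-pre="pihole -a -t" \
  -v pihole-data:/etc/pihole \
  pihole/pihole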
