
docker-db-backup's People

Contributors

alexbarcelo, alwynpan, benvia, claudioaltamura, eoehen, greenatwork, jacksgt, james-song, joergmschulz, mark-monteiro, melwinkfr, milenkara, oscarsiles, pascalberger, piemonkey, pimjansen, sbrunecker, simoninops, skylord123, steve-todorov, teenigma, teun95, the1ts, thomas-negrault, tiredofit, tito, toshy, tpansino, vanzhiganov, zicklag


docker-db-backup's Issues

s3 glacier

First of all, thanks for such a great tool. Local backups are working great!

I configured a backup job to backup to s3 glacier. The backups are created in the /tmp/backups folder as they should be, but the upload to s3 fails with error:

{"code":"MissingParameterValueException","message":"Required parameter missing: API version","type":"Client"}

According to: https://docs.aws.amazon.com/amazonglacier/latest/dev/api-common-request-headers.html

We are missing the header 'x-amz-glacier-version'. The current API version is 2012-06-01.
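For illustration, a raw request sketch showing the header in question (the SigV4 Authorization header is elided, so this is not runnable as-is):

    # every Glacier REST call must carry the API-version header
    curl -s "https://glacier.us-east-1.amazonaws.com/-/vaults" \
        -H "x-amz-glacier-version: 2012-06-01" \
        -H "Authorization: AWS4-HMAC-SHA256 ..."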

Postgres-based TimescaleDB back-up difficulties

First of all, huge thanks for making this tool available and helping to get rid of the back-up hassle!

Currently I'm trying to use it to back up a TimescaleDB instance, but I'm encountering some issues that I hope you can help me with:

[NOTICE] ** [db-backup] Compressing backup with gzip
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   chunk
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
[NOTICE] ** [db-backup] Generating MD5 for pgsql_exsyn_TimescaleDB_20210121-114348.sql.gz
[NOTICE] ** [db-backup] Backup of pgsql_exsyn_TimescaleDB_20210121-114348.sql.gz created with the size of 1572 bytes
[INFO] ** [db-backup] Dumping database: decide
[NOTICE] ** [db-backup] Compressing backup with gzip
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   chunk
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
(the NOTICE/DETAIL/HINT block above repeats five times in total)
[NOTICE] ** [db-backup] Generating MD5 for pgsql_decide_TimescaleDB_20210121-114348.sql.gz
[NOTICE] ** [db-backup] Backup of pgsql_decide_TimescaleDB_20210121-114348.sql.gz created with the size of 229055344 bytes
[NOTICE] ** [db-backup] Sending Backup Statistics to Zabbix

Is there a possibility to force a full pgsql dump to circumvent this?

Native backup support for TimescaleDB would of course be even better, but I'm not sure how to implement their guidelines with your tool: https://docs.timescale.com/latest/using-timescaledb/backup

Please let me know if you need any additional info to provide a meaningful answer.

Thanks,

Rob

mySQL/MariaDB - Backup fails when password contains spaces

Hi, first and foremost, thank you for the great work on this container!

I have four instances of this container installed, 3 backing up successfully, 1 not.
2 x MariaDB - NO spaces in password - successful
1 x Postgres - spaces IN password - successful
1 x MariaDB - spaces IN password - NOT successful.

I have enabled debug mode and am able to reproduce the behaviour by copying the mysql command (below) from the debug output and executing it from within the db-backup container.

(I have changed the words used for the password, but have maintained the length of each individual word, and the case, type of character used.)

Once I place quotes around the password, the command executes as expected.
Without them, the components of the password after the first space are treated as options being passed to mysql and error out as unsupported (unrecognised) options.

mysql -umonicauser -P 3306 -h monica_mariadb -pgreen balloon mauve buster -e 'SELECT COUNT(*) FROM information_schema.FILES;'
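To illustrate (password words altered as described above):

    # fails: everything after the first space is parsed as separate arguments
    mysql -umonicauser -P 3306 -h monica_mariadb -pgreen balloon mauve buster -e 'SELECT COUNT(*) FROM information_schema.FILES;'

    # works: the whole password is passed as a single argument
    mysql -umonicauser -P 3306 -h monica_mariadb -p'green balloon mauve buster' -e 'SELECT COUNT(*) FROM information_schema.FILES;'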

Segfault

Does this still work? I'm seeing this in my logs:

TARGET=db_phpipam_mariadb_20200303-064449.sql
+ '[' FALSE = TRUE ']'
+ mysqldump --max-allowed-packet=512M -A -h mariadb -uroot -p
./run: line 290: 1281 Segmentation fault mysqldump --max-allowed-packet=512M -A -h $DBSERVER -u$DBUSER -p$DBPASS > ${TMPDIR}/${TARGET}
+ '[' FALSE = TRUE ']'

InfluxDB - Backup of multiple databases fails

The script contains a for loop over the $DB_NAME values, presumably to support backing up multiple InfluxDB databases, but target is never set inside each iteration. So the backup fails with: stat: can't stat '/tmp/backups/influx_home_assistant telegraf_192.168.1.199_20201110-233711.sql': No such file or directory.

Adding target=influx_${DB}_${dbhost}_${now} solved the problem for me.
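For reference, a minimal sketch of the loop with the missing assignment added (variable casing follows the fix above; the actual surrounding script may differ):

    for DB in $DB_NAME; do
        target=influx_${DB}_${dbhost}_${now}    # this per-iteration assignment was missing
        influxd backup -database $DB -host $dbhost ${tmpdir}/${target}
    done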

Azure support

Good day

I would just like to find out whether support for Azure is on the horizon.

Cheers

Can't access Database after update

Hi :-) I recently updated to your latest container, and since then I get:

Database not accessible - retrieving

Are Docker secrets still supported? I'm running in swarm...

Unable to run backup

Using the unRaid Docker container (both the CA app store one and a custom one built with the latest release from Docker Hub), I get the following error when attempting to run 'backup-now':

/usr/local/bin/backup-now: line 4: /etc/s6/services/10-db-backup/run: No such file or directory

Is this a configuration error on my part? I am attempting to back up a mongo database from an external host.

wrong timezone

Hi,
I have a question: the timezone is wrong even if I set the environment variable TZ="Europe/Paris" and even if I add tzdata in the Dockerfile... What can I do? Thanks for your help!

[Bug] Docker secrets not working as of v2.1.1

I inject my DB password via Docker Secrets, and noticed that this is no longer working as of v2.1.1.

Looking at the code, the root cause appears to be a result of #43. Specifically, this line:

[[ ( -n "${DB_PASS}" ) ]] && file_env 'DB_PASS'

This neglects to check whether DB_PASS_FILE is also set, in which case the password still needs to be read and injected.
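A minimal sketch of the kind of fix I have in mind (assuming file_env follows the usual pattern of reading the *_FILE variant into the plain variable):

    # also trigger file_env when only the _FILE variant is provided
    [[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'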

I'll open a PR for this shortly.


This was a surprising regression, and I only caught it because I have monitoring that indicates my backups haven't succeeded for X amount of time. @tiredofit, just curious if you've thought about any plans for a test suite, or a checklist for the manual testing that should go into each release?

In any case, thanks for the tool 👍

Backups fail with compression

Default compression (GZ) works.
XZ fails - output file size is 0.
I then tried ZSTD just to give you additional info.
ZSTD works OK - so perhaps it's just a problem with XZ?

Allow multiple backup targets

You can define lists in environment variables. This way we could use the image to back up multiple targets without needing to run the container multiple times.

"MessageBusList:0:Name": "Test"
"MessageBusList:0:UserName": demo

Make it possible to run the container with another user

I can't find any documentation on how to run this container as another user. I would like to run it with the --user flag; with security in mind, running as root is not a good idea.
When running with --user, I get a lot of permission errors.
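For reference, the kind of invocation meant here (a sketch; today this produces the permission errors described above):

    # run the container as a non-root user
    docker run --user 1000:1000 -v "$(pwd)/backups:/backup" tiredofit/db-backup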

Docker build fails with the following error

  • apk add --virtual .db-backup-run-deps bzip2 mongodb-tools mariadb-client libressl pigz postgresql postgresql-client redis xz
    ERROR: unsatisfiable constraints:
    mongodb-tools (missing):

neither DB_USER nor DB_USER_FILE are set but are required

On version 1.21.3 I get this error when running a backup of a MongoDB instance without authentication.

bash-5.0# backup-now
** Performing Manual Backup
[ERROR] ** [db-backup] error: neither DB_USER nor DB_USER_FILE are set but are required

MYSQL_PASS_STR erroneous

Connecting to the MySQL database is not possible, but after changing MYSQL_PASS_STR from

        [[ ( -n "${DB_PASS}" ) ]] && MYSQL_PASS_STR=" -p'${DBPASS}'"

to

        [[ ( -n "${DB_PASS}" ) ]] && MYSQL_PASS_STR=" -p${DBPASS}"

all works fine
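The likely explanation: when $MYSQL_PASS_STR is later expanded into the command line, the shell performs word splitting but not quote removal, so the single quotes become literal characters in the password. A quick demonstration with a hypothetical value:

    DBPASS=secret
    MYSQL_PASS_STR=" -p'${DBPASS}'"
    echo mysql${MYSQL_PASS_STR}
    # prints: mysql -p'secret' -- the server would receive 'secret', quotes included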

`DB_USER` and `DB_PASS` became required for all database types

DB_USER and DB_PASS were optional for mongodb, and DB_PASS was optional for many other database types. However, due to recent changes, they have become required for all database types. Failing to supply either one will fail the backup. See the error:

[ERROR] ** [db-backup] error: neither DB_USER nor DB_USER_FILE are set but are required
[ERROR] ** [db-backup] error: neither DB_PASS nor DB_PASS_FILE are set but are required

This can be fixed either upstream or in this repo. I will work on a temporary fix.

Postgres Backup fails with user "root"

I have this docker-compose file:

version: '3.1'

services:
  # ...SOME OTHER SERVICES...

  pg:
    image: postgres:11
    container_name: pg-my-service-prod
    restart: always
    volumes:
      - $HOME/data/postgres-my-service-prod:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=myservicePROD
    ports:
      - 127.0.0.1:5432:5432
    networks:
      - my-service-prod

  db-backup:
    image: tiredofit/db-backup
    restart: always
    container_name: db-backup
    depends_on:
      - pg
    volumes:
      - $HOME/data/backups:/backup
    environment:
      - DB_TYPE=pgsql
      - DB_HOST=pg
      - DB_NAME=myservicePROD
      - DB_PORT=5432
      - DB_USER="root"
      - DB_PASS="root"
      # - DB_DUMP_BEGIN=0415
      - DB_DUMP_BEGIN=+01
      - MD5=TRUE
      - SPLIT_DB=TRUE
      - DEBUG_MODE=TRUE
    networks:
      - my-service-prod

  # ...SOME OTHER SERVICES...

networks:
    my-service-prod:

The backup fails with this message:

db-backup     | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
db-backup     | [s6-init] ensuring user provided files have correct perms...exited 0.
db-backup     | [fix-attrs.d] applying ownership & permissions fixes...
db-backup     | [fix-attrs.d] 01-s6: applying...
db-backup     | [fix-attrs.d] 01-s6: exited 0.
db-backup     | [fix-attrs.d] 02-zabbix: applying...
db-backup     | [fix-attrs.d] 02-zabbix: exited 0.
db-backup     | [fix-attrs.d] 03-logrotate: applying...
db-backup     | [fix-attrs.d] 03-logrotate: exited 0.
db-backup     | [fix-attrs.d] done.
db-backup     | [cont-init.d] executing container initialization scripts...
db-backup     | [cont-init.d] 01-permissions: executing...
db-backup     | + DEBUG_PERMISSIONS=FALSE
db-backup     | + ENABLE_PERMISSIONS=TRUE
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + varenvusername=(`env | grep USER_ | awk -F= '{print tolower($1)}' | awk -F_ '{print $2}'`)
db-backup     | ++ grep USER_
db-backup     | ++ awk -F_ '{print $2}'
db-backup     | ++ awk -F= '{print tolower($1)}'
db-backup     | ++ env
db-backup     | + varenvuid=(`env | grep USER_ | awk -F= '{print tolower($2)}'`)
db-backup     | ++ env
db-backup     | ++ awk -F= '{print tolower($2)}'
db-backup     | ++ grep USER_
db-backup     | ++ echo ''
db-backup     | ++ sed 's/ /\\|/g'
db-backup     | + strusers=
db-backup     | + [[ ! -z '' ]]
db-backup     | + '[' FALSE = TRUE ']'
db-backup     | + '[' FALSE = true ']'
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + echo '**** [permissions] [debug] Users (varenvusername) from Docker env are: '
db-backup     | + echo '**** [permissions] [debug] UIDs (varenvuid) from Docker env are: '
db-backup     | + echo '**** [permissions] [debug] The string (strusers) used to grep the users is: '
db-backup     | + echo '**** [permissions] [debug] Users (varpassuser) from /etc/passwd are: '
db-backup     | + echo '**** [permissions] [debug] UIDs (varpassuserid) from /etc/passwd are: '
db-backup     | + counter=0
db-backup     | + '[' 0 -gt 0 ']'
db-backup     | + counter=0
db-backup     | + varenvgroupname=(`env | grep ^GROUP_ | grep -v GROUP_ADD_  | awk -F= '{print tolower($1)}' | awk -F_ '{print $2}'`)
db-backup     | **** [permissions] [debug] Users (varenvusername) from Docker env are:
db-backup     | **** [permissions] [debug] UIDs (varenvuid) from Docker env are:
db-backup     | **** [permissions] [debug] The string (strusers) used to grep the users is:
db-backup     | **** [permissions] [debug] Users (varpassuser) from /etc/passwd are:
db-backup     | **** [permissions] [debug] UIDs (varpassuserid) from /etc/passwd are:
db-backup     | ++ grep '^GROUP_'
db-backup     | ++ awk -F= '{print tolower($1)}'
db-backup     | ++ grep -v GROUP_ADD_
db-backup     | ++ env
db-backup     | ++ awk -F_ '{print $2}'
db-backup     | + varenvgid=(`env | grep ^GROUP_ | grep -v GROUP_ADD_ | awk -F= '{print tolower($2)}'`)
db-backup     | ++ env
db-backup     | ++ ++ ++ grep '^GROUP_'
db-backup     | grep -v GROUP_ADD_
db-backup     | awk -F= '{print tolower($2)}'
db-backup     | ++ sed 's/ /\\|/g'
db-backup     | ++ echo ''
db-backup     | + strgroups=
db-backup     | + [[ ! -z '' ]]
db-backup     | + '[' FALSE = TRUE ']'
db-backup     | + '[' FALSE = true ']'
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + echo '**** [permissions] [debug] Group names (varenvgroupname) from Docker environment settings are: '
db-backup     | + echo '**** [permissions] [debug] GIDs (grvarenvgid) from Docker environment settings are: '
db-backup     | + echo '**** [permissions] [debug] The string (strgroup) used to grep the groups is: '
db-backup     | + echo '**** [permissions] [debug] Group names (vargroupname) from /etc/group are: '
db-backup     | + echo '**** [permissions] [debug] GIDs (vargroupid) from /etc/group are: '
db-backup     | + '[' 0 -gt 0 ']'
db-backup     | + counter=0
db-backup     | + varenvuser2add=(`env | grep ^GROUP_ADD_ | awk -F= '{print $1}' | awk -F_ '{print tolower($3)}'`)
db-backup     | **** [permissions] [debug] Group names (varenvgroupname) from Docker environment settings are:
db-backup     | **** [permissions] [debug] GIDs (grvarenvgid) from Docker environment settings are:
db-backup     | **** [permissions] [debug] The string (strgroup) used to grep the groups is:
db-backup     | **** [permissions] [debug] Group names (vargroupname) from /etc/group are:
db-backup     | **** [permissions] [debug] GIDs (vargroupid) from /etc/group are:
db-backup     | ++ env
db-backup     | ++ grep '^GROUP_ADD_'
db-backup     | ++ awk -F_ '{print tolower($3)}'
db-backup     | ++ awk -F= '{print $1}'
db-backup     | + varenvdestgroup=(`env | grep ^GROUP_ADD_ | awk -F= '{print tolower($2)}'`)
db-backup     | ++ env
db-backup     | ++ grep ++ awk -F= '{print tolower($2)}'
db-backup     | '^GROUP_ADD_'
db-backup     | + '[' FALSE = TRUE ']'
db-backup     | + '[' FALSE = true ']'
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + echo '**** [permissions] [debug] Users (varenvuser2add) to add to groups are: '
db-backup     | + echo '**** [permissions] [debug] Groups (varenvdestgroup) to add users are: '
db-backup     | **** [permissions] [debug] Users (varenvuser2add) to add to groups are:
db-backup     | **** [permissions] [debug] Groups (varenvdestgroup) to add users are:
db-backup     | + mkdir -p /tmp/state
db-backup     | ++ basename /var/run/s6/etc/cont-init.d/01-permissions
db-backup     | + touch /tmp/state/01-permissions-init
db-backup     | [cont-init.d] 01-permissions: exited 0.
db-backup     | [cont-init.d] 02-zabbix: executing...
db-backup     | [cont-init.d] 02-zabbix: exited 0.
db-backup     | [cont-init.d] 03-cron: executing...
db-backup     | **** [cron] Disabling Cron
db-backup     | [cont-init.d] 03-cron: exited 0.
db-backup     | [cont-init.d] 04-smtp: executing...
db-backup     | **** [smtp] [debug] SMTP Mailcatcher Enabled at Port 1025, Visit http://127.0.0.1:8025 for Web Interface
db-backup     | **** [smtp] Disabling SMTP Features
db-backup     | [cont-init.d] 04-smtp: exited 0.
db-backup     | [cont-init.d] 99-container-init: executing...
db-backup     | [cont-init.d] 99-container-init: exited 0.
db-backup     | [cont-init.d] done.
db-backup     | [services.d] starting services
db-backup     | [services.d] done.
db-backup     |
db-backup     | ** [zabbix] Starting Zabbix Agent
db-backup     | 2019/06/30 12:44:43 Using in-memory storage
db-backup     | 2019/06/30 12:44:43 [SMTP] Binding to address: 0.0.0.0:1025
db-backup     | [HTTP] Binding to address: 0.0.0.0:8025
db-backup     | 2019/06/30 12:44:43 Serving under http://0.0.0.0:8025/
db-backup     | Creating API v1 with WebPath:
db-backup     | Creating API v2 with WebPath: (interleaved, garbled log output from another container omitted)
db-backup     | + '[' '!' -n pgsql ']'
db-backup     | + '[' '!' -n pg ']'
db-backup     | + COMPRESSION=GZ
db-backup     | + PARALLEL_COMPRESSION=TRUE
db-backup     | + DB_DUMP_FREQ=1440
db-backup     | + DB_DUMP_BEGIN=+01
db-backup     | + DB_DUMP_TARGET=/backup
db-backup     | + DBHOST=pg
db-backup     | + DBNAME=myservicePROD
db-backup     | + DBPASS='"root"'
db-backup     | + DBUSER='"root"'
db-backup     | + DBTYPE=pgsql
db-backup     | + MD5=TRUE
db-backup     | + SPLIT_DB=TRUE
db-backup     | + TMPDIR=/tmp/backups
db-backup     | + '[' '' = NOW ']'
db-backup     | + '[' TRUE = 'TRUE ' ']'
db-backup     | + BZIP=bzip2
db-backup     | + GZIP=gzip
db-backup     | + XZIP=xz
db-backup     | + case "$DBTYPE" in
db-backup     | + DBTYPE=pgsql
db-backup     | + DBPORT=5432
db-backup     | + [[ -n "root" ]]
db-backup     | + POSTGRES_PASS_STR='PGPASSWORD="root"'
db-backup     | ++ date
db-backup     | + echo '** [db-backup] Initialized at at Sun' Jun 30 12:44:51 PDT 2019
db-backup     | ** [db-backup] Initialized at at Sun Jun 30 12:44:51 PDT 2019
db-backup     | ++ date +%s
db-backup     | + current_time=1561923891
db-backup     | ++ date +%Y%m%d
db-backup     | + today=20190630
db-backup     | + [[ +01 =~ ^\+(.*)$ ]]
db-backup     | + waittime=60
db-backup     | + sleep 60
db-backup     | [APIv1] KEEPALIVE /api/v1/events
db-backup     | + true
db-backup     | + mkdir -p /tmp/backups
db-backup     | ++ date +%Y%m%d-%H%M%S
db-backup     | + now=20190630-124551
db-backup     | + TARGET=pgsql_myservicePROD_pg_20190630-124551.sql
db-backup     | + case "$DBTYPE" in
db-backup     | + check_availability
db-backup     | + case "$DBTYPE" in
db-backup     | + COUNTER=0
db-backup     | + export 'PGPASSWORD="root"'
db-backup     | + PGPASSWORD='"root"'
db-backup     | + pg_isready --dbname=myservicePROD --host=pg --port=5432 '--username="root"' -q
db-backup     | + backup_pgsql
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | + export 'PGPASSWORD="root"'
db-backup     | + PGPASSWORD='"root"'
db-backup     | ++ psql -h pg -U '"root"' -p 5432 -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;'
db-backup     | psql: FATAL:  password authentication failed for user ""root""
db-backup     | + DATABASES=
db-backup     | + '[' TRUE = TRUE ']'
db-backup     | ++ stat -c%s /backup/pgsql_myservicePROD_pg_20190630-124551.sql
db-backup     | stat: can't stat '/backup/pgsql_myservicePROD_pg_20190630-124551.sql': No such file or directory
db-backup     | + zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o
db-backup     | zabbix_sender [759]: option requires an argument -- o
db-backup     | usage:
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host] [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] -k key -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] [-T] [-r] -i input-file
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host
db-backup     |                 --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host]
db-backup     |                 --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file -k key
db-backup     |                 -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect cert --tls-ca-file CA-file
db-backup     |                 [--tls-crl-file CRL-file]
db-backup     |                 [--tls-server-cert-issuer cert-issuer]
db-backup     |                 [--tls-server-cert-subject cert-subject]
db-backup     |                 --tls-cert-file cert-file --tls-key-file key-file [-T] [-r]
db-backup     |                 -i input-file
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] -s host
db-backup     |                 --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file -k key -o value
db-backup     |   zabbix_sender [-v] -z server [-p port] [-I IP-address] [-s host]
db-backup     |                 --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file [-T] [-r] -i input-file
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file -k key -o value
db-backup     |   zabbix_sender [-v] -c config-file [-z server] [-p port] [-I IP-address]
db-backup     |                 [-s host] --tls-connect psk --tls-psk-identity PSK-identity
db-backup     |                 --tls-psk-file PSK-file [-T] [-r] -i input-file
db-backup     |   zabbix_sender -h
db-backup     |   zabbix_sender -V
db-backup     | ++ date -r /backup/pgsql_myservicePROD_pg_20190630-124551.sql +%s
db-backup     | date: can't stat '/backup/pgsql_myservicePROD_pg_20190630-124551.sql': No such file or directory
db-backup     | + zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o
db-backup     | zabbix_sender [761]: option requires an argument -- o
db-backup     | (usage output identical to the above, omitted)
db-backup     | + [[ -n '' ]]
db-backup     | + '[' '' = TRUE ']'
db-backup     | + sleep 86400

It looks like it ignores the DB name in the command-generation phase and is therefore trying to connect to the root DB. I tried with ', with ", and also with nothing surrounding the DB name, user, and password.
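Incidentally, and possibly related: the debug output above shows DBUSER='"root"', and psql reports password authentication failed for user ""root"". In a compose environment list, everything after the = is the value, so the quotes are passed through literally:

    # what the shell inside the container ends up seeing with DB_USER="root" in compose:
    DB_USER='"root"'        # the value includes literal quote characters
    # versus the unquoted compose entry DB_USER=root:
    DB_USER=root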

MD5 not working with influxdb backups

Summary

The influxdb backup sets $target to be a directory. md5sum doesn't work on directories, only on files.

generate_md5() {
    if var_true "$MD5" ; then
        print_notice "Generating MD5 for ${target}"
        cd $tmpdir
        md5sum "${target}" > "${target}".md5
        MD5VALUE=$(md5sum "${target}" | awk '{ print $1}')
    fi
}

Steps to reproduce

Back up any influxdb and enable md5

What is the expected correct behavior?

md5 generated for each file in the backup set

Relevant logs and/or screenshots

2021/05/14 11:28:50 /tmp/backups/influx_varken_192.168.111.11_20210514-105927/20210514T152841Z.s3140.tar.gz
2021/05/14 11:28:50 /tmp/backups/influx_varken_192.168.111.11_20210514-105927/20210514T152841Z.manifest
[NOTICE] ** [db-backup] Generating MD5 for influx_varken_192.168.111.11_20210514-105927
md5sum: can't read 'influx_varken_192.168.111.11_20210514-105927': Is a directory
md5sum: can't read 'influx_varken_192.168.111.11_20210514-105927': Is a directory

Environment

Unraid running tiredofit/db-backup:latest

Possible fixes

Add a different MD5 section to the run script for influxdb (and any other DBs that produce multi-file backup sets).
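A minimal sketch of what such a section could look like (hypothetical; names follow the generate_md5 function above):

    if [ -d "${tmpdir}/${target}" ]; then
        # multi-file backup set: hash every file inside the directory
        find "${tmpdir}/${target}" -type f -exec md5sum {} \; > "${tmpdir}/${target}.md5"
    else
        md5sum "${tmpdir}/${target}" > "${tmpdir}/${target}.md5"
    fi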

Cannot backup PostgreSQL 13

[INFO] ** [db-backup] Dumping database: postgres
[NOTICE] ** [db-backup] Compressing backup with gzip
pg_dump: error: server version: 13.0 (Debian 13.0-1.pgdg100+1); pg_dump version: 12.4
pg_dump: error: aborting because of server version mismatch
[NOTICE] ** [db-backup] Generating MD5 for pgsql_postgres_10.0.0.10_20201008-113031.sql.gz
[NOTICE] ** [db-backup] Backup of pgsql_postgres_10.0.0.10_20201008-113031.sql.gz created with the size of 20 bytes

Way to Notify on Backup Failure?

Hey there, I'm looking to use this container, but I would like a way to run a simple curl command to notify my webhook when a backup fails. Is there a way to detect a backup failure in the post-script?

Edit: I see that you have SMTP set up in the base image. Is there a way to just have it send an email when the backup fails? That would also work, though the webhook would still be preferable.
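For illustration only, here is a post-script sketch under the assumption that the script receives the backup's exit code as its first argument (whether that's actually exposed is part of what I'm asking):

    #!/bin/bash
    # hypothetical: $1 = exit code of the last backup run (an assumption, not confirmed)
    if [ "${1:-0}" -ne 0 ]; then
        curl -fsS -X POST -d "db-backup failed with exit code $1" https://example.com/webhook
    fi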

Script is inconsistent for mysql and pgsql

The following two functions work differently and I think we should make sure they work the same:

backup_mysql() {
    if var_true "$SPLIT_DB" ; then
        DATABASES=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema)
        for db in $DATABASES; do
            if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
                print_notice "Dumping MariaDB database: $db"
                target=mysql_${db}_${dbhost}_${now}.sql
                compression
                mysqldump --max-allowed-packet=512M -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} --databases $db | $dumpoutput > ${tmpdir}/${target}
                exit_code=$?
                generate_md5
                move_backup
            fi
        done
    else
        compression
        mysqldump --max-allowed-packet=512M -A -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
        exit_code=$?
        generate_md5
        move_backup
    fi
}

backup_pgsql() {
    if var_true $SPLIT_DB ; then
        export PGPASSWORD=${dbpass}
        authdb=${DB_USER}
        [ -n "${DB_NAME}" ] && authdb=${DB_NAME}
        DATABASES=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
        for db in $DATABASES; do
            print_info "Dumping database: $db"
            target=pgsql_${db}_${dbhost}_${now}.sql
            compression
            pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
            exit_code=$?
            generate_md5
            move_backup
        done
    else
        export PGPASSWORD=${dbpass}
        compression
        pg_dump -h ${dbhost} -U ${dbuser} -p ${dbport} ${dbname} ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
        exit_code=$?
        generate_md5
        move_backup
    fi
}

Case 1:

  • In mysql setting DB_NAME has no effect
  • In pgsql you need to set DB_NAME to make the export work (*)

Case 2:

  • In mysql you can export a file with all DBs when setting SPLIT_DB: FALSE
  • In pgsql it will only export the DB set in DB_NAME when setting SPLIT_DB: FALSE

Case 3:

  • In mysql you can export each DB in its own file when setting SPLIT_DB: TRUE
  • In pgsql you can export each DB in its own file when setting SPLIT_DB: TRUE - and DB_NAME to the auth DB (*)

(*) In the case of SPLIT_DB: FALSE, or if POSTGRES_DB was manually set in the postgres image and is not equal to the user name POSTGRES_USER. When POSTGRES_DB is not specified, the value of POSTGRES_USER is used.


My proposal is to introduce 3 backup options so that the behaviour is consistent (a rough switch is sketched after this list):

  • Export single database
  • Export all DBs in single file
  • Export all DBs in multiple files

Alternatively we should add a segment to the README explaining the difference in functionality so that it is predictable.
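To make the proposal concrete, a rough sketch of such a switch (the option name and helper functions are invented for illustration; nothing like this exists in the script today):

    # hypothetical DB_BACKUP_SCOPE option: single | all-in-one | split
    case "${DB_BACKUP_SCOPE}" in
        single)     backup_single_db "${DB_NAME}" ;;     # export one database
        all-in-one) backup_all_dbs_single_file ;;        # all DBs in a single file
        split)      backup_all_dbs_separate_files ;;     # one file per DB
    esac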

Support for raspberrypi (4)

Hey =) As your container is the best solution available for Docker, it would be really cool if you supported Raspberry Pi images too. =)

Container won't stop (stalling on "syncing disks")

I cannot stop the container.

When I run docker stop image_name, the docker-compose logs show:

example_db_backup_1 | [cont-finish.d] executing container finish scripts...
example_db_backup_1 | [cont-finish.d] done.
example_db_backup_1 | [s6-finish] syncing disks.

Steps to reproduce:

git clone https://github.com/tiredofit/docker-db-backup.git
cd docker-db-backup/examples
docker-compose --project-name dbbackuptest up

Both containers seem to start OK as per the docker-compose output:

example-db           | Initializing database
example-db           | 2018-11-05 15:49:04 0 [Warning] InnoDB: Failed to set O_DIRECT on file./ibdata1; CREATE: Invalid argument, continuing anyway. O_DIRECT is known to result in 'Invalid argument' on Linux on tmpfs, see MySQL Bug#26662.
example-db-backup    | ./run: line 5: [: !=: unary operator expected
example-db-backup    | ./run: line 40: [: =: unary operator expected
example-db-backup    | ** [db-backup] Initialized at at Mon Nov 5 07:49:05 PST 2018
example-db-backup    | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
example-db-backup    | [s6-init] ensuring user provided files have correct perms...exited 0.
example-db-backup    | [fix-attrs.d] applying ownership & permissions fixes...
example-db-backup    | [fix-attrs.d] 01-s6: applying... 
example-db-backup    | [fix-attrs.d] 01-s6: exited 0.
example-db-backup    | [fix-attrs.d] 02-zabbix: applying... 
example-db-backup    | [fix-attrs.d] 02-zabbix: exited 0.
example-db-backup    | [fix-attrs.d] 03-logrotate: applying... 
example-db-backup    | [fix-attrs.d] 03-logrotate: exited 0.
example-db-backup    | [fix-attrs.d] done.
example-db-backup    | [cont-init.d] executing container initialization scripts...
example-db-backup    | [cont-init.d] 01-permissions: executing... 
example-db-backup    | [cont-init.d] 01-permissions: exited 0.
example-db-backup    | [cont-init.d] 02-zabbix: executing... 
example-db-backup    | [cont-init.d] 02-zabbix: exited 0.
example-db-backup    | [cont-init.d] 03-cron: executing... 
example-db-backup    | **** [cron] Disabling Cron
example-db-backup    | [cont-init.d] 03-cron: exited 0.
example-db-backup    | [cont-init.d] 04-smtp: executing... 
example-db-backup    | **** [smtp] Disabling SMTP Features
example-db-backup    | [cont-init.d] 04-smtp: exited 0.
example-db-backup    | [cont-init.d] 99-container-init: executing... 
example-db-backup    | [cont-init.d] 99-container-init: exited 0.
example-db-backup    | [cont-init.d] done.
example-db-backup    | [services.d] starting services
example-db-backup    | [services.d] done.
example-db-backup    | 
example-db-backup    | ** [zabbix] Starting Zabbix Agent# 
[...]
example-db           | 2018-11-05 15:49:11 0 [Note] Reading of all Master_info entries succeded
example-db           | 2018-11-05 15:49:11 0 [Note] Added new Master_info '' to hash table
example-db           | 2018-11-05 15:49:11 0 [Note] mysqld: ready for connections.
example-db           | Version: '10.3.10-MariaDB-1:10.3.10+maria~bionic'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution# 

In a new shell, when I execute docker stop example-db-backup, only the following 3 lines get added to the docker-compose output:

example-db-backup    | [cont-finish.d] executing container finish scripts...
example-db-backup    | [cont-finish.d] done.
example-db-backup    | [s6-finish] syncing disks.

But the docker stop command never returns.
Trying to exec a command in the container gives an error:

docker exec -ti example-db-backup hostname
# OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "process_linux.go:86: executing setns process caused \"exit status 21\"": unknown

And docker kill does not work either.

I am using:

  • Docker version 18.06.1-ce, build e68fc7a
  • docker-compose version 1.22.0, build f46880fe

Redis Backup Fails during GZIP compression

Scheduled and Manual backups of an unprotected Redis server fail.

/ # backup-now
** Performing Manual Backup
sending REPLCONF capa eof
SYNC sent to master, writing 2942 bytes to '/tmp/backups/redis__redis_20210321-152913.rdb'
Transfer finished with success.
Transfer finished with success.
[INFO] ** [db-backup] Dumping Redis - Flushing Redis Cache First
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[INFO] ** [db-backup] Redis Backup Complete
[INFO] ** [db-backup] Redis Busy - Waiting and retrying in 5 seconds
[NOTICE] ** [db-backup] Compressing backup with gzip
stat: can't stat '/tmp/backups/redis__redis_20210321-152913.rdb.gz': No such file or directory
[NOTICE] ** [db-backup] Backup of redis__redis_20210321-152913.rdb.gz created with the size of  bytes
mv: can't rename '/tmp/backups/*.md5': No such file or directory
mv: can't rename '/tmp/backups/redis__redis_20210321-152913.rdb.gz': No such file or directory
[NOTICE] ** [db-backup] Sending Backup Statistics to Zabbix
stat: can't stat '/backup/redis__redis_20210321-152913.rdb.gz': No such file or directory
date: can't stat '/backup/redis__redis_20210321-152913.rdb.gz': No such file or directory
[NOTICE] ** [db-backup] Cleaning up old backups
rm: '/backup' is a directory

The container is running the tiredofit/db-backup:latest tag. I'm running it on Unraid.
I can see that it is connecting to Redis, and it is generating the backups themselves. If I go into the container I can see dozens of backups in the /tmp/backups folder. So I think it's failing when doing the GZIP.

I have GZIP compression enabled, along with multicore processing, but I use both of those settings with other containers too, because I also back up Postgres and MariaDB using your container.

Any ideas? Or settings that I should try?

Backup compression

The backup and compression process does not seem optimal.

We have the following problem:

  • The original MongoDB database size is 9+ GB
  • 50+ GB of .bson files are created
  • A 50+ GB .tar is created
  • A gzip / bzip2 / xz file of about 15 GB is generated
  • Once completed, the process deletes the temporary files

In our case, more than 115 GB of files (50 + 50 + 15 GB) are created temporarily.

Is there a way to optimize this? For example (see the sketch below):

  • Skip the intermediate step of generating the .tar file (directly generate a .tar.gz)
  • Generate the compressed backup directly, e.g. mongodump --gzip or mongodump | gzip
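For example, both variants stream the dump without any intermediate .bson/.tar staging (a sketch, assuming a reasonably recent mongodump; credentials omitted):

    # single compressed archive written directly, no temporary files
    mongodump --host "$DB_HOST" --archive --gzip > "backup_$(date +%Y%m%d-%H%M%S).archive.gz"

    # equivalent with an external compressor (lets you swap in pigz, xz, ...)
    mongodump --host "$DB_HOST" --archive | gzip > "backup_$(date +%Y%m%d-%H%M%S).archive.gz"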

DB_DUMP_BEGIN not working for Absolute or Relative time.

DB_DUMP_BEGIN is not working for absolute or relative times. However, it works when set to DB_DUMP_BEGIN: +0. Is there any way to troubleshoot this?

(For reference: DB_DUMP_BEGIN takes a relative +MM, i.e. how many minutes after starting the container, or an absolute HHMM, e.g. 2330 or 0415.)

Restore option, for logging into the mysql container and restoring from the backup directory:

gunzip < db-backup.sql.gz | mysql -p -u cicgate_main

version: '3.1'

services:

  my-mysql-backup:
    image: tiredofit/db-backup
    hostname: my-mysql-backup
    volumes:
      - ./mysql/backups:/usr/src/backups
    environment:
      DB_TYPE: mysql
      DB_HOST: my-mysql
      DB_NAME: cicgate_ctmp
      DB_USER: cicgate_main
      DB_PASS: xxxxx
      DB_DUMP_FREQ: 1440
      DB_DUMP_BEGIN: 1615
      DB_CLEANUP_TIME: 1440
      DB_DUMP_TARGET: /usr/src/backups
      DB_DUMP_DEBUG: "true"
      MD5: "false"
    depends_on:
      - my-mysql
    deploy:
      replicas: 1

  my-mysql:
    image: mysql:5.7.22
    hostname: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=myxxx
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=myrootxxx
    volumes:
      - my_mysql_data:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/conf.d/mysite.cnf
      - ./mysql/data:/docker-entrypoint-initdb.d
      - ./mysql/backups:/usr/src/backups
    secrets:
      - mysql_config
    deploy:
      replicas: 1

DB backup isn't performed on Ubuntu 18.04

Hello,
thank you for this tool, but unfortunately, it looks like I have an issue with it.

I added this backup tool as an additional service to my docker-compose file like this:

...

  dbbackup:
    image: tiredofit/db-backup:latest
    container_name: db-backupper
    environment:
      DB_TYPE: pgsql
      DB_HOST: storage-db
      DB_NAME: storage_db
      DB_USER: postgres
      DB_PASS: ${POSTGRES_PASSWORD}
      # Test db backup each 3 minutes
      DB_DUMP_FREQ: 3
      # Cleanup backups older than 3 days
      DB_CLEANUP_TIME: 4320
    depends_on:
      - db
    networks:
      - my-network
    volumes:
      - ${DB_BACKUP_DIR}:/backup

I just added the line with more frequent DB backups for debugging.

On my local computer with Ubuntu 16.04 it works just fine, but after I pushed my services to a remote server running Ubuntu 18.04, I see no backups at all.

Checking the logs with docker logs CONTAINER-NAME gives the output below. As far as I can see, the only suspicious line is s6-svc: fatal: unable to control /var/run/s6/services/-d: No such file or directory, but my knowledge of that system stuff is very limited.

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 00-functions: applying... 
[fix-attrs.d] 00-functions: exited 0.
[fix-attrs.d] 01-s6: applying... 
[fix-attrs.d] 01-s6: exited 0.
[fix-attrs.d] 02-zabbix: applying... 
[fix-attrs.d] 02-zabbix: exited 0.
[fix-attrs.d] 03-logrotate: applying... 
[fix-attrs.d] 03-logrotate: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-startup: executing... 
[cont-init.d] 00-startup: exited 0.
[cont-init.d] 01-timezone: executing... 
[NOTICE] ** [timezone] Setting timezone to 'America/Vancouver'
[cont-init.d] 01-timezone: exited 0.
[cont-init.d] 02-permissions: executing... 
[cont-init.d] 02-permissions: exited 0.
[cont-init.d] 03-zabbix: executing... 
[NOTICE] ** [zabbix] Disabling Zabbix Monitoring Functionality
s6-svc: fatal: unable to control /var/run/s6/services/-d: No such file or directory
[cont-init.d] 03-zabbix: exited 0.
[cont-init.d] 04-cron: executing... 
[NOTICE] ** [cron] Disabling Cron
[cont-init.d] 04-cron: exited 0.
[cont-init.d] 05-smtp: executing... 
[NOTICE] ** [smtp] Disabling SMTP Features
[cont-init.d] 05-smtp: exited 0.
[cont-init.d] 99-container: executing... 
[cont-init.d] 99-container: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

Maybe you have some ideas about what went wrong? Thank you for your time.

Allow specifying extra options for the various backup commands

So I need to add the option --default-character-set=utf8mb4 to the mysqldump command but others will not want this option. It would be awesome if I could specify custom options for the backup commands.

The reason I need this is described in the Nextcloud docs (backups won't work for utf8mb4 unless it is specified on the dump command):
https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/mysql_4byte_support.html#mariadb-support
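For what it's worth, the script excerpts quoted in other issues here already interpolate an ${EXTRA_OPTS} variable into the dump commands; if that were wired up to an environment variable, usage could look like this (a sketch of the requested feature, not a documented option):

    docker run \
        -e DB_TYPE=mysql -e DB_HOST=mariadb -e DB_USER=backup -e DB_PASS=secret \
        -e EXTRA_OPTS="--default-character-set=utf8mb4" \
        tiredofit/db-backup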

Thanks!

If the value SPLIT_DB is set to TRUE post-script.sh only shows information about the last backed DB

I have different DBs in my MariaDB and I want to back them up in separate files, not all inside the same file.

So, the script /assets/custom-scripts/post-script.sh is executed after the last DB backup and shows statistics only for the last backed-up DB, not the previous ones. Can I have this script run with information about all backed-up DBs?

And by the way, could you give an example of sending this script's output through the SMTP server included in the container?

Thanks in advance.

Doesn't work with influx

I got this container working with mysql but it fails with my influx databases. I think it has to do with this line:

influxd backup -database $DB -host {DBHOST} ${TMPDIR}/${TARGET}

I think {DBHOST} needs to be prefixed with $, like ${DBHOST}. It would also be nice if the port argument were used here (if the database isn't running on the default port it won't work).
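With both fixes applied, the line would look something like this (the host:port form is how influxd backup takes a non-default port):

    influxd backup -database $DB -host ${DBHOST}:${DBPORT} ${TMPDIR}/${TARGET}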

S3 SigV2 deprecated

When using this image with Backblaze B2 I get the following error. After some research, it appears that SigV2 is deprecated: no new S3 buckets created since mid-2020 support it, and B2 doesn't support it at all.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>InvalidRequest</Code>
    <Message>The V2 signature authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256</Message>
</Error>

https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html#UsingAWSSDK-sig2-deprecation

If I find the time I may look into the changes needed, but in the meantime I wanted to bring it to your attention.

Thanks.

Can't get this to work

Hi, I have the following docker-compose:

  mariadb_backup:
    image: tiredofit/db-backup
    restart: unless-stopped
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./mariadb/backups:/backup
    links:
      - mariadb
    environment:
      - DB_TYPE=mariadb
      - DB_HOST=mariadb
      - DB_NAME=homeassistant
      - DB_USER=backup
      - DB_PASS="secret"
      - DB_DUMP_FREQ=1440
      - DB_DUMP_BEGIN=1700
      - DB_CLEANUP_TIME=8640
      - MD5=TRUE
      - COMPRESSION=XZ
      - SPLIT_DB=TRUE

But I can't get it to work; when it runs I get the following error:

** [db-backup] Dumping database: homeassistant
./run: line 206: =xz: command not found
mv: can't rename '/tmp/backups/mysql_homeassistant_mariadb_20181005-171600.sql.xz': No such file or directory

It creates an md5 file but no actual backup file!

Zabbix

What's the whole Zabbix stuff for? When running the container, there's a lot of communication to a Zabbix proxy (blocked by my Pi-hole).

How to backup all couchdb databases?

Hi, thanks for creating this very helpful tool.
I'm using an application that creates and uses CouchDB databases. Unfortunately I don't know the names of these databases, as they are created dynamically.
A backup of all databases would be optimal for me. How can I do that? Just omit the DB_NAME environment variable?
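Independent of this image, CouchDB exposes the list of databases over its REST API, so enumerating them dynamically is possible in principle (a sketch against a default setup on port 5984):

    # list all databases on the CouchDB server
    curl -s "http://${DB_USER}:${DB_PASS}@${DB_HOST}:5984/_all_dbs"
    # returns e.g. ["_replicator","_users","mydb1","mydb2"]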

s6-maximumtime: warning: child process crashed

It seems that if the database is bigger and the backup consumes more time, it crashes due to the s6 maximumtime limit.

When I manually run backup-now everything is fine.
But the automatic cron runs won't work. In my logs I find something like:

[INFO] ** [db-backup] Backup routines Initialized on Thu Aug 27 03:09:30 CEST 2020
[NOTICE] ** [db-backup] Compressing backup with gzip
[cont-finish.d] executing container finish scripts...
[cont-finish.d] 10-db-backup: executing...
s6-maximumtime: warning: child process crashed
[cont-finish.d] 10-db-backup: exited 111.
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

I was reading something about this on https://github.com/just-containers/s6-overlay:
S6_KILL_FINISH_MAXTIME (default = 5000): The maximum time (in milliseconds) a script in /etc/cont-finish.d could take before sending a KILL signal to it. Take into account that this parameter will be used per each script execution, it's not a max time for the whole set of scripts.

Not sure if it's related. But anyway, it shouldn't crash, right?
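If that limit is indeed what's being hit, a possible workaround could be raising it when starting the container (an untested assumption on my part):

    # allow finish scripts up to 30 minutes; the value is in milliseconds
    docker run -e S6_KILL_FINISH_MAXTIME=1800000 tiredofit/db-backup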

Using awscli for upload backup files to S3

When using S3, I get the following error:
"The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

This is probably due to the deprecated API (SigV2), so it may be worth switching to the awscli.
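A minimal sketch of what the upload could look like with the awscli, which signs with SigV4 by default (the bucket/path variable names are placeholders, not the image's actual options):

    # upload the finished dump; credentials come from the usual AWS_* env vars
    aws s3 cp "${tmpdir}/${target}" "s3://${S3_BUCKET}/${S3_PATH}/${target}"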

--Feature Request-- support docker secrets

I use the mariadb root user for backups (as I always want all my databases backed up). It would be really nice if you could provide the password inside a Docker secret to add some security =). I know you have a lot of amazing containers on GitHub =). So if you need some help, I'll do what I can to support you =). Thanks for all these amazing contributions to the community =). Your container is the ONLY solution for Docker Swarm in combination with RancherOS to back up multiple databases! Really cool!

Syntax error in 2.7.0

Summary

I just updated the docker image tiredofit/db-backup from 2.4.0 to 2.7.0. Unfortunately this one doesn't work anymore. There seems to be a syntax error in the run script.

Steps to reproduce

Just execute a previously working config. For me this happens for a mariadb and postgres configuration.

What is the expected correct behavior?

The script should work.

Relevant logs and/or screenshots

./run: line 38: syntax error near unexpected token `)'
./run: line 38: `    "mysql" | "MYSQL" | "mariadb" | "MARIADB")'

Environment

  • Image version / tag: tiredofit/db-backup:2.7.0
  • Host OS: Synology

Possible fixes

In commit eb0ee61, two ;; were removed from the run script. Possibly this causes the syntax error. The previous version 2.6.1 works fine.

MySQL backup may not work with non-root user

Summary

MySQL backup may not work with non-root users in both manual and scheduled modes. Error message:

$ sudo docker exec -i backup backup-now
** Performing Manual Backup
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (0 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (5 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (10 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (15 seconds so far)
...

Steps to reproduce

Sample docker-compose file

version: '3.7'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
      - MYSQL_DATABASE=test
    container_name: MySQL

  backup:
    image: tiredofit/db-backup:2.6.1
    environment:
      - DB_TYPE=mysql
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_USER=test
      - DB_PASS=test
      - DB_NAME=test
      - DB_DUMP_FREQ=1440
      - DB_DUMP_BEGIN=0000
      - DB_CLEANUP_TIME=43200
      - SPLIT_DB=FALSE
    container_name: backup

What is the expected correct behavior?

Backup should work.

Relevant logs and/or screenshots

$ sudo docker exec -i backup backup-now
** Performing Manual Backup
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (0 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (5 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (10 seconds so far)
[WARN] ** [db-backup] MySQL/MariaDB Server 'mysql' is not accessible, retrying.. (15 seconds so far)
bash-5.1# mysql -u test -ptest -h mysql -e "SELECT COUNT(*) FROM information_schema.FILES;"
ERROR 1227 (42000) at line 1: Access denied; you need (at least one of) the PROCESS privilege(s) for this operation

Environment

  • Image version / tag: 2.6.1
  • Host OS: Ubuntu

Possible fixes

Use mysqlshow to test the database connection.
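For example, something along these lines (a sketch; mysqlshow only needs access to the named database, unlike the information_schema.FILES query, which requires the PROCESS privilege):

    # exits 0 when the database is reachable with the given credentials
    mysqlshow -h "$DB_HOST" -P "$DB_PORT" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" > /dev/null 2>&1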

Timezone-setting missing

Hi =) I've discovered that the container is not able to set a timezone, so you can't schedule backups properly... a TZ variable would be really nice (as this would also work in swarm) =)

DB-CLEANUP doesn't work when using root

This code-section is wrong:

### Automatic Cleanup
    if [[ -n "$DB_CLEANUP_TIME" ]]; then
          find $DB_DUMP_TARGET/  -mmin +$DB_CLEANUP_TIME -iname "$DBTYPE_$DBNAME_*.*" -exec rm {} \;
    fi

If you use root, no DB_NAME is specified, which results in no cleanup... just remove the name section and delete everything in the backup folder older than the given time, like this:

### Automatic Cleanup
    if [[ -n "$DB_CLEANUP_TIME" ]]; then
          find $DB_DUMP_TARGET/  -mmin +$DB_CLEANUP_TIME -exec rm {} \;
    fi

It's the easiest solution that comes to my mind...hope it works...
