
mysql-backup's People

Contributors

19h, andyshinn, anishmourya, cpanato, daemon604, deitch, dependabot[bot], fluffyescargot, johannrichard, juanluisbaptiste, kabudu, l9c, lukebarton, nacosdev, pcrt, philipkobernik, ramblurr, scicco, silane, stefanheinrichsen, th0th, toshy


mysql-backup's Issues

[CRON] Date problem in the periodic CRON update

Hello,
We have a problem when we use your Docker image. Our docker-compose file is the following:

version: '2'
services:
    mysql-backup:
      user: "0"
      image: deitch/mysql-backup
      environment:
       - DB_DUMP_TARGET=/backup
       - DB_USER=root
       - DB_DUMP_CRON=0 0 * * *
       - DB_SERVER=test-mysql
      volumes:
        - ./backup:/backup
      restart: always

The database 'test-mysql' is running in a Docker container and works great.

When we run sudo docker-compose -f docker-compose-file.yml up, we get this error:

WARNING: Found orphan containers (docker_test-mysql_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Recreating docker_mysql-backup_1 ... 
Recreating docker_mysql-backup_1 ... done
Attaching to docker_mysql-backup_1
mysql-backup_1  | DB_PORT not provided, defaulting to 3306
mysql-backup_1  | Starting at Mon Feb 4 14:57:13 UTC 2019
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | date: invalid date ''
mysql-backup_1  | /functions.sh: line 244: % 7 : syntax error: operand expected (error token is "% 7 ")
mysql-backup_1  | BusyBox v1.29.3 (2019-01-24 07:45:07 UTC) multi-call binary.
mysql-backup_1  | 
mysql-backup_1  | Usage: sleep [N]...
mysql-backup_1  | 
mysql-backup_1  | Pause for a time equal to the total of the args given, where each arg can
mysql-backup_1  | have an optional suffix of (s)econds, (m)inutes, (h)ours, or (d)ays

We have looked at the wait_for_cron() function and it worked well in our shell terminal... We expected a backup every day (thanks to the 0 0 * * * cron parameter), but the image is failing and runs in a loop, constantly dumping the database.
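For what it's worth, both error messages can be reproduced by an empty string reaching the date handling, which suggests the cron fields are not being parsed (a minimal illustration, not the actual functions.sh code):

    dow=$(echo "$DB_DUMP_CRON" | awk '{print $5}')   # suppose this comes back empty
    date -d "$dow"                                   # -> date: invalid date ''
    echo $(( $dow % 7 ))                             # -> % 7 : syntax error: operand expected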
Are we doing something wrong?
Thank you.

Support for max_allowed_packet

Hello,

We have some tables that have large amounts of data. Our DB is configured like this:

--max_allowed_packet=268435456

When we try to run this image for backups we get the following error:

mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

It would be helpful if we could set max_allowed_packet as an environment variable.
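For example (the variable name is hypothetical, just to illustrate the request):

    # hypothetical env var mapping straight onto the existing mysqldump flag
    docker run -e MYSQLDUMP_MAX_ALLOWED_PACKET=268435456 ... deitch/mysql-backup
    # which would translate into
    mysqldump --max_allowed_packet=268435456 ...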

Thanks

suggestion: give main tar-file a different name

hi again,

similar to #76, it's important to have individual filenames. Let's say I have 5 hosts and I use your script on each of them.
I copy all of their backups into one single directory on a NAS. This folder will look like:

-rw-r--r-- 1 1005 1005 103M Jan  6 21:14 db_backup_20190106201342.gz
-rw-r--r-- 1 1005 1005 103M Jan  7 05:16 db_backup_20190107041502.gz
-rw-r--r-- 1 1005 1005 103M Jan  8 05:16 db_backup_20190108041502.gz
-rw-r--r-- 1 1005 1005 104M Jan  9 05:16 db_backup_20190109041501.gz
-rw-r--r-- 1 1005 1005 105M Jan 10 05:18 db_backup_20190110041500.gz
-rw-r--r-- 1 1005 1005  74M Jan 11 05:18 db_backup_20190111041500.bz2
-rw-r--r-- 1 1005 1005  74M Jan 11 18:28 db_backup_20190111172434.tbz2

Now say I need a backup for host "host5": how am I supposed to find the backups of any particular host? 😏 No chance.
If you could add a new env var like "BACKUPNAME", I could use it and the folder would look like:

-rw-r--r-- 1 1005 1005 103M Jan  6 21:14 db_backup_host1_20190106201342.gz
-rw-r--r-- 1 1005 1005 103M Jan  7 05:16 db_backup_host1_20190107041502.gz
-rw-r--r-- 1 1005 1005 103M Jan  8 05:16 db_backup_host1_20190108041502.gz
-rw-r--r-- 1 1005 1005 104M Jan  9 05:16 db_backup_host2_20190109041501.gz
-rw-r--r-- 1 1005 1005 105M Jan 10 05:18 db_backup_host3_20190110041500.gz
-rw-r--r-- 1 1005 1005  74M Jan 11 05:18 db_backup_host4_20190111041500.bz2
-rw-r--r-- 1 1005 1005  74M Jan 11 18:28 db_backup_host5_20190111172434.tbz2

That would be much more comfortable to read and would avoid filename collisions.
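A sketch of what I mean (BACKUPNAME is the hypothetical new variable):

    now=$(date -u +"%Y%m%d%H%M%S")
    SOURCE="db_backup_${BACKUPNAME:+${BACKUPNAME}_}${now}.gz"
    # -> db_backup_host1_20190106201342.gz when BACKUPNAME=host1,
    #    and the current db_backup_20190106201342.gz when it is unset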

THX

Override TARGET and DB_DUMP_TARGET at runtime

This is my use case:

  • Backup database

     // The documentation says this: db_backup_YYYYMMDDHHmm.sql.gz, but the final file doesn't have a .sql.gz suffix
     db_backup_201808051404.gz
    
  • Backup wordpress

  • Combine database and wordpress backup into a new backup filename

  • Upload new filename backup to S3 in a date namespaced folder structure

     original dump target: s3://bucket_name
     new dump target: s3://bucket_name/2018/08/<new_filename>
    

A workaround for ensuring that the new combined file will be uploaded to S3 is to give it the same name as the original TARGET and place it in the same path (i.e. TMPDIR); however, I would much rather give the resulting file a more descriptive name.
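For illustration, the kind of post-backup hook I would like to be able to write (a sketch; it assumes the hook can see TMPDIR and SOURCE and that awscli is available, which may not hold):

    #!/bin/sh
    # rename the combined archive and upload it under a date-namespaced prefix
    new_name="site_backup_$(date -u +%Y%m%d%H%M).tar.gz"
    aws s3 cp "${TMPDIR}/${SOURCE}" "s3://bucket_name/$(date -u +%Y)/$(date -u +%m)/${new_name}"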

I have not yet found a workaround for my other requirement.

Is there a way of achieving what I want currently?

Permissions error when moving gzipped file to dump local path

I'm seeing

mv: can't create '/db-backup/db_backup_20181102020657.gz': Permission denied

when running the deitch/mysql-backup image. Everything else seems to be working as expected, judging from the DB_DUMP_DEBUG=true output.

I tried cloning the repo and removing the following lines from the Dockerfile. This fixed the issue.

# set us up to run as non-root user
RUN groupadd -g 1005 appuser && \
    useradd -r -u 1005 -g appuser appuser
USER appuser

I'm starting mysql-backup with docker-compose and mounting a volume to persist this data on the host. Is there something I'm missing in my setup that's needed to give the mysql-backup user adequate privileges?
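For what it's worth, an alternative to rebuilding the image is to give the image's non-root user (UID/GID 1005, per the Dockerfile lines above) write access to the host directory mounted at /db-backup, e.g.:

    # on the host, before starting the container (adjust to your mounted path)
    sudo chown -R 1005:1005 ./db-backup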

Reading environment variables from file

When using Rancher secrets, the data to pass into environment variables is mounted as a file under /run/secrets. Some images on Docker Hub, like mysql, support this kind of variable passing: you set an environment variable such as MYSQL_USERNAME_FILE=/run/secrets/MYSQL_USERNAME and on startup the entrypoint reads the value from that file. Is it possible to get the same behaviour with this image?

Relevant part from rancher docs: http://rancher.com/docs/rancher/latest/en/cattle/secrets/#docker-hub-images

And this is an example entrypoint that reads environment variables from files (see the file_env function):
https://github.com/docker-library/mysql/blob/0590e4efd2b31ec794383f084d419dea9bc752c4/5.7/docker-entrypoint.sh#L25
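For reference, the linked helper boils down to this (abridged from the docker-library/mysql entrypoint):

    file_env() {
        local var="$1"
        local fileVar="${var}_FILE"
        local def="${2:-}"
        if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
            echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
            exit 1
        fi
        local val="$def"
        if [ "${!var:-}" ]; then
            val="${!var}"
        elif [ "${!fileVar:-}" ]; then
            val="$(< "${!fileVar}")"    # read the value from the secrets file
        fi
        export "$var"="$val"
        unset "$fileVar"
    }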

question about MYSQLDUMP_

hi avi,

I tried your latest changes and also wanted to test the MYSQLDUMP_ vars, like this:

-e MYSQLDUMP_opt -e MYSQLDUMP_skip-lock-tables -e MYSQLDUMP_R -e MYSQLDUMP_triggers

I don't know if the output of "ps ax" just doesn't show them or if I'm doing something wrong here, because I don't see these options used in the mysqldump call:

root@195:~# ps ax|grep mysqldump
19346 ?        S      0:19 mysqldump -h mysql -P 3306 -uroot --databases xxxxxxxxx

Am I doing something wrong? 🤔
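For reference, my mental model of how these MYSQLDUMP_-prefixed vars would have to be expanded (an assumption, not the actual entrypoint code; it is also naive about values containing spaces):

    # turn every MYSQLDUMP_* env var into an extra mysqldump argument
    DUMPVARS=""
    for entry in $(env | grep '^MYSQLDUMP_'); do
        name="${entry%%=*}"
        value="${entry#*=}"
        DUMPVARS="$DUMPVARS --${name#MYSQLDUMP_}=${value}"
    done
    mysqldump -h mysql -P 3306 -uroot $DUMPVARS --databases xxxxxxxxx
    # note: "docker run -e NAME" without =value only propagates the variable
    # if it is actually set in the calling shell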

Add option to use utc time

I mounted /etc/timezone and /etc/localtime into the container because I want the backup filenames to use the host's time zone, but the now variable in functions.sh always generates the date in UTC:

   now=$(date -u +"%Y%m%d%H%M%S")

Could you provide a switch to choose whether or not UTC is used? 😃
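Something like this would do it (DB_DUMP_LOCALTIME is a hypothetical name, sketch only):

    if [ "$DB_DUMP_LOCALTIME" = "true" ]; then
        now=$(date +"%Y%m%d%H%M%S")      # container-local time
    else
        now=$(date -u +"%Y%m%d%H%M%S")   # current behaviour: UTC
    fi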

mv: can't preserve ownership: Permission denied

Hi,

thank you very much for your backup container. I'm running it inside a Kubernetes environment. I've mounted an NFS share at /mnt/mysqlbackup. Inside it I created a backup folder and changed the ownership of that folder to 1005:1005, as described in the README.

The backup seems to be created, but I still get the following error:

mv: can't preserve ownership of '/mnt/mysqlbackup/backup/db_backup_20181223125827.gz': Permission denied

Crontab with day of week breaks

When running the backup with the docker image and a cron configuration that sets the day of week, my container crashes with the following error:

/functions.sh: line 304: [: : integer expression expected
/functions.sh: line 307: 8 +  : syntax error: operand expected (error token is "+  ")

The docker service configuration I am using looks like this:

version: "3.1"

services:
  backup:
    image: databack/mysql-backup
    volumes:
      - ./backups:/db
    env_file:
      - mysql-credentials.env
    environment:
      - DB_DUMP_TARGET=/db
      - DB_DUMP_CRON=0 0 * * 1
      - DB_SERVER=container.network
      - DB_DUMP_BY_SCHEMA=true
    networks:
      - network

I have tried out different cron configurations, and the only ones that break are those where I don't use * as the fifth field. Am I missing something?
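For what it's worth, both messages are consistent with the day-of-week field parsing to an empty string before the arithmetic (a guessed simplification, not the literal functions.sh code):

    dow=""                  # should have been "1", parsed from "0 0 * * 1"
    [ "$dow" -ge 0 ]        # -> [: : integer expression expected
    echo $(( 8 + $dow ))    # -> 8 +  : syntax error: operand expected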
Thank you for the support!

Custom mysqldump arguments appear to be ignored

First of all, thanks for a great tool!

It appears that the custom mysqldump arguments passed as environment variables are not getting used.

On the following line, I can see you are expanding them into the DUMPVARS variable: https://github.com/deitch/mysql-backup/blob/68a0a4adbe1e75f4a39ea07326187403ce4bea04/entrypoint#L51

but that variable isn't being used on the following line or anywhere in the entrypoint script: https://github.com/deitch/mysql-backup/blob/68a0a4adbe1e75f4a39ea07326187403ce4bea04/entrypoint#L184
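If that's the case, the fix would presumably be as small as appending the variable to the dump invocation, something like (a sketch; variable names as they appear in the image's debug output elsewhere on this page):

    mysqldump -h $DB_SERVER -P $DB_PORT $DBUSER $DBPASS $DB_LIST $DUMPVARS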

Am I missing something obvious, or is that really the case?

Selecting database to dump

First of all, thanks a lot for the image, it really is awesome.

Is it too much to ask to be able to select the database to back up? I want to back up only the data related to my app, not the entire mysql auth data and such.

It could be set via an optional DB_NAME environment variable; if not set, the current behaviour is used as the default. How does that sound?
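Usage would then look like (hypothetical, as proposed):

    docker run -e DB_SERVER=mysql -e DB_NAME=myappdb ... deitch/mysql-backup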

EDIT: In this case the user is root. So it has access to all databases.

restore rancher backup

Hi!
I use your tool to create rancher db backups.

Now I have a dumpfile, db_backup_20170215233157.gz.

I try to restore on a fresh Rancher server install by doing
docker run --link pedantic_torvalds:db -e DB_RESTORE_TARGET=/backup/db_backup_20170215233157.gz -v /mnt/stick:/backup deitch/mysql-backup
It says:
ERROR 2013 (HY000) at line 823: Lost connection to MySQL server during query
and the container starts and stops forever. The lost connection happens at a different line each time I retry.

Add support for upload via SSH/SCP/SFTP

Basically uploading via SSH / SCP / SFTP

Either by providing credentials (user, pass, server, port, target_folder) or with an SSH key: user + keyfile + server + port + target folder.
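For example, following the existing s3:// and smb:// target conventions, the syntax could look like this (hypothetical):

    DB_DUMP_TARGET=ssh://backupuser@backup.example.com:22/var/backups/mysql
    # plus e.g. SSH_KEY_FILE=/run/secrets/backup_key for key-based auth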

Thanks

MYSQLDUMP_OPTS doesn't work

I have a docker-compose.yml file like:

version: "3"
services:
  ...

  backup:
    image: deitch/mysql-backup
    network_mode: bridge
    restart: always
    volumes:
      - ${BACKUPS_DIR}:/backups
    environment:
      ...
      MYSQLDUMP_OPTS: --routines=true

I want the dump to contain stored procedures, but with MYSQLDUMP_OPTS set to --routines=true I get:

backup_1             | DB_PORT not provided, defaulting to 3306
backup_1             | Starting at Mon Apr 8 21:34:19 UTC 2019
backup_1             | mysqldump: unknown variable 'OPTS=--routines=true'

I've also tried -R and --routines, but nothing seems to work.

mysqldump with ubuntu docker image is very slow

Below is the Dockerfile I used:

# mysql backup image
FROM ubuntu:18.04
MAINTAINER Karthikeyan Sundararajan

# install the necessary client
RUN apt-get update && \
    apt-get install -y mysql-client bash python3 python3-pip samba-client && \
# note: this apk cache path is a leftover from the Alpine base image;
# the Ubuntu equivalent cleanup would be rm -rf /var/lib/apt/lists/*
    rm -rf /var/cache/apk/* && \
    touch /etc/samba/smb.conf && \
    pip3 install awscli

# install the entrypoint
COPY functions.sh /
COPY entrypoint /entrypoint
RUN chmod +x /entrypoint

# start
ENTRYPOINT ["/entrypoint"]

I had to remove the non-root user commands as docker run was throwing permission denied issues.

Why Ubuntu image

The Alpine image comes with the MariaDB client, which has issues dumping databases that contain generated columns.

Issue

While using this Ubuntu docker image in Kubernetes, the dump was very slow. The Alpine image took 25 minutes to dump a 25 GB database; the Ubuntu image took around 1.5 hours to dump the first 5-6 GB, and then the container got restarted.

Update awscli version

The awscli version in the container doesn't support newer regions like eu-west-2; I think you just need to trigger a rebuild of the container, since the awscli version isn't pinned. It might be a good idea to link it to the Alpine base repository so it rebuilds automatically and picks up security fixes and such.

If the output target file starts with /, smb rejects it

This fails:

-c 'put /tmp/backups/db_backup_20160216091349.gz /db_backup_20160216091349.gz'

The right syntax is:

-c 'put /tmp/backups/db_backup_20160216091349.gz db_backup_20160216091349.gz'

or

-c 'put /tmp/backups/db_backup_20160216091349.gz \\db_backup_20160216091349.gz'

mysqldump: Couldn't execute 'SHOW PACKAGE STATUS WHERE Db = 'test'

After upgrading from the deitch/mysql-backup to the databack/mysql-backup docker image, I started seeing these errors:

mysqldump: Couldn't execute 'SHOW PACKAGE STATUS WHERE Db = 'test'': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'PACKAGE STATUS WHERE Db = 'test'' at line 1 (1064)

My db image is mysql:5.7.10.

Log from debug mode:

Attaching to **********_backup_1
backup_1             | + file_env DB_SERVER
backup_1             | + local var=DB_SERVER
backup_1             | + local fileVar=DB_SERVER_FILE
backup_1             | + local def=
backup_1             | + '[' database ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' database ']'
backup_1             | + val=database
backup_1             | + export DB_SERVER=database
backup_1             | + DB_SERVER=database
backup_1             | + unset DB_SERVER_FILE
backup_1             | + file_env DB_PORT
backup_1             | + local var=DB_PORT
backup_1             | + local fileVar=DB_PORT_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export DB_PORT=
backup_1             | + DB_PORT=
backup_1             | + unset DB_PORT_FILE
backup_1             | + file_env DB_USER
backup_1             | + local var=DB_USER
backup_1             | + local fileVar=DB_USER_FILE
backup_1             | + local def=
backup_1             | + '[' test ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' test ']'
backup_1             | + val=test
backup_1             | + export DB_USER=test
backup_1             | + DB_USER=test
backup_1             | + unset DB_USER_FILE
backup_1             | + file_env DB_PASS
backup_1             | + local var=DB_PASS
backup_1             | + local fileVar=DB_PASS_FILE
backup_1             | + local def=
backup_1             | + '[' test ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' test ']'
backup_1             | + val=test
backup_1             | + export DB_PASS=test
backup_1             | + DB_PASS=test
backup_1             | + unset DB_PASS_FILE
backup_1             | + file_env DB_NAMES
backup_1             | + local var=DB_NAMES
backup_1             | + local fileVar=DB_NAMES_FILE
backup_1             | + local def=
backup_1             | + '[' test ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' test ']'
backup_1             | + val=test
backup_1             | + export DB_NAMES=test
backup_1             | + DB_NAMES=test
backup_1             | + unset DB_NAMES_FILE
backup_1             | + file_env DB_DUMP_FREQ 1440
backup_1             | + local var=DB_DUMP_FREQ
backup_1             | + local fileVar=DB_DUMP_FREQ_FILE
backup_1             | + local def=1440
backup_1             | + '[' 60 ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=1440
backup_1             | + '[' 60 ']'
backup_1             | + val=60
backup_1             | + export DB_DUMP_FREQ=60
backup_1             | + DB_DUMP_FREQ=60
backup_1             | + unset DB_DUMP_FREQ_FILE
backup_1             | + file_env DB_DUMP_BEGIN +0
backup_1             | + local var=DB_DUMP_BEGIN
backup_1             | + local fileVar=DB_DUMP_BEGIN_FILE
backup_1             | + local def=+0
backup_1             | + '[' '' ']'
backup_1             | + local val=+0
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export DB_DUMP_BEGIN=+0
backup_1             | + DB_DUMP_BEGIN=+0
backup_1             | + unset DB_DUMP_BEGIN_FILE
backup_1             | + file_env DB_DUMP_DEBUG
backup_1             | + local var=DB_DUMP_DEBUG
backup_1             | + local fileVar=DB_DUMP_DEBUG_FILE
backup_1             | + local def=
backup_1             | + '[' 1 ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' 1 ']'
backup_1             | + val=1
backup_1             | + export DB_DUMP_DEBUG=1
backup_1             | + DB_DUMP_DEBUG=1
backup_1             | + unset DB_DUMP_DEBUG_FILE
backup_1             | + file_env DB_DUMP_TARGET /backup
backup_1             | + local var=DB_DUMP_TARGET
backup_1             | + local fileVar=DB_DUMP_TARGET_FILE
backup_1             | + local def=/backup
backup_1             | + '[' /backups ']'
backup_1             | + '[' '' ']'
backup_1             | + local val=/backup
backup_1             | + '[' /backups ']'
backup_1             | + val=/backups
backup_1             | + export DB_DUMP_TARGET=/backups
backup_1             | + DB_DUMP_TARGET=/backups
backup_1             | + unset DB_DUMP_TARGET_FILE
backup_1             | + file_env DB_DUMP_BY_SCHEMA
backup_1             | + local var=DB_DUMP_BY_SCHEMA
backup_1             | + local fileVar=DB_DUMP_BY_SCHEMA_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export DB_DUMP_BY_SCHEMA=
backup_1             | + DB_DUMP_BY_SCHEMA=
backup_1             | + unset DB_DUMP_BY_SCHEMA_FILE
backup_1             | + file_env DB_DUMP_KEEP_PERMISSIONS true
backup_1             | + local var=DB_DUMP_KEEP_PERMISSIONS
backup_1             | + local fileVar=DB_DUMP_KEEP_PERMISSIONS_FILE
backup_1             | + local def=true
backup_1             | + '[' '' ']'
backup_1             | + local val=true
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export DB_DUMP_KEEP_PERMISSIONS=true
backup_1             | + DB_DUMP_KEEP_PERMISSIONS=true
backup_1             | + unset DB_DUMP_KEEP_PERMISSIONS_FILE
backup_1             | + file_env DB_RESTORE_TARGET
backup_1             | + local var=DB_RESTORE_TARGET
backup_1             | + local fileVar=DB_RESTORE_TARGET_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export DB_RESTORE_TARGET=
backup_1             | + DB_RESTORE_TARGET=
backup_1             | + unset DB_RESTORE_TARGET_FILE
backup_1             | + file_env AWS_ENDPOINT_URL
backup_1             | + local var=AWS_ENDPOINT_URL
backup_1             | + local fileVar=AWS_ENDPOINT_URL_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export AWS_ENDPOINT_URL=
backup_1             | + AWS_ENDPOINT_URL=
backup_1             | + unset AWS_ENDPOINT_URL_FILE
backup_1             | + file_env AWS_ENDPOINT_OPT
backup_1             | + local var=AWS_ENDPOINT_OPT
backup_1             | + local fileVar=AWS_ENDPOINT_OPT_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export AWS_ENDPOINT_OPT=
backup_1             | + AWS_ENDPOINT_OPT=
backup_1             | + unset AWS_ENDPOINT_OPT_FILE
backup_1             | + file_env AWS_ACCESS_KEY_ID
backup_1             | + local var=AWS_ACCESS_KEY_ID
backup_1             | + local fileVar=AWS_ACCESS_KEY_ID_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export AWS_ACCESS_KEY_ID=
backup_1             | + AWS_ACCESS_KEY_ID=
backup_1             | + unset AWS_ACCESS_KEY_ID_FILE
backup_1             | + file_env AWS_SECRET_ACCESS_KEY
backup_1             | + local var=AWS_SECRET_ACCESS_KEY
backup_1             | + local fileVar=AWS_SECRET_ACCESS_KEY_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export AWS_SECRET_ACCESS_KEY=
backup_1             | + AWS_SECRET_ACCESS_KEY=
backup_1             | + unset AWS_SECRET_ACCESS_KEY_FILE
backup_1             | + file_env AWS_DEFAULT_REGION
backup_1             | + local var=AWS_DEFAULT_REGION
backup_1             | + local fileVar=AWS_DEFAULT_REGION_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export AWS_DEFAULT_REGION=
backup_1             | + AWS_DEFAULT_REGION=
backup_1             | + unset AWS_DEFAULT_REGION_FILE
backup_1             | + file_env SMB_USER
backup_1             | + local var=SMB_USER
backup_1             | + local fileVar=SMB_USER_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export SMB_USER=
backup_1             | + SMB_USER=
backup_1             | + unset SMB_USER_FILE
backup_1             | + file_env SMB_PASS
backup_1             | + local var=SMB_PASS
backup_1             | + local fileVar=SMB_PASS_FILE
backup_1             | + local def=
backup_1             | + '[' '' ']'
backup_1             | + local val=
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export SMB_PASS=
backup_1             | + SMB_PASS=
backup_1             | + unset SMB_PASS_FILE
backup_1             | + file_env COMPRESSION gzip
backup_1             | + local var=COMPRESSION
backup_1             | + local fileVar=COMPRESSION_FILE
backup_1             | + local def=gzip
backup_1             | + '[' '' ']'
backup_1             | + local val=gzip
backup_1             | + '[' '' ']'
backup_1             | + '[' '' ']'
backup_1             | + export COMPRESSION=gzip
backup_1             | + COMPRESSION=gzip
backup_1             | + unset COMPRESSION_FILE
backup_1             | + [[ -n 1 ]]
backup_1             | + set -x
backup_1             | + MYSQLDUMP_OPTS=-R
backup_1             | + '[' -n test ']'
backup_1             | + DBUSER=-utest
backup_1             | + '[' -n test ']'
backup_1             | + DBPASS=-ptest
backup_1             | + '[' -z database ']'
backup_1             | + '[' -z '' ']'
backup_1             | + echo 'DB_PORT not provided, defaulting to 3306'
backup_1             | DB_PORT not provided, defaulting to 3306
backup_1             | + DB_PORT=3306
backup_1             | + COMPRESS=
backup_1             | + UNCOMPRESS=
backup_1             | + case $COMPRESSION in
backup_1             | + COMPRESS=gzip
backup_1             | + UNCOMPRESS=gunzip
backup_1             | + EXTENSION=tgz
backup_1             | + TMPDIR=/tmp/backups
backup_1             | + TMPRESTORE=/tmp/restorefile
backup_1             | + declare -A uri
backup_1             | + [[ -n '' ]]
backup_1             | ++ date
backup_1             | + echo Starting at Mon Apr 15 14:21:49 UTC 2019
backup_1             | Starting at Mon Apr 15 14:21:49 UTC 2019
backup_1             | + last_run=0
backup_1             | ++ date +%s
backup_1             | + current_time=1555338109
backup_1             | + freq_time=3600
backup_1             | + '[' -n '' ']'
backup_1             | + [[ +0 =~ ^\+(.*)$ ]]
backup_1             | + waittime=0
backup_1             | + target_time=1555338109
backup_1             | + '[' -z '' ']'
backup_1             | + sleep 0
backup_1             | ++ date +%s
backup_1             | + last_run=1555338109
backup_1             | + true
backup_1             | + mkdir -p /tmp/backups
backup_1             | + do_dump
backup_1             | ++ date -u +%Y%m%d%H%M%S
backup_1             | + now=20190415142149
backup_1             | + SOURCE=db_backup_20190415142149.tgz
backup_1             | + TARGET=db_backup_20190415142149.tgz
backup_1             | + '[' -d /scripts.d/pre-backup/ ']'
backup_1             | + workdir=/tmp/backup.1
backup_1             | + rm -rf /tmp/backup.1
backup_1             | + mkdir -p /tmp/backup.1
backup_1             | + '[' -n '' -a '' = true ']'
backup_1             | + [[ -n test ]]
backup_1             | + DB_LIST='--databases test'
backup_1             | + mysqldump -h database -P 3306 -utest -ptest --databases test -R
backup_1             | mysqldump: Couldn't execute 'SHOW PACKAGE STATUS WHERE Db = 'test'': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'PACKAGE STATUS WHERE Db = 'test'' at line 1 (1064)
backup_1             | + tar -C /tmp/backup.1 -cvf - .
backup_1             | + gzip
backup_1             | ./
backup_1             | ./backup_20190415142149.sql
backup_1             | + rm -rf /tmp/backup.1
backup_1             | + '[' -d /scripts.d/post-backup/ ']'
backup_1             | + '[' -f /scripts.d/source.sh ']'
backup_1             | + '[' -f /scripts.d/target.sh ']'
backup_1             | + for target in ${DB_DUMP_TARGET}
backup_1             | + backup_target /backups
backup_1             | + local target=/backups
backup_1             | + uri_parser /backups
backup_1             | + uri=()
backup_1             | + full=/backups
backup_1             | + full=/backups
backup_1             | + full=/backups
backup_1             | + [[ / == \/ ]]
backup_1             | + full=file://localhost/backups
backup_1             | + [[ file://l == \f\i\l\e\:\/\/\/ ]]
backup_1             | + pattern='^(([a-z0-9]{2,5})://)?((([^:\/]+)(:([^@\/]*))?@)?([^:\/?]+)(:([0-9]+))?)(\/[^?]*)?(\?[^#]*)?(#.*)?$'
backup_1             | + [[ file://localhost/backups =~ ^(([a-z0-9]{2,5})://)?((([^:\/]+)(:([^@\/]*))?@)?([^:\/?]+)(:([0-9]+))?)(\/[^?]*)?(\?[^#]*)?(#.*)?$ ]]
backup_1             | + full=file://localhost/backups
backup_1             | + uri[uri]=file://localhost/backups
backup_1             | + uri[schema]=file
backup_1             | + uri[address]=localhost
backup_1             | + uri[user]=
backup_1             | + uri[password]=
backup_1             | + uri[host]=localhost
backup_1             | + uri[port]=
backup_1             | + uri[path]=/backups
backup_1             | + uri[query]=
backup_1             | + uri[fragment]=
backup_1             | + [[ file == \s\m\b ]]
backup_1             | + [[ -n '' ]]
backup_1             | + return 0
backup_1             | + case "${uri[schema]}" in
backup_1             | + mkdir -p /backups
backup_1             | + cpOpts=-a
backup_1             | + '[' -n true -a true = false ']'
backup_1             | + cp -a /tmp/backups/db_backup_20190415142149.tgz /backups/db_backup_20190415142149.tgz
backup_1             | + /bin/rm /tmp/backups/db_backup_20190415142149.tgz
backup_1             | ++ date +%s
backup_1             | + current_time=1555338140
backup_1             | + '[' -n '' ']'
backup_1             | + '[' -n '' ']'
backup_1             | ++ date +%s
backup_1             | + current_time=1555338140
backup_1             | + backup_time=31
backup_1             | + freq_time_count=0
backup_1             | + freq_time_count_to_add=1
backup_1             | + extra_time=3600
backup_1             | + target_time=1555341709
backup_1             | + waittime=3569
backup_1             | + sleep 3569

mysqldump with generated column doesn't work while importing

I am getting the error below when importing a mysqldump taken from this docker image.

ERROR 3105 (HY000) at line 82: The value specified for generated column 'column_name' in table 'table_name' is not allowed.

The source and destination MySQL servers have the same version.

MySQL server version:

mysqld Ver 5.7.26 for Linux on x86_64 (MySQL Community Server (GPL))

MySQL Dump client version:

mysqldump Ver 10.17 Distrib 10.3.12-MariaDB, for Linux (x86_64)

Exit status should be non-zero if error encountered

I'm trying to use the RUN_ONCE option in conjunction with Kubernetes CronJob scheduling.

I noticed that even though the backup is currently failing, Kubernetes reports that the job has succeeded.

This is due to the container returning a zero exit status even though errors have occurred. Testing locally with docker:

$ docker run -e RUN_ONCE=true -e DB_SERVER=brokenhost --rm databack/mysql-backup                                                                                                                                                                                          
DB_PORT not provided, defaulting to 3306
Starting at Wed Feb 27 17:28:35 UTC 2019
mysqldump: Got error: 2005: "Unknown MySQL server host 'brokenhost' (-2)" when trying to connect
./
./backup_20190227172835.sql
mkdir: can't create directory '/backup': Permission denied
cp: can't create '/backup/db_backup_20190227172835.tgz': No such file or directory

Checking exit status:

$ echo $?                                                                                                                                                                                                                                                                 
0
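The behaviour I'd expect, roughly (a sketch, an assumption about the entrypoint rather than its actual code):

    # capture the dump's exit code and, at least when RUN_ONCE is set,
    # surface it as the container's exit status
    mysqldump -h "$DB_SERVER" ... > "$TMPDIR/backup_${now}.sql"
    rc=$?
    if [ -n "$RUN_ONCE" ] && [ "$rc" -ne 0 ]; then
        exit "$rc"
    fi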

add a version in DockerFile

Hi,
in the Docker Hub repository, there is only the latest release.
Could you add a version tag to your releases, so we can pin a precise one instead of 'latest'?

thanks in advance,

Charles.

Container linking shouldn't be the recommended way

According to the Docker docs, container linking (with the --link flag) is considered legacy. It is even written that it may be removed in future versions.

I think we should make DB_SERVER a variable in the entrypoint and update the README with instructions to create the mysql and mysql-backup containers on the same Docker network. What do you say?
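Concretely, the README could recommend something like this instead of --link:

    # create a user-defined network and put both containers on it
    docker network create dbnet
    docker run -d --name mysql --network dbnet -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --network dbnet -e DB_SERVER=mysql -e DB_USER=root -e DB_PASS=secret \
        -e DB_DUMP_TARGET=/db -v /local/backups:/db deitch/mysql-backup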

Allow to specify SMB credentials via "--env-file="

Hi,

First of all thank you for your work, it looks terrific!

I took a quick look at both your docs and your code, and couldn't help noticing that although secrets are well protected for both the mysql and AWS credentials, they are nonetheless in plain sight when it comes down to SMB...

Could it be possible for you to fix this?

I might be able to tackle this later, but I will first try setting this up to back up our databases :-)
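Concretely, what I'd like to be able to do (SMB_USER/SMB_PASS are the variable names that appear in the image's debug output elsewhere on this page; the values are placeholders):

    # smb-credentials.env -- keeps the secrets off the command line
    SMB_USER=backupuser
    SMB_PASS=s3cret

    docker run --env-file=smb-credentials.env -e DB_SERVER=mysql ... deitch/mysql-backup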

Commands not properly quoted

There are a few places where commands are not properly quoted. I noticed this in particular with the smbclient when I tried to use a password with spaces:

https://github.com/deitch/mysql-backup/blob/7c8e12b18451f8db04fd6ecf74e1b5cb593b6a05/entrypoint#L104

If I get a chance I'll open a PR, but I couldn't find a quick way to fix this as the current approach is using UPASS to also pass in the -U flag to smbclient:

UPASS="-U ${uri[user]}%${uri[password]}"

I tried adding escaped quotes within the string, but that gave me completely unexpected (and non-working) results; there must be some bash-ism I am missing. For now, I am just patching my own image to remove the -U from UPASS so that I can quote it properly when smbclient is called:

smbclient ... -U "${UPASS}" ...
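Put together, that patch looks roughly like this (a sketch of my local fix, not the upstream code; "share" and "targetfile" stand in for the real variables):

    # keep user%password as data only, and quote it at the call site
    UPASS="${uri[user]}%${uri[password]}"
    smbclient "//${uri[host]}/${share}" -U "${UPASS}" -c "put ${TMPDIR}/${SOURCE} ${targetfile}"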

Question about variables passed

Hi @deitch ,
thank you for providing such a helpful project.
Before I fire it up to put a mysqldump into the data container, which is backed up via duplicity in another container (I like to go micro-service as much as possible), I need to clarify something for me:
-v /local/file/path:/db

Should this be a path used by the database container or a temporary cache path for mysql-backup?

As I understand it, this image needs a linked connection to a running database container and a location to store the resulting dump. My database containers all work with docker volumes.

Maybe it's just too early in the morning for me.

Cheers

Add support for Percona XtraBackup

Reviewing some options, I like the direction of this solution for automating mysql backups within our docker-compose setup. One concern is that mysqldump pauses the database, and for large databases this can be an issue.

Could you consider adding Percona's XtraBackup to the Docker image as the low-level backup command (or optionally selectable via an environment variable) to be executed instead of mysqldump? This would allow hot backups without pausing database transactions, not to mention open the door for other features and enhancements down the road.

Restoring backup from s3 doesn't work

Hello.

I've tried the restore option from AWS S3 with multiple image tags, including latest, and with both compression formats, and it does not work. Backups are working fine.

Log from container:

download: s3://xxxxxx/xxxxx/db_backup_20190123161438.tbz2 to tmp/restorefile
BusyBox v1.28.4 (2018-12-06 15:13:21 UTC) multi-call binary.

Usage: tar c|x|t [-ZzJjahmvO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...

Create, extract, or list files from a tar file

Operation:
	c	Create
	x	Extract
	t	List
	-f FILE	Name of TARFILE ('-' for stdin/out)
	-C DIR	Change to DIR before operation
	-v	Verbose
	-Z	(De)compress using compress
	-z	(De)compress using gzip
	-J	(De)compress using xz
	-j	(De)compress using bzip2
	-a	(De)compress using lzma
	-O	Extract to stdout
	-h	Follow symlinks
	-m	Don't restore mtime
	-T FILE	File with names to include
	-X FILE	File with glob patterns to exclude
	--exclude PATTERN	Glob pattern to exclude
cat: can't open '/tmp/restore.1/*': No such file or directory

Thanks for providing such a nice tool.

Create backup file per db schema

The container currently creates one big dump file of all schemas found on the db server.
It would be helpful to have an option to create one dump file per schema, where the schema name is part of the dump filename.

eg. db_backup_schemaname_20181118181008.gz

Cannot connect to mysql host (container)

Hello,

thanks for the helpful library!

I'm having an issue when running the restore command:

docker run -e DB_SERVER=db -e DB_PORT=3306 -e DB_USER=root -e DB_PASS=pass -e DB_RESTORE_TARGET=/backup/db_backup_20180511233943.gz -v ~/mysql_backups:/backup deitch/mysql-backup

'db' is the name of my mysql service defined in docker-compose.yml. When I use DB_SERVER=db, I get ERROR 2005 (HY000): Unknown MySQL server host 'db'

Then I added a port mapping so that 3306 is open on the container and on the host.

When I use DB_SERVER=0.0.0.0, I get ERROR 2003 (HY000): Can't connect to MySQL server on '0.0.0.0' (111 "Connection refused")

When I use DB_SERVER=localhost, I get ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (2 "No such file or directory")

How do you usually connect to the container?
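(For what it's worth: a bare docker run container is not attached to the compose project's network by default, so the service name 'db' will not resolve there; attaching the restore container to that network usually fixes it. The network name below is an assumption based on default compose naming.)

    docker run --network myproject_default -e DB_SERVER=db -e DB_PORT=3306 \
        -e DB_USER=root -e DB_PASS=pass \
        -e DB_RESTORE_TARGET=/backup/db_backup_20180511233943.gz \
        -v ~/mysql_backups:/backup deitch/mysql-backup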

DB_PASS should be optional

I am using the Rancher server image; it has its own MySQL server in the container and it is not password protected. Even if I don't set DB_PASS, I get this error:

mysqldump: Got error: 1045: "Access denied for user 'root'@'***' (using password: YES)" when trying to connect

Is there a way not to pass a password?
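The change being asked for is small; a sketch (mirroring the DBPASS pattern visible in the debug trace elsewhere on this page):

    # only pass -p<password> to the mysql tools when DB_PASS is actually set
    if [ -n "$DB_PASS" ]; then
        DBPASS="-p$DB_PASS"
    else
        DBPASS=""
    fi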

DB_DUMP_FREQ is used in full after the backup, causing the effective start time to slide into the future

When setting DB_DUMP_FREQ to 1440 you expect the backup to be taken daily at the same time, namely the time configured in DB_DUMP_BEGIN.

On line 113 of entrypoint, a correct calculation is made so that the first backup can start at the expected time.
But!
On line 204 of entrypoint, after the backup has finished (which can take e.g. 2 hours), DB_DUMP_FREQ is used in full, without subtracting the time already spent taking the backup.

It would be nice if the code starting on line 113 could be reused on line 204 to recalculate the waittime, so that the next backup starts at the same time as the previous one.


Code of entrypoint on line 113

 if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
   waittime=$(( ${BASH_REMATCH[1]} * 60 ))
 else
   target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")

   if [[ "$target_time" < "$current_time" ]]; then
     target_time=$(($target_time + 24*60*60))
   fi
 
   waittime=$(($target_time - $current_time))
 fi

# If RUN_ONCE is set, don't wait
  if [ -z "${RUN_ONCE}" ]; then
    sleep $waittime
  fi

Code of entrypoint on line 204

sleep $(($DB_DUMP_FREQ*60))
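A sketch of the recalculation I mean (assuming a last_run timestamp is recorded when the dump starts):

    # subtract the time the dump itself took, so the cadence doesn't drift
    current_time=$(date +"%s")
    backup_time=$(( current_time - last_run ))
    sleep $(( DB_DUMP_FREQ*60 - backup_time ))
    last_run=$(date +"%s")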

Thank you!

Just wanted to say thanks. For some reason I'd read about your dockerfile in the Rancher forums, but bypassed it. I've just gone back to it after some frustrations trying other backup solutions, and it's working great.

Thanks so much.

Andy

Back up to SMB server fails

I am getting the following error when I try to back up to an SMB server:
directory_create_or_exist: lstat failed on directory /var/lib/samba/private/msg.sock: Permission denied

Auto purge of older backups?

Is there the possibility of adding purge functionality, i.e. logic that checks the backup destination and deletes old backups following a set rule?
For example, keep only the X most recent backups, or delete all backups older than X months?
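For illustration, the sort of rule I mean, as it might run in a post-backup hook (paths and retention values are made up):

    # keep only the 30 most recent dumps...
    ls -1t /backup/db_backup_* | tail -n +31 | xargs -r rm -f
    # ...or delete anything older than 90 days
    find /backup -name 'db_backup_*' -mtime +90 -delete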

Cheers :)

strange generation of dumps

Hello. I have a problem with the SQL dumps generated for a Rancher server.
My backup container is running with this command:

docker run -d --restart=always -e DB_USER=cattle -e DB_PASS=cattle -e DB_DUMP_BEGIN=2330 -e DB_DUMP_TARGET=/db -e DB_SERVER=rancher-sanbox -v /mnt/backup-nfs/rancher_db/sandbox:/db --name rancher-backup-sandbox --network rancher-network deitch/mysql-backup

When I list the contents of the dump folder, there are some files that are only 20 bytes in size and have strange creation times:

-rw-r--r--. 1 root root 7965408 May 31 01:30 db_backup_20180530233000.gz
-rw-r--r--. 1 root root      20 Jun  1 09:03 db_backup_20180601070339.gz
-rw-r--r--. 1 root root      20 Jun  1 10:11 db_backup_20180601081147.gz
-rw-r--r--. 1 root root      20 Jun  1 10:16 db_backup_20180601081619.gz
-rw-r--r--. 1 root root 8073738 Jun  2 01:30 db_backup_20180601233000.gz
-rw-r--r--. 1 root root      20 Jun  2 10:16 db_backup_20180602081619.gz
-rw-r--r--. 1 root root 8016356 Jun  3 01:30 db_backup_20180602233001.gz
-rw-r--r--. 1 root root      20 Jun  3 10:16 db_backup_20180603081619.gz
-rw-r--r--. 1 root root 8261452 Jun  4 01:30 db_backup_20180603233000.gz
-rw-r--r--. 1 root root      20 Jun  4 10:16 db_backup_20180604081619.gz
-rw-r--r--. 1 root root 8050066 Jun  5 01:30 db_backup_20180604233000.gz
-rw-r--r--. 1 root root      20 Jun  5 10:16 db_backup_20180605081619.gz

The shift between the timestamp in the filename and the real creation time is because our server runs in the CEST time zone while the Docker containers use UTC. Could this be the source of my problems?

Thank you for your response.
Best regards
Michal Behúň

Post-restore hook scripts are never executed due to a premature exit on line 115.

Hi, I noticed that the exit 0 on line 115 of entrypoint prevents post-restore scripts from executing, because it exits right after restoring the MySQL db and removing the temporary restore file. Because of this, post-restore scripts only get executed if the temporary restore file does not exist, which only happens when the s3/smb/local copy fails. Perhaps removing line 115 would fix this issue?

Post-post-backup?

Hi,

I'm currently saving a backup directory on the local server, and have a script which retains backups according to a defined schedule. I.e., the backup is done hourly, and the script retains 48 hourly backups; every 24 hours it copies the latest backup to a daily archive folder, where it keeps 14 daily backups; every 7 days it copies the latest backup to a weekly archive folder, etc.

Is there any way to automatically trigger this script from the docker image, so that it fires only after mysql-backup copies the file to its final upload location defined by $DB_DUMP_TARGET?

Regards,
Andy

Permission denied issue

Hi.

I've been using your image on multiple projects and have noticed that suddenly backups stopped working.

I started getting these errors:

mysqldump: Got error: 1045: "Access denied for user 'appuser'@'10.0.2.11' (using password: YES)" when trying to connect
mv: can't create '/backup/db_backup_20181023070223.gz': Permission denied

My docker-compose.yml config looks like this (truncated):

backup:
    image: deitch/mysql-backup
    restart: always
    env_file: conf/production.env
    volumes:
      - ./conf/post-backup:/scripts.d/post-backup
      - backupdata:/backup
    networks:
      - webnet
    depends_on:
      - mysql

volumes:
  backupdata:

with env variables:

DB_SERVER=mysql
DB_PASS=***
DB_NAMES=mydbname
DB_DUMP_BEGIN=2130

I found that you made the change to run commands as a non-root user here: 37f78ca

Since I haven't pinned a specific tag of your image, it automatically upgraded to the latest version and hence broke my backups (I manage around 40-50 deployments that use your script).

Could you please advise on the upgrade process to support this new change, or am I better off pinning an older tag of the image?
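In case it's useful to others: one upgrade path (an assumption, based on the UID 1005 introduced by that commit) is a one-off chown of the named volume before the upgraded container starts:

    # give the image's non-root user (uid/gid 1005) write access to the volume;
    # replace <project> with your compose project name
    docker run --rm -v <project>_backupdata:/backup alpine chown -R 1005:1005 /backup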

Thanks! :)

give single dump files the name of the db

hey avi,

Since you added a big tar file with the individual dumps inside in #64, I really expected the individual database dumps (*.sql) to carry the name of the database. You added the timestamp, which is really, really great, but because the file is called backup_xxxxxxx.sql and not mydatabase_xxxxxx.sql, I have no idea what's inside the dump. I'd really appreciate it if you could change "backup" to the name of the database that was dumped 😏
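A sketch of the requested naming, assuming a per-database loop with the schema name in a variable like $db (workdir and now as in the debug trace elsewhere on this page):

    # use the schema name instead of the generic "backup" prefix
    mysqldump -h "$DB_SERVER" -P "$DB_PORT" $DBUSER $DBPASS \
        --databases "$db" > "${workdir}/${db}_${now}.sql"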


THANKS
