Comments (15)
Well, find on CIFS or NFS might be somewhat slow, but it works.
@deitch Of course the find command can be optimized:
find /target/ -type f \( -name "*.gz" -o -name "*.bz2" \) -mtime +5 -exec rm {} \;
As I expect the target folder won't contain hundreds or thousands of files, even if the find takes a few seconds, or even minutes, that shouldn't be a problem, as the whole process runs in the background. If a backup takes 5 or 7 minutes, I guess that is something everybody can deal with 😏
I am curious (sorry, I didn't take a deep look into your "after script" method): is it possible to get the target directory within my after-script? If yes, I could do it that way.
(But again, I would have to copy a separate file next to the container, which is something I try to avoid.)
from mysql-backup.
See #69. As soon as CI passes, I will merge it in and push to the hub.
from mysql-backup.
@deitch And again: I really appreciate your work here, thanks a lot! 👍
Great!
Curious how you are using it?
@deitch What do you mean by "how"? 😏
:-)
Use case. What you are backing up, what targets you use, how often, how much data, what orchestrator, in what cloud or private platform, etc.
So for now (I only noticed your script a few days ago):
- a customer's database (MariaDB)
- I back up every 24 hours
- from past experience I love to use duplicity, therefore I use a modified version of this script
- currently my targets are local (with your script) and S3 with duplicity; I wanted to use S3 with your script too, but I noticed one missing piece...
- I simply use docker-compose
- one of my own dedicated hosts with Xen; the services inside the guest run with Docker (docker-compose)
The missing piece I am talking about: when you back up something to somewhere, you have to make sure that you don't run out of space. Of course S3 doesn't have that problem, but what I need is a function like "delete backups/files older than XXXX days". Currently I use a little PHP script for that. I guess it would be a nice feature inside your script if you could add this for every target.
Local is easy; I don't know how difficult this is for S3. For S3, duplicity does the job for me right now.
But as I plan to back up the MySQL dump with your script to a different S3 location (so I don't interfere with duplicity), I will need that, because I don't want backups older than 30 days 😏
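The local side of such a retention pass could be sketched like this (the function and variable names are purely illustrative, not part of mysql-backup):

```shell
# Hypothetical retention helper: delete compressed backup archives older
# than a given number of days. Names are illustrative only.
prune_old_backups() {
    target_dir="$1"      # directory holding the dumps
    retention_days="$2"  # delete files older than this many days
    find "$target_dir" -type f \( -name "*.gz" -o -name "*.bz2" \) \
        -mtime "+$retention_days" -exec rm -f {} +
}
```

For example, `prune_old_backups /backups 30` would keep roughly the last 30 days of `.gz`/`.bz2` dumps.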
what I need is a function call like: "delete backups/files older than XXXX days"
We have looked at it in the past, including #48 and some other discussions. I don't completely object to it, but I do feel like the Unix philosophy of "do one thing and do it well" would mean that might be better as a separate task in a separate container? Or perhaps a separate run mode?
@michabbb With S3 you can use Object Lifecycle Management rules on your bucket to expire backups.
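For example, such an expiration rule can be attached to a bucket with the AWS CLI; the bucket name and prefix below are placeholders:

```shell
# Sketch only: attach a lifecycle rule that expires objects under mysql/
# after 30 days. "my-backup-bucket" and the prefix are placeholders.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": "mysql/"},
      "Expiration": {"Days": 30}
    }]
  }'
```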
Good point!
Let's put it this way: it would be nice to have this feature inside your container. Whether it's a completely new script or I have to call the container differently, I don't care, as long as everything stays inside one container. Of course I could create my own container based on yours, but every time you change something, I would need to rebuild my own 😕 I understand you, but I think a new script within your container, offering this feature for every target you support, would be a nice addition to this project.
@kabudu Thanks, correct, I missed that. But for local targets you don't have this; people always use their own scripts for that, and such a script is so simple. My main goal is to have everything inside one single container, so I don't have multiple scripts in different locations 😏
Yeah, I think it would work; you would just run it in a different mode. The current image can already do backup or restore, and which it does depends on the mode.
Want to take a stab at it? Totally fine to have a separate script, we can merge it in after.
@deitch For local and SMB (I guess) it's easy 😏
find /path/to/files -type f -mtime +5 -exec rm {} \;
But I have no idea how that works with S3 (if you decide to ignore the lifecycle stuff).
Well, find on CIFS or NFS might be somewhat slow, but it works.
On S3, that wouldn't work. You need to use the API.
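A rough sketch of what that could look like with the AWS CLI (the bucket URL is a placeholder; assumes GNU date):

```shell
# Sketch: S3 has no find(1), so list objects via the API and delete the
# ones whose timestamps fall before the cutoff. Names are placeholders.
older_than_days() {
    # $1 = "YYYY-MM-DD HH:MM" timestamp, $2 = age threshold in days
    [ "$(date -d "$1" +%s)" -lt "$(date -d "$2 days ago" +%s)" ]
}

prune_s3_prefix() {
    prefix_url="$1"   # e.g. s3://my-backup-bucket/mysql/
    days="$2"
    # `aws s3 ls` prints: date, time, size, key
    aws s3 ls "$prefix_url" | while read -r day time _size key; do
        [ -n "$key" ] || continue
        if older_than_days "$day $time" "$days"; then
            aws s3 rm "$prefix_url$key"
        fi
    done
}
```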
I have to check this lifecycle stuff; it's okay for me to use that (if it works) 😏
But instead of creating a completely new mode, maybe it's an idea to just add another ENV to your script, like DELETE_OLDER_THAN.
For example, duplicity has a parameter like this: --full-if-older-than 30D
which means that if backups are older than 30 days, a completely new full backup is made.
So in your script, you could just delete files older than XXX days after the files are pushed to the desired location. Just an idea. That wouldn't complicate things too much, in my eyes.
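Wired into a backup script, such an env-driven cleanup step might look like this (DELETE_OLDER_THAN and the function name are hypothetical, not existing mysql-backup options):

```shell
# Hypothetical post-push cleanup gated on an env var; purely a sketch.
post_backup_cleanup() {
    target_dir="$1"
    # Only prune when the (hypothetical) DELETE_OLDER_THAN variable is set.
    if [ -n "$DELETE_OLDER_THAN" ]; then
        find "$target_dir" -type f -mtime "+$DELETE_OLDER_THAN" -exec rm -f {} +
    fi
}
```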