
zfsbackup's Introduction

zfsbackup

As described in this post, I modified the awesome script from Jörg Binnewald @ https://esc-now.de/ to suit my needs.

The script is triggered automatically when the backup HDD is attached to the TrueNAS server. The datasets (and their child datasets) are backed up incrementally via ZFS send/receive. The script can be used on TrueNAS as well as on Linux machines.

Content

  • truenas.poolbackup.sh (backup script, triggered by the devd rule)
  • truenas-poolbackup-conf-example.env (example config file; rename it to truenas-poolbackup-conf.env)
  • devd-backuphdd.conf (devd rule that fires when the HDD is connected)
  • truenas-copy-devdconf.sh (workaround, because TrueNAS keeps removing the devd rule)
  • Folder "restore" (scripts to mount and unmount the backup HDD on a Linux machine; not needed if you use the script on TrueNAS)
  • Folder "homebackup" (helper scripts that run on my local Linux machine to back up the home folder; not needed if you use the script on TrueNAS)

Usage

  • configure the devd rule, and change the config in truenas-poolbackup-conf.env to suit your needs
  • place the script in a safe location on your data pool (the system partition is not upgrade-safe)
  • you may run the script manually; the following parameters are available (see the example invocations after this list):
    • -h show help
    • -f force a scrub, even if the condition (number of days passed since the last one) is not met
    • -d dry run (do not perform the actual backup)
    • -c specify a non-standard config file
    • -y don't ask before creating/overwriting/deleting datasets on the backup (caution: intended for initial backups; could cause data loss on existing backup data)
  • execute truenas-copy-devdconf.sh to enable automatic backups when the configured HDD is attached
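
For example, manual runs could look like this (the paths are placeholders for wherever the script and config live on your data pool):

# Dry run with the default config:
/mnt/datapool/scripts/truenas.poolbackup.sh -d

# Force a scrub and skip all confirmation prompts (careful on existing backups):
/mnt/datapool/scripts/truenas.poolbackup.sh -f -y

# Use a non-standard config file:
/mnt/datapool/scripts/truenas.poolbackup.sh -c /mnt/datapool/scripts/my-conf.env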

zfsbackup's People

Contributors

  • dapuru
  • pitastic

zfsbackup's Issues

False detection of bad pool shape

Thank you (and esc-now.de) for this pretty script.

I am about to tweak and use it for my purposes.

I am running TrueNAS on FreeBSD 12.2-RELEASE-p12. One or two updates ago, a new ZFS feature upgrade was introduced, which I decided not to apply. Therefore there is a little hint in zpool status which looks like this (for every pool with this ZFS version):

  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:50 with 0 errors on Sat Apr 16 03:45:50 2022
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  ada0p2      ONLINE       0     0     0

errors: No known data errors

Your regex for pool health looks, among other things, for 'UNAVAIL' and hits this info message even though the pool is healthy.

Maybe this is an edge case, but the following line needs to be tuned a bit to be more precise:

condition=$(zpool status | egrep -i '(DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover)')

A quick but slightly nasty fix could be:

zpool status | egrep -i '(DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover)' | grep -v "features are unavailable"
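
An alternative, more targeted check could read each pool's health property directly, so informational notes in the status text cannot match (a sketch, not the script's current logic):

# Flag only pools whose health property is not ONLINE; prose from
# "zpool status" (like the feature-flags hint) never enters the comparison.
# Note: unlike the original regex, this does not scan for the
# 'corrupt'/'cannot'/'unrecover' error strings.
condition=$(zpool list -H -o name,health | grep -v 'ONLINE$')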

Yes-to-all option

I have roughly 10 datasets, and most of them (especially the iocage ones) have child datasets. Some of them are big, some of them are small.

On my initial backup, I had to watch and keep reconnecting to the script process for 24 hours because of the prompts asking whether a new dataset should be created on the (empty/clean) backup HDD. Because of the child datasets, there are tons of these questions over the whole day.

I think the safety prompt is necessary neither on initial backups (the HDD is clean) nor for child datasets (if there were data, the parent would already have wiped it).

I would add a command-line option to skip any "No snapshot found" question, as sketched below.
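
A sketch of what that option could look like (the variable and function names are hypothetical, not the script's actual code):

# -y sets ASSUME_YES; confirm() then short-circuits every prompt.
ASSUME_YES=0
while getopts "y" opt; do
    case "$opt" in
        y) ASSUME_YES=1 ;;
    esac
done

confirm() {
    [ "$ASSUME_YES" -eq 1 ] && return 0    # yes-to-all: skip the prompt
    printf '%s [y/N] ' "$1"
    read -r answer
    [ "$answer" = "y" ]
}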

Better error handling

After some backups, one script execution (started headless via devd) failed because my USB drive ran out of disk space.

This is OK, and ZFS handled it correctly: the snapshots that fit were copied, and the one that did not fit on the disk was never started; no partial files, no mess 👍🏼

The downside is that the script fails without cleaning up after itself. The ZFS pool was not exported at all and was left in a bad state; I had to restart the NAS because of this.

There should be a cleanup function to prevent leaving the system in such a state.
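
A sketch of such a cleanup (assuming the script's BACKUPPOOL variable): register a trap so the pool is exported on any exit, successful or not:

cleanup() {
    # Export the backup pool if it is still imported, so a failed
    # send/receive does not leave the disk in a stuck state.
    if zpool list "$BACKUPPOOL" >/dev/null 2>&1; then
        zpool export "$BACKUPPOOL" || echo "WARN: could not export $BACKUPPOOL"
    fi
}
trap cleanup EXIT INT TERM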

Serious error messages are not showing up in the logs

While playing around with the devd feature, I started a backup without a correctly attached HDD (connected but detected as DETACHED). The pool on the disk could not be found (which makes sense!).

The script ran through all operations, failed a hundred times, and still sent me the "SUCC Backup" mail (which was not successful at all).

The logs show nothing about the missing pool, just about missing datasets.

I am working on another pull request to implement at least set -e (fail on error and exit) and some other features for cron (on demand via GUI) compatibility.
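
A minimal sketch of the fail-fast idea (set -e is what the pull request proposes; the explicit pool check is an assumed addition):

set -eu    # abort on any failed command or unset variable

# Bail out early, before any per-dataset work, if the backup pool is
# neither imported nor importable - and make the failure loud.
if ! zpool list "$BACKUPPOOL" >/dev/null 2>&1; then
    zpool import "$BACKUPPOOL" || {
        echo "ERROR: backup pool $BACKUPPOOL not found - aborting" >&2
        exit 1
    }
fi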

Feature: Select a child dataset without its parent

Hi there,

I tried to back up one of my datasets for a jail:

MASTERPOOL="storage"
BACKUPPOOL="seagatebackup"
MAINDATASETS=("iocage/jails/sync")

The backup pretends to be successful. Looking at the disk, no new dataset was created (which makes sense given the logs):

full send of storage/iocage/jails/sync/root@back-script-20220418-213143 estimated size is 1.89G
total estimated size is 1.89G
cannot open 'seagatebackup/iocage/jails/sync': dataset does not exist
cannot receive new filesystem stream: unable to restore to destination
warning: cannot send 'storage/iocage/jails/sync/root@back-script-20220418-213143': signal received

However, backing up parent datasets works fine.

Edit:
OK... the parameter name "-MAIN-dataset" speaks for itself... so just see it as a feature request then 😉
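
A possible direction for that feature (a sketch, not part of the script): pre-create the missing parent hierarchy on the backup pool so zfs receive has a destination:

# "zfs create -p" creates all missing parent datasets in one go.
zfs create -p seagatebackup/iocage/jails
zfs send storage/iocage/jails/sync@back-script-20220418-213143 | \
    zfs receive seagatebackup/iocage/jails/sync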

Fails when triggered by devd

Hi,

I get:

"****************** data_pool / xxxx ******************
ERROR - No snapshot found..."

via mail when the script is triggered by devd.

Triggered from the console, everything works fine.

What can I do?

Kind regards

Backup stops when started by devd - completes when started in a screen session

Hi, me again.
When started with devd, the script stops reproducibly at this point:

cat 2024-02-22_07-55-07-backup.log
Running on freebsd
########## Backup of pools on NAS-YVP ##########
Started Backup-job: 2024-02-22_07-55-07

+--------------+--------+------------+--------+
|Dataset |Size |Snapcount |SnapSize |
+--------------+--------+------------+--------+
| Dataset | 3.17T | none | 0B |
+--------------+--------+------------+--------+
| Dataset/Austausch | 328G | none | 2.61M |
+--------------+--------+------------+--------+
| Dataset/Org | 496G | none | 30.1M |
+--------------+--------+------------+--------+
| Dataset/Projekte | 2.33T | none | 1.08G |
+--------------+--------+------------+--------+
| Dataset/ProjekteIntern | 32.2G | none | 37.4M |
+--------------+--------+------------+--------+
| Dataset/ScanOrdner | 773M | none | 1.17M |
+--------------+--------+------------+--------+
pool: pool-backup2
state: ONLINE
scan: scrub repaired 0B in 1 days 00:06:26 with 0 errors on Thu Jan 25 10:16:29 2024
config:

NAME STATE READ WRITE CKSUM
pool-backup2 ONLINE 0 0 0
gptid/bd972045-b0af-11ee-810a-ece7a7077b48 ONLINE 0 0 0

errors: No known data errors

****************** data / Dataset ******************
most current snapshot in Backup: pool-backup2/Dataset@back-script-20240214-091013
new snapshot created: data/Dataset@back-script-20240222-075532
Send: data/Dataset@back-script-20240222-075532 to pool-backup2/Dataset
Destroying Snapshots for Dataset:
pool-backup2/Dataset@back-script-20240214-084453
data/Dataset@back-script-20240214-084453

****************** data / Dataset/Austausch ******************
most current snapshot in Backup: pool-backup2/Dataset/Austausch@back-script-20240214-091014

When started in a screen session, it works:

cat 2024-02-22_09-42-09-backup.log
Running on freebsd
########## Backup of pools on NAS-YVP ##########
Started Backup-job: 2024-02-22_09-42-09

+--------------+--------+------------+--------+
|Dataset |Size |Snapcount |SnapSize |
+--------------+--------+------------+--------+
| Dataset | 3.17T | none | 0B |
+--------------+--------+------------+--------+
| Dataset/Austausch | 328G | none | 2.61M |
+--------------+--------+------------+--------+
| Dataset/Org | 496G | none | 30.1M |
+--------------+--------+------------+--------+
| Dataset/Projekte | 2.33T | none | 1.08G |
+--------------+--------+------------+--------+
| Dataset/ProjekteIntern | 32.2G | none | 37.4M |
+--------------+--------+------------+--------+
| Dataset/ScanOrdner | 773M | none | 1.17M |
+--------------+--------+------------+--------+
pool: pool-backup2
state: ONLINE
scan: scrub repaired 0B in 1 days 00:06:26 with 0 errors on Thu Jan 25 10:16:29 2024
config:

NAME STATE READ WRITE CKSUM
pool-backup2 ONLINE 0 0 0
gptid/bd972045-b0af-11ee-810a-ece7a7077b48 ONLINE 0 0 0

errors: No known data errors

****************** data / Dataset ******************
most current snapshot in Backup: pool-backup2/Dataset@back-script-20240222-075532
new snapshot created: data/Dataset@back-script-20240222-094233
Send: data/Dataset@back-script-20240222-094233 to pool-backup2/Dataset
Destroying Snapshots for Dataset:
pool-backup2/Dataset@back-script-20240214-091013
data/Dataset@back-script-20240214-091013

****************** data / Dataset/Austausch ******************
most current snapshot in Backup: pool-backup2/Dataset/Austausch@back-script-20240214-091014
new snapshot created: data/Dataset/Austausch@back-script-20240222-094234
Send: data/Dataset/Austausch@back-script-20240222-094234 to pool-backup2/Dataset/Austausch
Destroying Snapshots for Dataset/Austausch:
pool-backup2/Dataset/Austausch@back-script-20240207-144818
data/Dataset/Austausch@back-script-20240222-075532
data/Dataset/Austausch@back-script-20240214-091014
data/Dataset/Austausch@back-script-20240214-091013

****************** data / Dataset/Org ******************
most current snapshot in Backup: pool-backup2/Dataset/Org@back-script-20240214-091015
new snapshot created: data/Dataset/Org@back-script-20240222-094235
Send: data/Dataset/Org@back-script-20240222-094235 to pool-backup2/Dataset/Org
Destroying Snapshots for Dataset/Org:
pool-backup2/Dataset/Org@back-script-20240207-144819
data/Dataset/Org@back-script-20240222-075532
data/Dataset/Org@back-script-20240214-091015
data/Dataset/Org@back-script-20240214-091013

****************** data / Dataset/Projekte ******************
most current snapshot in Backup: pool-backup2/Dataset/Projekte@back-script-20240214-091018
new snapshot created: data/Dataset/Projekte@back-script-20240222-094237
Send: data/Dataset/Projekte@back-script-20240222-094237 to pool-backup2/Dataset/Projekte
Destroying Snapshots for Dataset/Projekte:
pool-backup2/Dataset/Projekte@back-script-20240207-144819
data/Dataset/Projekte@back-script-20240222-075532
data/Dataset/Projekte@back-script-20240214-091018
data/Dataset/Projekte@back-script-20240214-091013

****************** data / Dataset/ProjekteIntern ******************
most current snapshot in Backup: pool-backup2/Dataset/ProjekteIntern@back-script-20240214-091437
new snapshot created: data/Dataset/ProjekteIntern@back-script-20240222-094308
Send: data/Dataset/ProjekteIntern@back-script-20240222-094308 to pool-backup2/Dataset/ProjekteIntern
Destroying Snapshots for Dataset/ProjekteIntern:
pool-backup2/Dataset/ProjekteIntern@back-script-20240207-144820
data/Dataset/ProjekteIntern@back-script-20240222-075532
data/Dataset/ProjekteIntern@back-script-20240214-091437
data/Dataset/ProjekteIntern@back-script-20240214-091013

****************** data / Dataset/ScanOrdner ******************
most current snapshot in Backup: pool-backup2/Dataset/ScanOrdner@back-script-20240214-091441
new snapshot created: data/Dataset/ScanOrdner@back-script-20240222-094315
Send: data/Dataset/ScanOrdner@back-script-20240222-094315 to pool-backup2/Dataset/ScanOrdner
Destroying Snapshots for Dataset/ScanOrdner:
pool-backup2/Dataset/ScanOrdner@back-script-20240207-144821
data/Dataset/ScanOrdner@back-script-20240222-075532
data/Dataset/ScanOrdner@back-script-20240214-091441
data/Dataset/ScanOrdner@back-script-20240214-091013

****************** pool-backup2 - Cleanup ******************
Last Scrub for pool-backup2 was on 1706137200 (28 days ago, below 30 days-limit) - NO scrub needed...
Finished Backup-job: 2024-02-22_09-43-17

I have no idea why. How can I debug this?

Kind regards
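
One way to attack the debugging question (a sketch; the sh -x wrapper and log path are assumptions, not part of the repo's devd rule): run the headless invocation under a shell trace so the exact command where it stalls lands in a log:

# Hypothetical variant of the action line in devd-backuphdd.conf:
action "/bin/sh -x /mnt/datapool/scripts/truenas.poolbackup.sh >> /var/log/zfsbackup-devd.log 2>&1";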

Initial scrub logic fails

Hi,

If

scrubRawDate=$(zpool status $BACKUPPOOL | grep scrub | awk '{print $15 $12 $13}')

is empty (because no scrub has ever run before), we get a confusing error message in the status mails.

I am not able to correct this, sorry.

Regards
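
A possible guard (a sketch, not a tested fix): treat an empty scrub date as "never scrubbed" before any date math runs:

if [ -z "$scrubRawDate" ]; then
    # The pool has never been scrubbed: skip the age comparison and
    # either scrub now or log a clear message instead of a parse error.
    echo "No previous scrub found for $BACKUPPOOL - starting one."
    zpool scrub "$BACKUPPOOL"
fi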

Script is looking for wrong snapshot names and fails when child and parent are different

I create snapshots with periodic tasks and also use this script, which creates snapshots of its own. For one of my datasets, the result is this list of snapshots, for example:

# Storage                         # Backup
Dokumente@20220101-1200           Dokumente@20220101-1200
Dokumente@20220202-1300           Dokumente@20220202-1300
- None -                          Dokumente@20220202-1330
Dokumente/Child@20220101-1200     Dokumente/Child@20220101-1200
Dokumente/Child@20220101-1300     Dokumente/Child@20220101-1300
Dokumente/Child@20220203-1330     Dokumente/Child@20220203-1330

As you can see, the parent's and child's snapshot lists are not equal on storage, and one is not even equal to its counterpart on the backup.

The following line of the script looks for the most recent snapshot (to use this value in zfs send):

origBSnap=$(zfs list -rt snap -H -o name "${MASTERPOOL}/${DATASET}" | grep $recentBSnap | cut -d@ -f2)

This is unfortunately not accurate: in my example it would find the snapshot name Dokumente/Child@20220203-1330, which causes the script to fail because there is no corresponding Dokumente@20220203-1330 snapshot.

Maybe every dataset's snapshots (child or parent) have to be looked up on their own?
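
One possible fix (a sketch; -d 1 limits the listing to the dataset's own snapshots, and the anchored grep assumes recentBSnap carries the snapshot name after its @):

# List only this dataset's snapshots (-d 1 excludes children) and match
# the snapshot name exactly at the end of the line.
snapname=${recentBSnap#*@}
origBSnap=$(zfs list -t snap -H -o name -d 1 "${MASTERPOOL}/${DATASET}" \
    | grep "@${snapname}$" | cut -d@ -f2)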

Feature: Restore Backup

It would be nice to use this script also for restoring the backups it makes.

Without looking deeper into it: is it more overhead than just swapping source and destination via a CLI flag, for example (plus some extra handling, of course)?
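
For reference, the reversed direction boils down to something like this (a sketch with hypothetical variable names; the "extra handling" - mount points, incremental chains, rollback safety - is the real work):

# Send the chosen snapshot from the backup pool back to the master pool.
# -F rolls the destination back to match, so use it with care.
zfs send "${BACKUPPOOL}/${DATASET}@${SNAPSHOT}" | \
    zfs receive -F "${MASTERPOOL}/${DATASET}"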

Feature: Log free backup space

Before a backup fails due to a lack of free disk space on the USB HDD, we could include the amount of free space in every backup log.
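
That could be as small as one line per run (a sketch; "free" is the pool-level zpool property):

# Log the backup pool's free space at the start of every run.
echo "Free space on ${BACKUPPOOL}: $(zpool list -H -o free "$BACKUPPOOL")"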
