bolthole / zrep
ZREP: ZFS based replication and failover script from bolthole.com
License: Other
When I run /usr/local/bin/zrep -t zrep-local sync tank/officeshare
from the command line, everything syncs appropriately.
When I put that same command into my crontab, I get the following response through e-mail:
Error: Problem doing sync for tank/officeshare@zrep-local_000024. renamed to tank/officeshare@zrep-local_000024_unsent
Is there a way to enable verbose messages? I'm not familiar with ksh, but bash has the option to run set -x, which makes the executing script print out every line it executes so you can look for the failure.
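For what it's worth, ksh honors set -x too. As a sketch (the log path and the hourly schedule are assumptions, not from the report), the crontab entry could invoke zrep under ksh with tracing enabled and capture everything to a file instead of cron mail:

```shell
# crontab entry (sketch): run zrep under ksh with xtrace enabled and
# append the trace plus any errors to a log file for later inspection
0 * * * * /bin/ksh -x /usr/local/bin/zrep -t zrep-local sync tank/officeshare >> /var/log/zrep-trace.log 2>&1
```

Comparing that trace with a successful interactive run should show where the cron environment diverges.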
I have observed that ssh is quite slow, and I don't want to use a non-default patched version.
Instead I use a homemade script that replicates local filesystems to remote ones, but connects with mbuffer over an OpenVPN-encrypted channel. I am saturating a 100Mbit link, something ssh cannot be expected to do.
It would be nice to drop my poorly written home script and use this popular script instead.
Roughly, on the receiving side I do this in rc.local (and ensure nobody can talk to it except the sender):
while true; do (mbuffer -s 128k -m 64M -I 9090 | lz4 -d - |zfs receive -Fvdu big); done
while on the sending side:
zfs send -vI @$FROMSNAP $localdataset@$TOSNAP | lz4 -c -9 - | mbuffer -s 128k -m 500M -O $remotehost:9090
thanks!
This is more or less a feature request: I use a custom snapshot script that makes snapshots via cron and stores the snapshot retention time as a user property. The problem is that this user property is not replicated, which causes all replicated snapshots to be treated as outdated.
It would be great if zrep could synchronize either all user properties or a configurable list of user properties.
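As an illustration of the idea (this is not zrep code; the property prefix com.example: and the dataset names are hypothetical), a small filter can turn zfs get output into the zfs set commands needed on the destination, copying only locally-set properties that match a prefix:

```shell
#!/bin/sh
# Sketch: generate `zfs set` commands for a destination filesystem from
# `zfs get -H -o name,property,value,source all <fs>` output on stdin.
# Only locally-set properties whose names start with the prefix are kept.
gen_prop_sync() {
    prefix=$1 destfs=$2
    while read -r name prop value source; do
        case $source in local) ;; *) continue ;; esac    # skip inherited/default
        case $prop in "$prefix"*) ;; *) continue ;; esac # keep prefixed props only
        printf 'zfs set %s=%s %s\n' "$prop" "$value" "$destfs"
    done
}
```

Fed with zfs get -H -o name,property,value,source all pool/fs, it prints the zfs set commands to run (e.g. via ssh) on the destination.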
Thank you,
Michael.
Hi,
today we did a failover of 2 filesystems from the secondary node to the primary, and after that one of the filesystems on the primary node created many "unsent" snapshots, which of course were not visible on the secondary node.
So the failover worked for one of the 2 filesystems but not for the other.
Here is the Overview:
Error: Problem doing sync for pool1/apachelogs@p1_apachelogs_000135. renamed to pool1/apachelogs@p1_apachelogs_000135_unsent
Primary and Secondary Node after the Failover:
Primary:
NAME USED AVAIL REFER MOUNTPOINT
pool1 17.3G 59.8G 32K /pool1
pool1/apachelogs 54.5M 4.95G 44.3M /pool1/apachelogs
pool1/apachelogs@p1_apachelogs_000118 251K - 43.2M -
pool1/apachelogs@p1_apachelogs_000119 256K - 43.2M -
pool1/apachelogs@p1_apachelogs_00011a 260K - 43.2M -
pool1/apachelogs@p1_apachelogs_00011b 266K - 43.2M -
pool1/apachelogs@p1_apachelogs_00011c 243K - 43.2M -
pool1/apachelogs@p1_apachelogs_00011d 210K - 43.2M -
pool1/apachelogs@p1_apachelogs_00011e_unsent 230K - 43.2M -
pool1/apachelogs@p1_apachelogs_00011f_unsent 278K - 43.3M -
pool1/apachelogs@sp1_apachelogs_000120_unsent 288K - 43.3M -
pool1/apachelogs@p1_apachelogs_000121_unsent 294K - 43.4M -
pool1/apachelogs@p1_apachelogs_000122_unsent 296K - 43.4M -
pool1/apachelogs@p1_apachelogs_000123_unsent 328K - 43.6M -
pool1/apachelogs@p1_apachelogs_000124_unsent 370K - 43.7M -
pool1/apachelogs@p1_apachelogs_000125_unsent 364K - 43.9M -
pool1/apachelogs@p1_apachelogs_000126_unsent 376K - 43.9M -
pool1/apachelogs@p1_apachelogs_000127_unsent 370K - 43.9M -
pool1/apachelogs@p1_apachelogs_000128_unsent 372K - 43.9M -
pool1/apachelogs@p1_apachelogs_000129_unsent 376K - 43.9M -
pool1/apachelogs@p1_apachelogs_00012a_unsent 380K - 43.9M -
pool1/apachelogs@p1_apachelogs_00012b_unsent 382K - 44.0M -
pool1/apachelogs@p1_apachelogs_00012c_unsent 400K - 44.0M -
pool1/apachelogs@p1_apachelogs_00012d_unsent 414K - 44.1M -
pool1/apachelogs@p1_apachelogs_00012e_unsent 400K - 44.2M -
pool1/apachelogs@p1_apachelogs_00012f_unsent 403K - 44.2M -
pool1/apachelogs@p1_apachelogs_000130_unsent 406K - 44.2M -
pool1/apachelogs@p1_apachelogs_000131_unsent 412K - 44.2M -
pool1/apachelogs@p1_apachelogs_000132_unsent 402K - 44.2M -
pool1/apachelogs@p1_apachelogs_000133_unsent 405K - 44.2M -
pool1/apachelogs@p1_apachelogs_000134_unsent 426K - 44.3M -
pool1/apachelogs@p1_apachelogs_000135_unsent 326K - 44.3M -
Secondary:
NAME USED AVAIL REFER MOUNTPOINT
pool1 17.2G 59.8G 32K /pool1
pool1/apachelogs 44.7M 4.96G 43.3M /pool1/apachelogs
pool1/apachelogs@p1_apachelogs_000118 252K - 43.3M -
pool1/apachelogs@p1_apachelogs_000119 256K - 43.3M -
pool1/apachelogs@p1_apachelogs_00011a 262K - 43.3M -
pool1/apachelogs@p1_apachelogs_00011b 266K - 43.3M -
pool1/apachelogs@p1_apachelogs_00011c 244K - 43.3M -
pool1/apachelogs@p1_apachelogs_00011d 77K - 43.3M -
Primary properties:
pool1/apachelogs readonly off default
pool1/apachelogs p1_apachelogs:dest-fs pool1/apachelogs local
pool1/apachelogs p1_apachelogs:src-fs pool1/apachelogs local
pool1/apachelogs p1_apachelogs:savecount 5 received
pool1/apachelogs p1_apachelogs:master yes local
pool1/apachelogs p1_apachelogs:src-host sweb1a local
pool1/apachelogs p1_apachelogs:lock-time 20160930103015 local
pool1/apachelogs p1_apachelogs:dest-host sweb1b local
pool1/apachelogs p1_apachelogs:lock-pid 11663 local
Secondary properties:
pool1/apachelogs readonly on local
pool1/apachelogs p1_apachelogs:src-host [email protected] local
pool1/apachelogs p1_apachelogs:dest-host sweb1b local
pool1/apachelogs p1_apachelogs:src-fs pool1/apachelogs local
pool1/apachelogs p1_apachelogs:dest-fs pool1/apachelogs local
pool1/apachelogs p1_apachelogs:savecount 5 local
We didn't see this behavior in our previous failover and takeover tests.
Could Apache have been the cause of the problem, since it was still reading from and writing to the filesystem?
In cases like this one, how can we repair the replication without having to redo it all over again?
Thanks!
Ximena.
I'm using zrep for one filesystem (works awesome), but I'm trying to add another that (unfortunately) has a space in the name ("storage/storage/NETWORK PARTS").
Using zfs I can manipulate it by surrounding it in quotes, but that doesn't seem to work with zrep. I'm currently using 1.6.7, so not sure if this has been addressed in the non-stable builds.
Hello,
Thanks for your efforts on this script. I adjusted the ksh shebang as per your recommendations for FreeNAS (9.3.1).
[root@freenas] ~# ./zrep -i Backup8TB localhost Backup8TB/zrepPool
Setting properties on Backup8TB
Creating snapshot Backup8TB@zrep_000000
Sending initial replication stream to localhost:Backup8TB/zrepPool
Initialization copy of Backup8TB to localhost:Backup8TB/zrepPool complete
[root@freenas] ~# ./zrep -S Backup8TB
Error: /proc fs must be functional to use zrep
[root@freenas] ~# stat /proc/
978896654 92804 drwxr-xr-x 2 root wheel 4294967295 2 "Aug 28 11:38:41 2016" "Aug 28 11:38:41 2016" "Aug 28 11:38:41 2016" "Aug 28 11:38:41 2016" 4096 3 0x800 /proc/
Any ideas? I made a zpool called Backup8TB, and the dataset has the same name, so this seems like the proper syntax; calling it with Backup8TB/Backup8TB returned an error.
[root@freenas] ~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
Backup8TB 520K 7.02T 96K /mnt/Backup8TB
Backup8TB/zrepPool 96K 7.02T 96K /mnt/Backup8TB/zrepPool
[root@freenas] ~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
Backup8TB 7.25T 580K 7.25T - 0% 0% 1.00x ONLINE /mnt
Hi Philip
We started using zrep in production and are pretty happy with it so far.
One thing we were concerned about was snapshot sequence exhaustion. What happens after we arrive at sequence "ffffff"? zrep will not pick up the last snapshot sequence correctly in getlastsnap(). Proof:
#!/bin/ksh -p
oldseq="fffffe"
snapshots=""
for((i=1;i<=5;i++)); do
snapshots="rpool/demo@zrep_$oldseq\n$snapshots"
newseq=$((0x$oldseq))
newseqX=$(printf "%.6x" $(($newseq + 1)) )
print "$i: $oldseq -> $newseqX"
oldseq=$newseqX
done
echo
echo "### SNAPSHOTS"
echo -ne "$snapshots"
echo
echo "### LAST_SNAPSHOT"
lastsnap=`echo -ne $snapshots | sed -n '/@zrep_[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]/'p | sort | tail -1`
echo $lastsnap
Output of this test:
1: fffffe -> ffffff
2: ffffff -> 1000000
3: 1000000 -> 1000001
4: 1000001 -> 1000002
5: 1000002 -> 1000003
### SNAPSHOTS
rpool/demo@zrep_1000002
rpool/demo@zrep_1000001
rpool/demo@zrep_1000000
rpool/demo@zrep_ffffff
rpool/demo@zrep_fffffe
### LAST_SNAPSHOT
rpool/demo@zrep_ffffff
Expected last snapshot would be rpool/demo@zrep_1000002 instead of rpool/demo@zrep_ffffff.
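A possible fix is to compare the sequences numerically rather than lexically. Here is a standalone sketch (not zrep's actual code, and it assumes plain zrep_XXXXXX names with no suffix) that picks the last snapshot by the decimal value of the hex sequence:

```shell
# Sketch (not zrep's code): pick the newest zrep snapshot by the numeric
# value of its hex sequence, so ffffff < 1000000 compares correctly.
getlastsnap_numeric() {
    while read -r snap; do
        seq=${snap##*_}                        # hex sequence after the last "_"
        printf '%d %s\n' "0x$seq" "$snap"      # decimal sort key + name
    done | sort -n | tail -1 | cut -d' ' -f2-  # numeric sort, keep the name
}
```

With this, zrep_1000002 correctly sorts after zrep_ffffff.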
If you can fix this easily, please go ahead. If not, no worries - I am perfectly aware that in 32 years we will no longer be running zrep, even if it's gonna be one of the best scripts in the world. :)
The maths, under the assumption of running zrep every minute:
(16^6)/(365*24*60) = ~32 years
Cheers, Philip
Can you add a --version flag to output the current version of zrep?
Since zrep now supports tags for multiple destinations, I have been taking advantage of this and syncing the source with two destinations. But zrep sends all intermediary snapshots, using the -I option: https://github.com/bolthole/zrep/blob/master/zrep#L1138. The snapshots created on the source look like this:
NAME USED AVAIL REFER MOUNTPOINT
datapool/ops-spool@zrep-brigade_0000c0 0 - 773M -
datapool/ops-spool@zrep_0001e1 0 - 773M -
Next, when I do another zrep sync with the zrep-brigade tag, it sends both zrep_0001e1 and zrep-brigade_0000c1, which is not what we want.
This would not be a big deal, except that when it comes time to expire old snapshots, say for the zrep-brigade tag, it only expires zrep-brigade snapshots, logically. All the other zrep_xxx snapshots stick around and keep accumulating. You could remove the -I option when doing the zfs send, and I don't think removing it would affect zrep operation. I removed it in my local copy and am testing it, but wanted to create an issue so other folks can see why zrep is sending and accumulating other tags when doing multi-destination syncs.
When I played with the script, I noticed that destination properties are set one by one when recv -x is not supported. I also saw from code comments that you like to always send with the -p option. There is the possibility to send with -p and receive with -u, so the destination filesystem isn't mounted and the mountpoint can be fixed after the transfer.
In my pull #36 (comment) nothing changed against your original approach: the destination is mounted at the end, and no init-unmounted is needed. My code could probably be used instead of the else section that sets properties one by one, but I wasn't sure; I only know SmartOS, so I left it intact.
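For readers following along, the -p/-u combination described above looks roughly like this (hostnames, dataset names, and mountpoints are made up; this is a sketch of the technique, not zrep's implementation):

```shell
# Send with properties (-p) and receive without mounting (-u),
# so the mountpoint can be corrected before the first mount.
zfs snapshot tank/data@init
zfs send -p tank/data@init | ssh desthost zfs receive -u backup/data
ssh desthost zfs set mountpoint=/backup/data backup/data
ssh desthost zfs set readonly=on backup/data
```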
Hi Phillip,
I have tested your script and it works great! Many thanks for it (and its constant improvements)...
However, I would like to see this output (example below) written with timestamps:
sending srv2/zfs_id_1065@zrep_000235 to srv1:srv1/zfs_id_1065
Expiring zrep snaps on srv2/zfs_id_1065
Also running expire on srv1:srv1/zfs_id_1065 now...
Expiring zrep snaps on srv1/zfs_id_1065
Like this:
[1970-01-01 01:00:00] sending srv2/zfs_id_1065@zrep_000235 to srv1:srv1/zfs_id_1065
[1970-01-01 01:02:00] Expiring zrep snaps on srv2/zfs_id_1065
[1970-01-01 01:04:00] Also running expire on srv1:srv1/zfs_id_1065 now...
[1970-01-01 01:06:00] Expiring zrep snaps on srv1/zfs_id_1065
Or something similar (maybe if enabled, to specify format)...
I guess it should not be too complicated, and our logs would then show the exact time when zrep was doing each task.
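Until such an option exists, this can be approximated from outside zrep by piping its output through a small timestamping filter; a sketch (the function name and log path are made up):

```shell
# Sketch: prefix every line of stdin with a [YYYY-MM-DD HH:MM:SS] stamp.
ts_filter() {
    while IFS= read -r line; do
        printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done
}
```

Usage idea: zrep sync all 2>&1 | ts_filter >> /var/log/zrep.log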
With best regards
Predrag Zečević
Hi Philip,
What a great script! That's exactly what we were looking for when we planned to move away from our existing rsync-based solution on ext4 under Proxmox VE (a Debian Linux based virtualization platform for OpenVZ & KVM). Very nice of you to provide this great script to the community. Thanks.
We are not yet running ZFS/ZREP in production but the flags are all green to start soon, since Proxmox VE 3.4 introduced ZFS support with the latest ZFSOnLinux built in.
One thing that did not work on ZFSOnLinux was the detection of Z_HAS_X / Z_HAS_SNAPPROPS / DEPTHCAP in zrep_vars, lines 45-78. As a quick fix, we simply overrode the (wrongly) auto-detected values:
Z_HAS_X=0 # cannot use recv -x
Z_HAS_SNAPPROPS=1 # can set properties on snapshots
DEPTHCAP="-d 1" # limits "list -r"
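One way the auto-detection could be made portable (a sketch, not zrep's current logic) is to grep the relevant subcommand's usage lines for the flag in question; against the ZFSOnLinux usage below, which lists receive [-vnFu] with no -x, this correctly yields 0:

```shell
# Sketch: report whether the "receive" usage lines advertise a -x flag.
# Reads usage text on stdin, so it can be fed from `zfs 2>&1`.
detect_recv_x() {
    if grep '^[[:space:]]*receive' | grep -q -- '-[A-Za-z]*x'; then
        echo 1
    else
        echo 0
    fi
}
```

Something like Z_HAS_X=$(zfs 2>&1 | detect_recv_x) would then work across implementations.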
If you could also make the auto-detection of these features compatible with zfs output of ZFSOnLinux, that would be great. Here you go:
$ zfs 2>&1
missing command
usage: zfs command args ...
where 'command' is one of the following:
create [-p] [-o property=value] ... <filesystem>
create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
destroy [-fnpRrv] <filesystem|volume>
destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...]
snapshot|snap [-r] [-o property=value] ... <filesystem@snapname|volume@snapname> ...
rollback [-rRf] <snapshot>
clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
promote <clone-filesystem>
rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot>
rename [-f] -p <filesystem|volume> <filesystem|volume>
rename -r <snapshot> <snapshot>
list [-Hp] [-r|-d max] [-o property[,...]] [-s property]...
[-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...
set <property=value> <filesystem|volume|snapshot> ...
get [-rHp] [-d max] [-o "all" | field[,...]] [-t type[,...]] [-s source[,...]]
<"all" | property[,...]> [filesystem|volume|snapshot] ...
inherit [-rS] <property> <filesystem|volume|snapshot> ...
upgrade [-v]
upgrade [-r] [-V version] <-a | filesystem ...>
userspace [-Hinp] [-o field[,...]] [-s field]...
[-S field]... [-t type[,...]] <filesystem|snapshot>
groupspace [-Hinp] [-o field[,...]] [-s field]...
[-S field]... [-t type[,...]] <filesystem|snapshot>
mount
mount [-vO] [-o opts] <-a | filesystem>
unmount [-f] <-a | filesystem|mountpoint>
share <-a | filesystem>
unshare <-a | filesystem|mountpoint>
send [-DnPpRrv] [-[iI] snapshot] <snapshot>
receive [-vnFu] <filesystem|volume|snapshot>
receive [-vnFu] [-d | -e] <filesystem>
allow <filesystem|volume>
allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
<filesystem|volume>
allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
allow -c <perm|@setname>[,...] <filesystem|volume>
allow -s @setname <perm|@setname>[,...] <filesystem|volume>
unallow [-rldug] <"everyone"|user|group>[,...]
[<perm|@setname>[,...]] <filesystem|volume>
unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>
hold [-r] <tag> <snapshot> ...
holds [-r] <snapshot> ...
release [-r] <tag> <snapshot> ...
diff [-FHt] <snapshot> [snapshot|filesystem]
Each dataset is of the form: pool/[dataset/]*dataset[@name]
For the property list, run: zfs set|get
For the delegated permission list, run: zfs allow|unallow
Best regards,
Philip
I have a recent version of SmartOS. Running zrep -i zones/disk 10.1.60.30 zones/disk results in:
Sorry, your zfs is too old for zrep to safely handle volume initialization
Error: Please initialize volume target by hand, if you won't upgrade
zones/g is a zfs volume, created as zfs create -V 10G zones/disk.
However, a zfs filesystem is doing OK.
This is the latest zrep, 1.6.1. Is this supposed to happen, or is it a bug?
Hello there. We're testing zrep on our testing systems before we go live in production, and while testing I found the issue where one filesystem, being replicated to another server in 2 different pools, is not removing the snapshots (the tagged ones) on the first synced filesystem (destination).
For a better understanding of my situation, here's an example:
Origin Server:
root@origin:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
pool01 850K 48.2G 31K /pool01
pool01/fs01 30K 48.2G 30K /pool01/fs01
pool01/fs01@zrep_000008 0 - 30K -
pool01/fs01@zrep_2tag_000008 0 - 30K -
pool01/fs01@zrep_000009 0 - 30K -
pool01/fs01@zrep_2tag_000009 0 - 30K -
pool01/fs01@zrep_00000a 0 - 30K -
pool01/fs01@zrep_2tag_00000a 0 - 30K -
pool01/fs01@zrep_00000b 0 - 30K -
pool01/fs01@zrep_2tag_00000b 0 - 30K -
pool01/fs01@zrep_00000c 0 - 30K -
pool01/fs01@zrep_2tag_00000c 0 - 30K -
pool02 1.24G 47.0G 31K /pool02
pool02/fs02 1.24G 47.0G 1.24G /pool02/fs02
pool1 7.62M 19.2G 31K /pool1
pool1/st1 7.33M 19.2G 7.33M /pool1/st1
root@origin:
Destination Server:
root@destination:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
pool01 1.22M 48.2G 31K /pool01
pool01/fs01 46K 48.2G 30K /pool01/fs01
pool01/fs01@zrep_2tag_000000 1K - 30K -
pool01/fs01@zrep_2tag_000001 1K - 30K -
pool01/fs01@zrep_2tag_000002 1K - 30K -
pool01/fs01@zrep_2tag_000003 1K - 30K -
pool01/fs01@zrep_2tag_000004 1K - 30K -
pool01/fs01@zrep_2tag_000005 1K - 30K -
pool01/fs01@zrep_2tag_000006 1K - 30K -
pool01/fs01@zrep_2tag_000007 1K - 30K -
pool01/fs01@zrep_000008 1K - 30K -
pool01/fs01@zrep_2tag_000008 1K - 30K -
pool01/fs01@zrep_000009 1K - 30K -
pool01/fs01@zrep_2tag_000009 1K - 30K -
pool01/fs01@zrep_00000a 1K - 30K -
pool01/fs01@zrep_2tag_00000a 1K - 30K -
pool01/fs01@zrep_00000b 1K - 30K -
pool01/fs01@zrep_2tag_00000b 1K - 30K -
pool01/fs01@zrep_00000c 0 - 30K -
pool02 1.48M 48.2G 31K /pool02
pool02/fs01 34K 48.2G 30K /pool02/fs01
pool02/fs01@zrep_2tag_000008 1K - 30K -
pool02/fs01@zrep_2tag_000009 1K - 30K -
pool02/fs01@zrep_2tag_00000a 1K - 30K -
pool02/fs01@zrep_2tag_00000b 1K - 30K -
pool02/fs01@zrep_2tag_00000c 0 - 30K -
root@destination:
Here you can see the Cronjob:
*/1 * * * * root zrep -S pool01/fs01; zrep -t zrep_2tag -S pool01/fs01
Here are the Filesystem properties:
root@origin:~# zrep -l -v pool01/fs01
pool01/fs01:
xattr sa
zrep_2tag:master yes
zrep:src-host vs1o
zrep_2tag:dest-fs pool02/fs01
zrep_2tag:src-fs pool01/fs01
zrep_2tag:dest-host [email protected]
zrep:savecount 5
zrep:dest-fs pool01/fs01
zrep_2tag:savecount 5
zrep:master yes
zrep_2tag:src-host vs1o
zrep:dest-host [email protected]
zrep:src-fs pool01/fs01
last snapshot synced: pool01/fs01@zrep_000013
root@origin:
As you may have noticed, on the origin I have snapshots from @zrep_2tag_000008 to @zrep_2tag_00000b, but on the destination I have snapshots from @zrep_2tag_000000 to @zrep_2tag_00000b.
Is this a real issue here or am I missing something while setting up and/or initializing zrep?
Thanks a lot for your amazing contribution!
After running zrep refresh (from the backup server), the old snapshots still exist on both master and slave. The _expire function is never called.
Hello
nas4free: zrep# zrep -i raid10/ubuntu1 192.168.40.109 P4Tb/ubuntu1
Sorry, your zfs is too old for zrep to safely handle volume initialization
Error: Please initialize volume target by hand, if you won't upgrade
nas4free: zrep# zfs get -H -o value type raid10/ubuntu1
volume
The volume is not sent.
nas4free: zrep# zfs get -H -o value type raid10/storage
filesystem
nas4free: zrep# zrep status -v
raid10/storage ->192.168.40.109:P4Tb/storage Mar 7 4:58 2017
The filesystem is OK.
master 192.168.40.110
nas4free: zrep# uname -a
FreeBSD nas4free.local 11.0-RELEASE-p5 FreeBSD 11.0-RELEASE-p5 #0 r309722M: Thu Dec 8 22:52:57 CET 2016 [email protected]:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64 amd64
slave 192.168.40.109
nas4free: ~# uname -a
FreeBSD nas4free.local 11.0-RELEASE-p7 FreeBSD 11.0-RELEASE-p7 #0 r312343M: Tue Jan 17 15:41:49 CET 2017 [email protected]:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64 amd64
Why is the volume not transmitted?
Hi,
I'd like to package zrep for NixOS. (https://nixos.org)
Because of the nature of this distribution, I'll need to make modifications. This will be things like changing the interpreter path to /usr/bin/env ksh, and otherwise adapting it to NixOS's nonstandard directory structure. Additionally we'd like to autogenerate a "compiled" version for the Nix derivation cache, so there'd be redistribution as well.
Is this okay with you?
On a secondary note, which exact version of ksh do you need to use? The link on http://www.kornshell.com/software/ is dead.
Oh, and while you're looking at this... is it okay to use zrep on a parent filesystem, where I don't want to replicate its children?
I glanced at the source, but it wasn't immediately obvious to me. Is there a way I can get zrep to establish its SSH connection to an alternate port? I have one particular box that's behind a firewall that has multiple port forwards for SSH and it's sitting on port 229.
Can I set the environment variable SSH="ssh -p 229"?
Sometimes, syncing fails in a way that prevents further syncing: as far as I understand, at some point a snapshot is correctly received by the destination machine, but the reception is not acknowledged by the source (probably because the connection dropped after sending the data but before the diff is applied on the destination). In that case, during the next sync zrep tries to upload the already applied snapshot, and thus fails (and if the sync runs from a cron job, the system starts to accumulate lots of snapshots).
In this case, in order to recover manually I have to roll back the destination filesystem to a previous snapshot and sync again. It would be nice if zrep could detect that condition and recover automatically.
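For reference, the manual recovery described above looks roughly like this (host, dataset, and snapshot names are made up; this is the reporter's workaround, not a zrep feature):

```shell
# On the destination: discard everything after the last snapshot both
# sides agree on, so the next incremental stream applies cleanly.
ssh desthost zfs rollback -r backup/data@zrep_00000a
# Then sync again from the source:
zrep sync tank/data
```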
Is there an option for "zrep sync" to avoid creating and sending snapshots when there have been no changes to the source filesystem since the last sync? For example, use zfs diff (or some other ZFS mechanism) to check if there have been any changes before making a snapshot? Or add an option to wait for inotify to indicate there has been a change?
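Not that zrep documents one, but ZFS itself exposes a written property (bytes written since the latest snapshot) that a wrapper could consult before calling zrep. A sketch (the wrapper function is hypothetical):

```shell
# Sketch: succeed only when the dataset has changed since its latest
# snapshot, based on the value of the ZFS "written" property on stdin.
needs_sync() {
    read -r written
    [ "$written" -gt 0 ]
}
```

Usage idea: zfs get -Hp -o value written tank/fs | needs_sync && zrep sync tank/fs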
[root@dcc3 ~]# zrep version
zrep 1.7.3
http://www.bolthole.com/solaris/zrep
http://www.github.com/bolthole/zrep
[root@dcc3 ~]# zrep list
dcc/data
[root@dcc3 ~]# zrep clear |& head
WARNING: Removing all zrep configs and snapshots from
(for TAG=zrep)
Continuing in 10 seconds
Destroying any zrep-related snapshots from
Error: zrep internalerror: no arg for list_autosnaps
Removing zrep-related properties from
missing dataset argument
usage:
inherit [-rS] <filesystem|volume|snapshot> ...
However, explicitly specifying the dataset to clear worked,
[root@dcc3 ~]# zrep clear dcc/data
WARNING: Removing all zrep configs and snapshots from dcc/data
(for TAG=zrep)
Continuing in 10 seconds
Destroying any zrep-related snapshots from dcc/data
Removing zrep-related properties from dcc/data
If zrep clear has a hard requirement for a dataset argument, then the error message should come from zrep rather than from an underlying zfs command. However, I think it would make sense for "zrep clear" with no argument to imply clearing all zrep info.
Thanks.
Is there a syntax that will allow replication to the top level dataset that by default has the same name as the pool?
For example, after running "zpool create test" I would like to replicate the dataset "test". However, when I run,
zrep -i test rhost test
...
Sending initial replication stream to rhost:test/test
whereas I would like it to replicate to rhost:test rather than rhost:test/test.
Thanks.
Thank you very much for your work on zrep!
I'm a Gentoo user testing ZFS on Linux. I was searching for a tool to replace rsync in my desktop backups, and your script is just wonderful.
I have some things to ask you, as a wish list, to get better integration with ebuilds in Gentoo.
It would be useful to use tags (like 1.5.5) to mark each release.
With this I can create ebuilds from this repository more easily.
The latest stable release published on the site is 1.5.4. I think it was generated from the following commit:
970add8
But I realize that there were many bug fixes that appear in version 1.5.5. Should I use the head of the master branch?
The two "if" statements at lines 382 and 386 of the zrep script:
if [[ -z "seconds" ]] ; then
should be written as:
if [[ -z "$seconds" ]] ; then
Regards
Maurizio
As we discussed in #35, sending a zvol on SmartOS doesn't crash, so I'd like to allow initialization for SmartOS. I believe the uname -v output could be used to identify SmartOS. Sample output:
joyent_20161222T003450Z
If zrep fails to send a snapshot, it renames that snapshot to zrep-000000_unsent. Unfortunately, if ZREP_R=-R is set, it does not rename the children's snapshots. This has the additional effect that when the unsent snapshots are removed from the parent, they are not removed from the children (because they still have their old names).
zrep should rename the child snapshots when it renames the parent one.
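ZFS already offers a recursive snapshot rename that covers all descendant datasets, which looks like the right building block for such a fix (the dataset and sequence number here are made up):

```shell
# Rename a snapshot and all child snapshots in one step, mirroring
# what zrep currently does for the parent alone.
zfs rename -r tank/parent@zrep_000007 tank/parent@zrep_000007_unsent
```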
It appears to me that zrep might have a problem when running via periodic.
In the daily output email I can see the following:
sending tank/media@zrep_000007 to monique:zroot/media
missing value in property=value argument
usage:
set <property=value> <filesystem|volume|snapshot> ...
... more output from "zfs set" ...
Expiring zrep snaps on tank/media
Also running expire on monique:zroot/media now...
Expiring zrep snaps on zroot/media
sending tank/projekte@zrep_000007 to monique:zroot/projekte
Interesting part here: I think some data is transferred, as I can see traffic on the interface. But zrep status says it's quite old. When I run zrep sync all manually via ssh, it works fine.
My periodic runner looks like this:
cat /usr/local/etc/periodic/daily/800.zrep
#!/bin/sh
/usr/local/bin/zrep sync all
Could there be a problem with the "zfs set" call that stores the last synced snapshot?
Master system is FreeBSD 10.1, target is FreeBSD 11.0.
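One thing worth ruling out first (a guess, not a confirmed diagnosis): periodic(8) runs with a minimal environment, so commands zrep shells out to may resolve differently than in an interactive session. A sketch of the periodic script with an explicit PATH:

```shell
#!/bin/sh
# /usr/local/etc/periodic/daily/800.zrep (sketch)
# Pin PATH so zrep and the zfs/ssh binaries it calls resolve the same
# way under periodic(8) as they do in an interactive root shell.
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
export PATH
/usr/local/bin/zrep sync all
```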
Not really an issue, more of a procedural question. I have a system that was rebooted manually (for completely-unrelated-to-zrep reasons) in the middle of a zrep sync process. As a result, the subsequent snaps are getting the "_unsent" suffix and I see the lock-pid, etc. hanging around in the properties. What would be the proper method for getting this going again - I don't want to just start deleting snaps and properties if there is a better way.
On source:
[root@sac-nfs04 ~]# zrep list -v
p01/docrep02:
quota 800G
sharenfs on
zrep:dest-fs dr01/docrep02
zrep:lock-time 20160504110501
zrep:master yes
zrep:src-fs p01/docrep02
zrep:dest-host boi-nfs01
zrep:lock-pid 19718
zrep:savecount 5
zrep:src-host sac-nfs04
last snapshot synced: p01/docrep02@zrep_000010
[root@sac-nfs04 ~]# zfs list -t snapshot -o name,written,creation
NAME WRITTEN CREATION
p01/docrep02@zrep_00000c 391G Tue May 3 20:05 2016
p01/docrep02@zrep_00000d 2.21G Wed May 4 5:05 2016
p01/docrep02@zrep_00000e 209M Wed May 4 6:05 2016
p01/docrep02@zrep_00000f 347M Wed May 4 7:05 2016
p01/docrep02@zrep_000010 406M Wed May 4 8:05 2016
p01/docrep02@zrep_000011 426M Wed May 4 9:05 2016
p01/docrep02@zrep_000012_unsent 414M Wed May 4 10:05 2016
p01/docrep02@zrep_000013_unsent 360M Wed May 4 11:05 2016
On destination:
[root@boi-nfs01 ~]# zrep list -v
dr01/docrep02:
readonly on
zrep:savecount 5
zrep:dest-host boi-nfs01
zrep:src-host sac-nfs04
zrep:dest-fs dr01/docrep02
zrep:src-fs p01/docrep02
last snapshot synced: dr01/docrep02@zrep_000010
[root@boi-nfs01 ~]# zfs list -t snapshot -o name,written,used,creation
NAME WRITTEN USED CREATION
dr01/docrep02@zrep_00000c 391G 88.6G Tue May 3 20:05 2016
dr01/docrep02@zrep_00000d 2.21G 59.3M Wed May 4 5:05 2016
dr01/docrep02@zrep_00000e 208M 50.8M Wed May 4 6:05 2016
dr01/docrep02@zrep_00000f 346M 62.4M Wed May 4 7:05 2016
dr01/docrep02@zrep_000010 405M 0 Wed May 4 8:05 2016
Thanks in advance.
Thank you for your script.
I am using zrep on FreeBSD and I have noticed the bug described below.
I create a test dataset TMP and a child TMP2:
# zfs list -r -t filesystem sys/no-backup/TMP
NAME USED AVAIL REFER MOUNTPOINT
sys/no-backup/TMP 192K 141G 96K /mnt/no-backup//TMP
sys/no-backup/TMP/TMP2 96K 141G 96K /mnt/no-backup//TMP/TMP2
I initialize the copy to the second computer named microserver
# export ZREP_R=-R
# zrep init sys/no-backup/TMP microserver zdata/backup_ativ/TMP
Repeating five times the command:
# zrep sync sys/no-backup/TMP
on the second computer microserver I have:
# zfs list -r -t snapshot zdata/backup_ativ/TMP
NAME USED AVAIL REFER MOUNTPOINT
zdata/backup_ativ/TMP@zrep_000001 8K - 96K -
zdata/backup_ativ/TMP@zrep_000002 8K - 96K -
zdata/backup_ativ/TMP@zrep_000003 8K - 96K -
zdata/backup_ativ/TMP@zrep_000004 8K - 96K -
zdata/backup_ativ/TMP@zrep_000005 0 - 96K -
zdata/backup_ativ/TMP/TMP2@zrep_000000 8K - 96K -
zdata/backup_ativ/TMP/TMP2@zrep_000001 8K - 96K -
zdata/backup_ativ/TMP/TMP2@zrep_000002 8K - 96K -
zdata/backup_ativ/TMP/TMP2@zrep_000003 8K - 96K -
zdata/backup_ativ/TMP/TMP2@zrep_000004 8K - 96K -
zdata/backup_ativ/TMP/TMP2@zrep_000005 0 - 96K -
where the snapshot "zdata/backup_ativ/TMP/TMP2@zrep_000000" was not removed.
Regards,
Maurizio
Newer versions of ZFS (0.7.0) support send/recv resume support.
I'm about to send a lot of data to another site and would like to do this with resume support.
But I don't think zrep supports this yet?
Any plans on implementing this anytime soon? If so, I'll wait before transferring.
If not, I'll do it manually and initialize zrep after that.
I have been using zrep for a few months now. Thanks for such an insanely useful script.
We recently had a client lose 5 out of 8 drives in a RAIDZ2 all within 38 hours. We couldn't replace and resilver drives fast enough before the pool was destroyed.
We use zrep to sync data off-site, and it worked perfectly. After the drives were replaced and the pool was re-created, we were able to restore from the off-site backup. Unfortunately the restore took ~13 hours and my client wants options for restoring faster.
They didn't like Andrew's option of a fully-loaded station wagon laden down with tapes...(https://en.wikipedia.org/wiki/Sneakernet#Non-fiction)
One option I am considering is having an external USB backup drive and having zrep sync to it. As far as I can tell, zrep can only sync to one target...
Any thoughts on having it support multiple backup targets?
zrep does a nice job cleaning up if "zrep -i" is interrupted once the initial snapshot begins transferring, however, it might make sense to have it also clean up (zrep clear) the source dataset if there are any initial error conditions before then, e.g., unable to SSH to the destination machine, or an incorrect remote dataset specified.
I just spent several days backing up ~200 GB off-site using zrep init. Now that it's finished, I tried zrep sync and it's throwing:
Error: You must initialize tank/virt for zrep
Here's the full command:
root@usvansdnas01:~# zrep -t zrep-offsite sync tank/virt
DEBUG: overiding stale lock on tank/virt from pid 3501
zrep_sync could not find sent snap for tank/virt.
Error: You must initialize tank/virt for zrep
root@usvansdnas01:~#
On the source:
root@usvansdnas01:~# zfs list -rt snapshot tank/virt | grep zrep
tank/virt@zrep-local_000000 1.49G - 132G -
tank/virt@zrep-offsite_000000 1.28G - 133G -
tank/virt@zrep-local_000001 39.6M - 133G -
root@usvansdnas01:~# zfs get all tank/virt | grep zrep
tank/virt zrep-local:savecount 5 local
tank/virt zrep-local:master yes local
tank/virt zrep-local:src-fs tank/virt local
tank/virt zrep-local:src-host usvansdnas01 local
tank/virt zrep-offsite:savecount 5 local
tank/virt zrep-offsite:src-host usvansdnas01 local
tank/virt zrep-local:dest-host localhost local
tank/virt zrep-offsite:lock-time 20150930150628 local
tank/virt zrep-offsite:dest-fs backup-pool/usvansd/virt local
tank/virt zrep-offsite:dest-host uslog00nas03.-redacted-.local local
tank/virt zrep-offsite:lock-pid 6418 local
tank/virt zrep-offsite:master yes local
tank/virt zrep-offsite:src-fs tank/virt local
tank/virt zrep-local:dest-fs backup-pool/virt local
root@usvansdnas01:~#
On the dest:
root@uslog00nas03:~# zfs list -rt snapshot backup-pool/usvansd/virt
NAME USED AVAIL REFER MOUNTPOINT
backup-pool/usvansd/virt@zrep-offsite_000000 0 - 205G -
root@uslog00nas03:~# zfs get all backup-pool/usvansd/officeshare | grep zrep
backup-pool/usvansd/officeshare zrep-offsite:src-fs tank/officeshare local
backup-pool/usvansd/officeshare zrep-offsite:src-host usvansdnas01 local
backup-pool/usvansd/officeshare zrep-offsite:savecount 5 local
backup-pool/usvansd/officeshare zrep-offsite:dest-host uslog00nas03.-redacted-.local local
backup-pool/usvansd/officeshare zrep-offsite:dest-fs backup-pool/usvansd/officeshare local
root@uslog00nas03:~#
You'll notice I have another backup to a 'local' pool, and that is working fine.
I need to run zrep with nice -n19, or the system becomes unresponsive. When initialising, that can take a long time. It would be nice to have support for the nice command directly in the script.
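In the meantime the priority can be set from the calling side. A crontab sketch (the zrep path and tag are taken from this thread; the hourly schedule is only illustrative, and ionice is Linux-specific):

```shell
# Run zrep under low CPU and I/O priority so a long initial sync
# does not starve the box. ionice -c 3 selects the "idle" I/O class.
# m h dom mon dow  command
0 * * * * nice -n 19 ionice -c 3 /usr/local/bin/zrep -t zrep-local sync all
```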
After syncing a few times, zrep starts reporting that it "Failed to acquire global lock".
root@usvansdnas01:~# zrep -t zrep-local sync all
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
ln: failed to create symbolic link ‘/var/run/zrep.lock’: File exists
Failed to acquire global lock
Error: Cannot lock tank/officeshare. Cannot continue
root@usvansdnas01:~# ls -lha /var/run/zrep.lock
lrwxrwxrwx 1 root root 10 Sep 10 18:35 /var/run/zrep.lock -> /proc/6479
root@usvansdnas01:~# ps uax | grep 6479
root 14552 0.0 0.0 12720 2112 pts/2 S+ 11:18 0:00 grep 6479
root@usvansdnas01:~#
Is it possible to have zrep override the lock when the process referenced no longer exists?
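Since the lock is a symlink into /proc (per the ls -lha output above), a caller-side workaround can clear it when the holding pid is gone. This is a sketch, not zrep's own logic; zrep already overrides stale per-filesystem locks, so this only covers the global /var/run/zrep.lock case:

```shell
#!/bin/sh
# Sketch only (not zrep code): remove a zrep global lock whose holder
# has died. Assumes the lock is a symlink of the form /proc/<pid>,
# as shown in the ls -lha output above.
clear_stale_lock() {
    lockfile=$1
    # No lock (or not a symlink): nothing to do.
    target=$(readlink "$lockfile" 2>/dev/null) || return 0
    pid=${target#/proc/}
    if [ ! -d "/proc/$pid" ]; then
        echo "removing stale lock (dead pid $pid)"
        rm -f "$lockfile"
    fi
}
```

This could run from cron just before the sync, e.g. `clear_stale_lock /var/run/zrep.lock`.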
Please fix this little typo in zrep_sync, line 402:
- if [[ "$MBUFFER != "" ]] ; then
+ if [[ "$MBUFFER" != "" ]] ; then
Thanks!
Hi,
I've been looking around for something like this - congrats on a nice solution.
I have a pool that has lots of zfs file systems, and some of them are what you might call nested :)
What I really want to do is just sync the top level of the pool recursively to an fs in another pool; otherwise I'll have to manually manage a large list of file systems as they come and go.
I see from your info that nested file systems are not supported, but a version of this script that can replicate a whole pool using recursive zfs send would be great.
I have created a modified version that does this, it's just some small changes so far, and testing seems fine.
Two questions:
What are the technical reasons for not supporting nested fs syncing?
Would you be interested in incorporating the recursive functionality I've added, details to be discussed?
Thanks in advance.
Hello,
I'll just ask you to avoid version tags like 1.6.
Please always use the three-part notation, i.e. 1.6.0 for the first release of the 1.6 series. This will help to maintain Gentoo ebuild packages according to your tags.
Thanks
I'm running ZFS on Linux on two CentOS 7.2 boxes. When I do an initial sync with a small test filesystem it works fine, as do subsequent syncs. I have now tried it twice on our production fs, which is 13T going from our colo to our office server room, and it has failed both times.
ssh keys are set up, and it's working fine on the small fs. There were no network issues either time I ran it. It appears that it does complete the transfer of the first snap, but then fails. Here is the output. Any thoughts?
zrep -i xxx/results kbackup.yyy.com chunky
Setting properties on xxx/results
Warning: zfs lacking -o argument
Creating destination filesystem as separate step
Creating snapshot xxx/results@zrep_000000
Sending initial replication stream to kbackup.yyy.com:chunky/results
[email protected]'s password:
Write failed: Broken pipe
cannot open 'Could': dataset does not exist
cannot open 'Could': dataset does not exist
Destroying any zrep-related snapshots from Could
cannot open 'Could': dataset does not exist
Removing zrep-related properties from Could
cannot open 'Could': dataset does not exist
Error: not set readonly for kbackup.yyy.com:chunky/results
Currently when I initialize a new dataset, it is synced to the destination right away. When a network error happens, this process is broken. I'd like to initialize a dataset without the initial transfer and let that happen via cron on the first sync.
I stupidly left ZREP_R=-R out of my latest sync all.
It synced tank/virt, but obviously not all the zvols under it.
Trying a second sync with ZREP_R=-R, i.e. ZREP_R=-R ZREP_INC_FLAG=-i zrep -t zrep-local sync all, fails as well.
Should I manually snapshot with the appropriate snapshot name and manually sync it and continue on, or should I give up and start over? (I'd hate to have to re-sync ~4.8 TB of data over a slow Comcast connection.)
Sometimes I pipe zrep status to another command like cut or awk and use space as a delimiter. But it doesn't work for datasets that have long paths/names. In the following example, the 4th one down is merged with the word "last".
root@brigade:/datapool/supportarchive# zrep status
datapool/supportarchive/dwarf/supportarchive last synced Tue Oct 11 14:07 2016
datapool/supportarchive/dwarf/wireshark last synced Tue Oct 11 14:07 2016
datapool/supportarchive/elmo/supportarchive last synced Tue Oct 11 14:07 2016
datapool/supportarchive/fortress/supportarchivelast synced Tue Oct 11 14:00 2016
datapool/supportarchive/oscar/emcdr last synced Tue Oct 11 14:07 2016
datapool/supportarchive/oscar/supportarchive last synced Tue Oct 11 14:08 2016
datapool/supportarchive/probe/autoperf last synced Tue Oct 11 14:08 2016
datapool/supportarchive/probe/bulkhome last synced Tue Oct 11 14:09 2016
datapool/supportarchive/probe/supportarchive last synced Tue Oct 11 14:09 2016
I'm running 1.5.5.
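Until the column width is fixed, one workaround is to split on the literal "last synced" marker instead of whitespace, which parses both the padded and the merged lines. This is a sketch on the consumer side, not a zrep feature:

```shell
# dataset_col: extract the dataset column from `zrep status` output,
# tolerating lines where a long name has run into the word "last".
# awk treats a multi-character -F value as a regex field separator.
dataset_col() {
    awk -F'last synced' '{ sub(/[[:space:]]+$/, "", $1); print $1 }'
}
# usage: zrep status | dataset_col
```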
On a box containing existing data, I installed the zrep script and did a zrep init
to sync files. The initial sync appears to have run properly. Next I did a zrep sync all
and it failed. I tried again, and it failed again.
I'm not sure what the error relates to:
Initial sync:
root@kvm1:~# ZREP_R=-R zrep -t zrep-local init tank/userprofiles localhost backup-pool/userprofiles
Setting properties on tank/userprofiles
Ancient local version of ZFS detected.
Creating destination filesystem as separate step
Creating snapshot tank/userprofiles@zrep-local_000000
Sending initial replication stream to localhost:backup-pool/userprofiles
Debug: Because you have old zfs support, setting remote properties by hand
Initialization copy of tank/userprofiles to localhost:backup-pool/userprofiles complete
root@kvm1:~#
Sync all:
root@kvm1:~# ZREP_R=-R zrep -t zrep-local sync all
sending tank/userprofiles@zrep-local_000002 to localhost:backup-pool/userprofiles
Expiring zrep snaps on tank/userprofiles
Also running expire on localhost:backup-pool/userprofiles now...
Expiring zrep snaps on backup-pool/userprofiles
Error: zrep_expire Internal Err caller did not hold fs lock
REMOTE expire failed
root@kvm1:~#
This is a follow-up on #16; unfortunately I still get the error with the latest v1.6.8.
OS and ZFS versions at the end.
I have this issue on the real data, but I created some sample hierarchical datasets on a pool called "backup" to reproduce the error more easily. The original filesystems are backup/z0, backup/z0/z1 and backup/z0/z2. All get recursively replicated under backup/zcopy/z0/...
root@nas:~# zfs destroy -r backup/z0
root@nas:~# zfs destroy -r backup/zcopy
root@nas:~# zfs create backup/z0
root@nas:~# zfs create backup/z0/z1
root@nas:~# zfs create backup/z0/z2
root@nas:~# touch /backup/z0/touch1
root@nas:~# touch /backup/z0/z1/touch1
root@nas:~# touch /backup/z0/z2/touch1
root@nas:~# zfs create backup/zcopy
root@nas:~# env ZREP_R=-R zrep init backup/z0 localhost backup/zcopy/z0
Setting properties on backup/z0
Warning: zfs recv lacking -o readonly
Creating readonly destination filesystem as separate step
Creating snapshot backup/z0@zrep_000000
Sending initial replication stream to localhost:backup/zcopy/z0
Initialization copy of backup/z0 to localhost:backup/zcopy/z0 complete
Filesystem will not be mounted
root@nas:~# zrep status -v
backup/z0 ->localhost:backup/zcopy/z0 Mar 26 21:33 2017
root@nas:~# zfs get all | grep backup/z[0c] | grep zrep:
backup/z0 zrep:master yes local
backup/z0 zrep:src-fs backup/z0 local
backup/z0 zrep:savecount 5 local
backup/z0 zrep:dest-host localhost local
backup/z0 zrep:dest-fs backup/zcopy/z0 local
backup/z0 zrep:src-host nas local
backup/z0@zrep_000000 zrep:master yes inherited from backup/z0
backup/z0@zrep_000000 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0@zrep_000000 zrep:savecount 5 inherited from backup/z0
backup/z0@zrep_000000 zrep:dest-host localhost inherited from backup/z0
backup/z0@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0@zrep_000000 zrep:src-host nas inherited from backup/z0
backup/z0@zrep_000000 zrep:sent 1490556809 local
backup/z0/z1 zrep:master yes inherited from backup/z0
backup/z0/z1 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z1 zrep:savecount 5 inherited from backup/z0
backup/z0/z1 zrep:dest-host localhost inherited from backup/z0
backup/z0/z1 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z1 zrep:src-host nas inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:master yes inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:savecount 5 inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:dest-host localhost inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:src-host nas inherited from backup/z0
backup/z0/z2 zrep:master yes inherited from backup/z0
backup/z0/z2 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z2 zrep:savecount 5 inherited from backup/z0
backup/z0/z2 zrep:dest-host localhost inherited from backup/z0
backup/z0/z2 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z2 zrep:src-host nas inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:master yes inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:savecount 5 inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:dest-host localhost inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:src-host nas inherited from backup/z0
backup/zcopy/z0 zrep:src-fs backup/z0 received
backup/zcopy/z0 zrep:savecount 5 received
backup/zcopy/z0 zrep:dest-host localhost received
backup/zcopy/z0 zrep:dest-fs backup/zcopy/z0 received
backup/zcopy/z0 zrep:src-host nas received
backup/zcopy/z0@zrep_000000 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:src-host nas inherited from backup/zcopy/z0
root@nas:~# env ZREP_R=-R DEBUG=1 zrep sync backup/z0
sending backup/z0@zrep_000001 to localhost:backup/zcopy/z0
Expiring zrep snaps on backup/z0
Also running expire on localhost:backup/zcopy/z0 now...
Expiring zrep snaps on backup/zcopy/z0
Error: zrep_expire Internal Err caller did not hold fs lock
REMOTE expire failed
root@nas:~# zfs get all | grep backup/z[0c] | grep zrep:
backup/z0 zrep:master yes local
backup/z0 zrep:src-fs backup/z0 local
backup/z0 zrep:savecount 5 local
backup/z0 zrep:dest-host localhost local
backup/z0 zrep:dest-fs backup/zcopy/z0 local
backup/z0 zrep:src-host nas local
backup/z0@zrep_000000 zrep:master yes inherited from backup/z0
backup/z0@zrep_000000 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0@zrep_000000 zrep:savecount 5 inherited from backup/z0
backup/z0@zrep_000000 zrep:dest-host localhost inherited from backup/z0
backup/z0@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0@zrep_000000 zrep:src-host nas inherited from backup/z0
backup/z0@zrep_000000 zrep:sent 1490556809 local
backup/z0@zrep_000001 zrep:master yes inherited from backup/z0
backup/z0@zrep_000001 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0@zrep_000001 zrep:savecount 5 inherited from backup/z0
backup/z0@zrep_000001 zrep:dest-host localhost inherited from backup/z0
backup/z0@zrep_000001 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0@zrep_000001 zrep:src-host nas inherited from backup/z0
backup/z0@zrep_000001 zrep:sent 1490557181 local
backup/z0/z1 zrep:master yes inherited from backup/z0
backup/z0/z1 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z1 zrep:savecount 5 inherited from backup/z0
backup/z0/z1 zrep:dest-host localhost inherited from backup/z0
backup/z0/z1 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z1 zrep:src-host nas inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:master yes inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:savecount 5 inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:dest-host localhost inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z1@zrep_000000 zrep:src-host nas inherited from backup/z0
backup/z0/z1@zrep_000001 zrep:master yes inherited from backup/z0
backup/z0/z1@zrep_000001 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z1@zrep_000001 zrep:savecount 5 inherited from backup/z0
backup/z0/z1@zrep_000001 zrep:dest-host localhost inherited from backup/z0
backup/z0/z1@zrep_000001 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z1@zrep_000001 zrep:src-host nas inherited from backup/z0
backup/z0/z2 zrep:master yes inherited from backup/z0
backup/z0/z2 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z2 zrep:savecount 5 inherited from backup/z0
backup/z0/z2 zrep:dest-host localhost inherited from backup/z0
backup/z0/z2 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z2 zrep:src-host nas inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:master yes inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:savecount 5 inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:dest-host localhost inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z2@zrep_000000 zrep:src-host nas inherited from backup/z0
backup/z0/z2@zrep_000001 zrep:master yes inherited from backup/z0
backup/z0/z2@zrep_000001 zrep:src-fs backup/z0 inherited from backup/z0
backup/z0/z2@zrep_000001 zrep:savecount 5 inherited from backup/z0
backup/z0/z2@zrep_000001 zrep:dest-host localhost inherited from backup/z0
backup/z0/z2@zrep_000001 zrep:dest-fs backup/zcopy/z0 inherited from backup/z0
backup/z0/z2@zrep_000001 zrep:src-host nas inherited from backup/z0
backup/zcopy/z0 zrep:master yes received
backup/zcopy/z0 zrep:src-fs backup/z0 received
backup/zcopy/z0 zrep:savecount 5 received
backup/zcopy/z0 zrep:dest-host localhost received
backup/zcopy/z0 zrep:dest-fs backup/zcopy/z0 received
backup/zcopy/z0 zrep:src-host nas received
backup/zcopy/z0 zrep:lock-time 20170326213940 received
backup/zcopy/z0 zrep:lock-pid 8462 received
backup/zcopy/z0@zrep_000000 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:sent 1490556809 received
backup/zcopy/z0@zrep_000000 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000000 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:sent 1490557181 local
backup/zcopy/z0@zrep_000001 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0@zrep_000001 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0/z1 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000000 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0/z1@zrep_000001 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0/z2 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000000 zrep:lock-pid 8462 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:master yes inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:src-fs backup/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:savecount 5 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:dest-host localhost inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:dest-fs backup/zcopy/z0 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:src-host nas inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:lock-time 20170326213940 inherited from backup/zcopy/z0
backup/zcopy/z0/z2@zrep_000001 zrep:lock-pid 8462 inherited from backup/zcopy/z0
Environment:
zrep v1.6.8, as downloaded from https://raw.githubusercontent.com/bolthole/zrep/master/zrep on 2017-03-26.
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.10
Release: 16.10
Codename: yakkety
# apt list | grep zfs | grep installed
libzfs2linux/yakkety-updates,now 0.6.5.8-0ubuntu4.2 amd64 [installed,automatic]
zfs-doc/yakkety-updates,yakkety-updates,now 0.6.5.8-0ubuntu4.2 all [installed,automatic]
zfs-zed/yakkety-updates,now 0.6.5.8-0ubuntu4.2 amd64 [installed,automatic]
zfsnap/yakkety,yakkety,now 1.11.1-4 all [installed]
zfsutils-linux/yakkety-updates,now 0.6.5.8-0ubuntu4.2 amd64 [installed]
I have two systems that were synced manually and are now like this:
SOURCE:
/pool/folderA/folder1
/pool/folderA/folder2
/pool/folderA/folder3
TARGET:
/pool/backup/folderA/folder1
/pool/backup/folderA/folder2
/pool/backup/folderA/folder3
The latest snapshots are named "@zrep-backup_000000"
SOURCE and TARGET were synced using zfs send/recv manually
So folder1, folder2 and folder3 are in sync but I want to start using zrep for them now.
What would be the correct way to initialize this for zrep?
The documentation talks about setting flags like this:
srchost# "zrep changeconfig -f srcfs desthost destfs"
desthost# "zrep changeconfig -f -d destfs srchost srcfs"
Should I do this for all 3 folders or just for the parent folder "folderA" ?
Thanks for any help!
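For pre-synced filesystems like these, zrep's documented route is changeconfig per filesystem, followed by marking the common snapshot as sent. A dry-run sketch that only prints the commands for review; srchost/desthost and the exact sentsync step are assumptions to verify against the docs for your zrep version, and the snapshot name is the @zrep-backup_000000 from this question:

```shell
#!/bin/sh
# Print (do not run) the per-filesystem adoption commands, so they can
# be reviewed first. "desthost" is a placeholder hostname.
emit_adopt_cmds() {
    for fs in folder1 folder2 folder3; do
        echo "zrep -t zrep-backup changeconfig -f pool/folderA/$fs desthost pool/backup/folderA/$fs"
        echo "zrep -t zrep-backup sentsync pool/folderA/$fs@zrep-backup_000000"
    done
}
emit_adopt_cmds
```

Doing this per child filesystem matches zrep's model of tracking each filesystem individually; configuring only the parent folderA would be enough only if you replicate recursively with ZREP_R=-R.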
I'm sorry to open an issue with the sole purpose of giving you feedback.
I searched for an alternative way to send you a message on GitHub, but I couldn't find one.
Everything ok with this version. All the syncs work fine.
My overlay is now also published at Gentoo:
https://cgit.gentoo.org/repo/user/ssnb.git/tree/dev-util/zrep/zrep-1.6.3.ebuild
This will help all that like to use zrep in gentoo.
The following error occurred after (on backupserver node):
/usr/local/bin/zrep refresh tank01
DEBUG: zrep_lock_fs: set lock on tank01
DEBUG: refresh step 1: Going to jdsstorage01-zrep to snapshot tank01
DEBUG=1: Befehl nicht gefunden.
Error: snap of src tank01 on jdsstorage01-zrep failed
So the "command not found" suggests it is not able to make a snapshot on the primary side, which is odd, because both systems are on the same software level.
environment:
setup:
The resolution for now was a rollback to the old version zrep-1.6.2 where it works fine.
The main reason why we want to upgrade to a newer version is that we have another curious problem. We replicate every 5 minutes, and on Mondays at 03:00 am the replication stops working. But I'm unsure if it is zrep related, so I wanted to check whether an upgrade helps (maybe more on that another time). ;)
Thanks for your continuous improvement of zrep.
I would like to remind you to always use git tags with three version numbers. This will help with updating ebuilds in Gentoo.
For example, instead of v1.7 it should be v1.7.0.
Cheers
Hi,
I'm trying to use the "backup server" mode of zrep, but I cannot get it to work.
Setup as follows:
### backup server
hostname = backup1.foo.bar
zfs_dataset = backup/client1/rpool
### client
hostname = client1.foo.bar
zfs_dataset = rpool
Goal: use "backup server" mode to take a recursive backup of the existing pool rpool (which already has data) on client1 to the existing, but empty, destination dataset backup/client1/rpool on backup server backup1.
Tried the following on backup1:
$ZREP_PATH init backup/client1/rpool client1.foo.bar rpool
This results in zrep trying to create the remote dataset rpool, which fails (since the remote dataset/pool already exists and has lots of data):
cannot create 'rpool': missing dataset name
Error: Cannot create client1.foo.bar:rpool
Trying different variations of $ZREP_PATH changeconfig doesn't seem to do the trick. One of the errors was as follows:
Error: backup/client1/rpool not master. Cannot fail over
export ZREP_R=-R is set (on both sides, if relevant).
I'm probably brainfarting hard here, but...
See pull/issue #22
(I usually only look at issues, so basically converting the pull request to an issue, so that I remember to check it later)