BackupPC

BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX, and MacOS PCs and laptops to a server's disk.

Quick Start

The latest version of BackupPC can be fetched from:

You'll need to install the Perl module BackupPC::XS, available from:

and the server-side rsync from:

If you will use SMB for WinXX clients, you will need smbclient and nmblookup from the Samba distribution.

To install BackupPC run these commands as root:

tar zxf BackupPC-__VERSION__.tar.gz
cd BackupPC-__VERSION__
perl configure.pl

This will automatically determine some system information and prompt you for install paths. Run perldoc configure.pl to see the various options configure.pl provides.

Introduction

BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX, and MacOS PCs and laptops to a server's disk. BackupPC is highly configurable and easy to install and maintain.

Given the ever-decreasing cost of disks and RAID systems, it is now practical and cost-effective to back up a large number of machines onto a server's local disk or network storage. This is what BackupPC does. For some sites, this might be the complete backup solution. For other sites, additional permanent archives can be created by periodically backing up the server to tape. A variety of Open Source systems are available for doing backups to tape.

BackupPC is written in Perl and extracts backup data via SMB (using Samba), rsync, or tar over ssh/rsh/nfs. It is robust, reliable, well documented and freely available as Open Source on GitHub.

Features

  • A clever pooling scheme minimizes disk storage and disk IO. Identical files across multiple backups of the same or different PCs are stored only once resulting in substantial savings in disk storage.

  • One example of disk use: 95 laptops, with each full backup averaging 3.6GB and each incremental averaging about 0.3GB. Storing three weekly full backups and six incremental backups per laptop amounts to around 1200GB of raw data, but because of pooling and compression only 150GB is needed.

  • No client-side software is needed. The standard SMB protocol is used to extract backup data on WinXX clients. On *nix clients, either rsync or tar over ssh/rsh/nfs is used to back up the data. Various alternatives are possible: rsync can also be used with WinXX by running rsyncd under cygwin. Similarly, SMB can be used to back up *nix file systems if they are exported as SMB shares.

  • A powerful http/cgi user interface allows administrators to view log files, configuration and current status, and allows users to initiate and cancel backups and to browse and restore files from backups.

  • Flexible restore options. Single files can be downloaded from any backup directly from the CGI interface. Zip or Tar archives for selected files or directories from any backup can also be downloaded from the CGI interface. Finally, direct restore to the client machine (using SMB, rsync or tar) for selected files or directories is also supported from the CGI interface.

  • Supports mobile environments where laptops are only intermittently connected to the network and have dynamic IP addresses (DHCP).

  • Flexible configuration parameters allow multiple backups to be performed in parallel; specification of which shares to back up and which directories to include or skip; various schedules for full and incremental backups; schedules for email reminders to users; and so on. Configuration parameters can be set system-wide or on a per-PC basis.

  • Users are sent periodic email reminders if their PC has not recently been backed up. Email content, timing and policies are configurable.

  • Tested on Linux and Solaris hosts, and Linux, Win95, Win98, Win2000 and WinXP clients.

  • Detailed documentation.

  • Open Source hosted by GitHub and freely available under GPL.
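The raw-data figure in the disk-use example above can be verified with quick arithmetic (95 laptops, each storing 3 fulls at 3.6GB and 6 incrementals at 0.3GB):

```shell
# 95 * (3 * 3.6 + 6 * 0.3) = 95 * 12.6 = 1197, i.e. roughly 1200GB raw
awk 'BEGIN { printf "%.0f GB raw\n", 95 * (3 * 3.6 + 6 * 0.3) }'
# prints: 1197 GB raw
```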

Packaging Help Needed

BackupPC 4.x doesn't have packages available for all the main Linux distros. If you are willing to create and maintain BackupPC 4.x packages for your favorite Linux distro, please step up and help! Feel free to create a GitHub issue indicating your interest.

Resources

Complete documentation is available in this release in doc/BackupPC.pod or doc/BackupPC.html. You can read doc/BackupPC.pod with perldoc and doc/BackupPC.html with any browser. You can also see the documentation and general information at:

The source code is available on Github at:

and releases are available on GitHub:

or SourceForge:

You are encouraged to subscribe to any of the mailing lists available on sourceforge.net:

The backuppc-announce list is moderated and is used only for important announcements (e.g., new versions). It is low traffic. You only need to subscribe to one of the users and announce lists: backuppc-users also receives any messages sent to backuppc-announce.

The backuppc-devel list is only for developers working on BackupPC. Do not post questions or support requests there; detailed technical discussions, however, should happen on this list.

To post a message to the backuppc-users list, send an email to

Do not send subscription requests to this address!

Copyright

Copyright (C) 2001-2020 Craig Barratt. All rights reserved.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License.

See the LICENSE file.


backuppc's Issues

FillKeepPeriod

In the documentation of "FillCycle" the setting "FillKeepPeriod" is mentioned, but no setting with that name exists. Is this an error in the documentation, or is the setting really missing?

From Help:
more importantly, in V4+, deleting backups is done based on Fill/Unfilled,
not whether the original backup was full/incremental. If there aren't
any filled backups (other than the most recent), then the FillKeepPeriod
settings won't have any effect

Installer misses installing libs

After recent changes, the installer no longer installs the libraries.

The error when starting BackupPC is:
Starting backuppc: Can't locate BackupPC/Lib.pm in @INC (you may need to install the BackupPC::Lib module) (@INC contains: /usr/local/BackupPC/lib /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at /usr/local/BackupPC/bin/BackupPC line 57.
BEGIN failed--compilation aborted at /usr/local/BackupPC/bin/BackupPC line 57.

From the mailing list:
On 01/27/2017 08:01 PM, Les Mikesell wrote:

On Fri, Jan 27, 2017 at 11:38 AM, Kent Tenney [email protected] wrote:

https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS

After running the wiki script,
/usr/local/BackupPC/bin/BackupPC
fails with
Can't locate BackupPC/Lib.pm in @INC (you may need to install the
BackupPC::Lib module) (@INC contains: /usr/local/BackupPC/lib /etc/perl
/usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1
/usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5
/usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22
/usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at
/usr/local/BackupPC/bin/BackupPC line 57.
BEGIN failed--compilation aborted at /usr/local/BackupPC/bin/BackupPC line
57.

It seems the lib files weren't installed,

tree /usr/local/BackupPC/lib

/usr/local/BackupPC/lib/
├── BackupPC
│ ├── CGI
│ ├── Config
│ ├── Lang
│ ├── Storage
│ ├── Xfer
│ └── Zip
└── Net
└── FTP

I haven't installed v4 yet so I can't be more specific, but what you
are showing are just the directories in /usr/local/BackupPC/lib/ which
seem to at least exist, but your error is about a particular file.
The @inc list is the search path where perl will try to find it.
Does /usr/local/BackupPC/lib/BackupPC/Lib.pm exist, and if so is it
and the path to it readable by the backuppc user?
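Les's suggestion can be turned into a quick scripted check (paths taken from the report above; adjust them to your install):

```shell
# Check that the module file exists and is readable by the current user.
f=/usr/local/BackupPC/lib/BackupPC/Lib.pm
if [ -r "$f" ]; then
    echo "found: $f"
else
    echo "missing or unreadable: $f"
fi
```

Running this as the backuppc user (e.g. via su) will also catch permission problems on the path.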

Hi Kent and Les,

Thanks for the report! I maintain the mentioned instructions, and can confirm this error.

It seems something has changed in the latest code, and while the installer says:

Installing library in /usr/local/BackupPC/lib

The lib folders remain empty. The exit status is also OK, so the installer may not even notice the error. As such, it is a bug in the build automation.

The lib folder was changed a few days ago by Craig Barratt, mentioning significant changes.

I will create an issue with these notes, and link it to the wiki.

Best regards,
Johan Ehnberg

openSUSE 42.1, 42.2 systemd startup script

Just updated my BPC to 4.0. I created this startup script for systemd, as openSUSE uses systemd and an init.d script was problematic with /var/run permissions. Not sure if this is the right spot to contribute to.
Installed BackupPC from source, not from the openSUSE repos.
Steps to install

  1. Put the .service file in /usr/lib/systemd/system/
  2. As root, run:
     "systemctl daemon-reload"
     "systemctl enable backuppc.service" # should create a link in /etc/systemd/system/multi-user.target.wants/backuppc.service
     "systemctl start backuppc.service"
     "systemctl status backuppc.service"

The script runs as user backuppc, group backuppc.
The /var/run/BackupPC folder is created for .pid use, as set up by configure.pl.

Any improvements are welcome; the service can also be managed (start, stop, etc.) from YaST's Service Manager.

[Unit]
Description=BackupPC server
After=syslog.target

[Service]
Type=oneshot
User=backuppc
Group=backuppc
PermissionsStartOnly=true
ExecStartPre=-/usr/bin/mkdir /var/run/BackupPC
ExecStartPre=/usr/bin/chown -R backuppc:backuppc /var/run/BackupPC/
ExecStart=/opt/BackupPC/bin/BackupPC -d
PIDFile=/var/run/BackupPC.pid
RemainAfterExit=yes
ExecStop=/usr/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

bpc 4.0.0 configure failed

4.0.0 is missing some files (BackupPC::XS?):

perl ./configure.pl

Error loading BackupPC::Lib: Can't locate BackupPC/XS.pm in @INC (you may need to install the BackupPC::XS module) (@INC contains: ./lib /etc/perl /usr/local/lib64/perl5/5.22.3/x86_64-linux /usr/local/lib64/perl5/5.22.3 /usr/lib64/perl5/vendor_perl/5.22.3/x86_64-linux /usr/lib64/perl5/vendor_perl/5.22.3 /usr/local/lib64/perl5 /usr/lib64/perl5/vendor_perl /usr/lib64/perl5/5.22.3/x86_64-linux /usr/lib64/perl5/5.22.3 .) at lib/BackupPC/Lib.pm line 51.
BEGIN failed--compilation aborted at lib/BackupPC/Lib.pm line 51.
Compilation failed in require at (eval 16) line 2.

Fix nightly pool statistics updates

Stumbled across this via the build files for Sébastien Luttringer's package for Arch Linux. (I have to admit that I didn't confirm the issue on an unpatched BackupPC, but I think it's worth mentioning anyway, because someone spent time and thought on it earlier.)

There seems to be a problem in updating the pool statistics, according to this mailing list post. There is a proposed fix from the Debian package maintainers, which Sébastien has updated in his patch for the Arch package to apply to v4.0 (the problematic code moved from lib/BackupPC/Lib.pm to lib/BackupPC/DirOps.pm).

makeDist does not work with dash

Running makeDist with dash (the default /bin/sh on Ubuntu) results in an error:
sh: 1: Syntax error: Bad fd number

As a workaround on Ubuntu, choose 'no' for dash as the default shell:
sudo dpkg-reconfigure dash
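To check which shell /bin/sh currently points at before and after the workaround (the dpkg-reconfigure step assumes a Debian/Ubuntu system):

```shell
# On stock Ubuntu this resolves to dash; after the workaround it is bash.
readlink -f /bin/sh
```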

'Are you sure' alert when editing global config

In the web UI, it's easy to click 'Edit config' for the server by mistake instead of for a specific host. It would be helpful to have an 'Are you sure' alert pop up when the global 'Edit config' link is clicked.

vulnerability CVE-2011-4923

Stumbled across this via the build files for Sébastien Luttringer's package for Arch Linux: there is a vulnerability in lib/BackupPC/CGI/View.pm; see the description at NIST as well as a proposed solution by Jamie Strandboge. Jamie mentions that it's basically the same vulnerability as CVE-2011-3361, with the same fix. AFAICS, the latter one in lib/BackupPC/CGI/Browse.pm was fixed slightly differently from his proposed solution, calling EscHTML on use of the variables rather than when reading them; I cannot comment on the pros and cons of that.

BPC4 host summary

In BPC3, recently backed-up hosts were color-coded (light green by default). This seems to have disappeared in BPC4 (or I have managed to miss the configuration option).
This color-coding was very useful for quickly spotting the latest backed-up hosts. It would be nice to have it back.

Gentoo BPC package

I am working on providing a good working BPC 4.x ebuild package for Gentoo (it is currently not available upstream) at:

https://github.com/sigmoidal/gentoo-overlay (overlay installation and usage instruction enclosed)

app-backuppc/backuppc/backuppc-9999.ebuild pulls the source from git master, so it's nice for testing bleeding-edge BPC 4.x on Gentoo, while app-backuppc/backuppc/backuppc-4.0.0-r1.ebuild is for the respective BPC release.

It works fine for me, but I am still improving it (I have to add some dependencies); some users may find it useful. Note that I have also created ebuilds for the BackupPC::XS and rsync_bpc dependencies (also in that overlay), so when one emerges backuppc with dependencies (emerge -Dva backuppc), they are emerged as well, as expected.

I will improve the BPC ebuild to take into account all dependencies and runtime dependencies, so any feedback would be useful, and I will push it upstream once it is ready.

Fix RunDir for tmpfs systems

Modern distributions use tmpfs for pid files, so init scripts typically need to create a directory in /var/run at boot with permissions allowing non-root apps to store their pid there. BackupPC has this variable in config.pl, which is read after init and as the backuppc user, so that is too late. Instead, the setting needs to be moved out to a more standard location, and the init script needs something like:

if [ ! -d /var/run/BackupPC ]; then
    mkdir -p /var/run/BackupPC
    chown backuppc:backuppc /var/run/BackupPC
    chmod 0755 /var/run/BackupPC
fi

The symptoms when hitting this problem are BackupPC not running and this in the LOG file:
unix bind() failed: Permission denied

One-liner fix for existing installations (note the -i so sed edits the file in place):
sed -i '/test/a\ \nif [ ! -d /var/run/BackupPC ]; then\n mkdir -p /var/run/BackupPC\n chown backuppc:backuppc /var/run/BackupPC\n chmod 0755 /var/run/BackupPC\nfi' /etc/init.d/backuppc
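On systemd-based distributions a tmpfiles.d entry is another option. This is a sketch of my own, not something shipped with BackupPC, and the final copy/activate step must be run as root:

```shell
# tmpfiles.d line format: type path mode user group age.
# The real file would be /etc/tmpfiles.d/backuppc.conf (written as root);
# a temp file is used here so the sketch runs unprivileged.
conf=$(mktemp)
printf 'd /var/run/BackupPC 0755 backuppc backuppc -\n' > "$conf"
cat "$conf"
# as root: cp "$conf" /etc/tmpfiles.d/backuppc.conf && systemd-tmpfiles --create
```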

configure.pl --config-only issue(s): can't open [...]/etc/backuppc/hosts.sample for reading

Hi!

I am trying to automate the build process with the ultimate goal of creating Debian packages.

Current issue:
can't open /home/raoul/git/raoulbhatia/bpc/backuppc/debian/backuppc//etc/backuppc/hosts.sample for reading

Steps to reproduce:

./makeDist --releasedate "`date +'%d %b %Y'`" --version "4.1.0"
cd ./dist/BackupPC-4.1.0/
./configure.pl --config-only --batch --no-fhs --uid-ignore --hostname XXXXXX \
		--bin-path perl=/usr/bin/perl \
		--bin-path tar=/bin/tar \
		--bin-path smbclient=/usr/bin/smbclient \
		--bin-path nmblookup=/usr/bin/nmblookup \
		--bin-path rsync=/usr/bin/rsync \
		--bin-path ping=/bin/ping \
		--bin-path df=/bin/df \
		--bin-path ssh=/usr/bin/ssh \
		--bin-path sendmail=/usr/sbin/sendmail \
		--bin-path hostname=/bin/hostname \
		--bin-path split=/usr/bin/split \
		--bin-path par2=/usr/bin/par2 \
		--bin-path cat=/bin/cat \
		--bin-path gzip=/bin/gzip \
		--bin-path bzip2=/bin/bzip2 \
		--config-dir /etc/backuppc \
		--cgi-dir /usr/share/backuppc/cgi-bin \
		--data-dir /var/lib/backuppc \
		--dest-dir /home/raoul/git/raoulbhatia/bpc/backuppc/debian/backuppc/ \
		--html-dir /usr/share/backuppc/image \
		--html-dir-url /backuppc/image \
		--install-dir /usr/share/backuppc;

Thanks,
Raoul

Remove hardlink test

Since v4 no longer uses hard links, the hard link test can be removed from the daemon code (or run only when $Conf{PoolV3Enabled} is set).
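For context, the test in question boils down to verifying that the pool filesystem supports hard links; a rough stand-alone sketch (not the actual daemon code):

```shell
# Create a scratch file and attempt to hard-link it.
t=$(mktemp -d)
touch "$t/a"
if ln "$t/a" "$t/b" 2>/dev/null; then
    echo "hardlinks supported"
else
    echo "hardlinks NOT supported"
fi
rm -rf "$t"
```

Under the proposal, such a check would only run when $Conf{PoolV3Enabled} indicates a v3 pool is still in use.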

backuppc-3.3.1 is incompatible with Samba 4.3.9

Upgraded to Ubuntu 16.04.1 LTS with backuppc 3.3.1 and samba 4.3.9 installed from the distribution. Since then, no backups over SMB work. The reason seems to be a change in smbclient output that Smb.pm is unaware of. I'm checking whether this fixes it; if it does, I'll create a patch and a pull request into master.

Per-PC basis configuration

It seems the config.pl files for per-PC configuration aren't working. I transferred a custom config.pl file into the backuppc/pc/hostname (my PC's hostname) folder, yet the backup doesn't seem to use the file I made.
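One common pitfall worth checking here (an assumption, since the BackupPC version and paths aren't stated): v4 looks for a per-host file named HOST.pl directly in the pc/ config directory, whereas v3 expects pc/HOST/config.pl under the data directory. A sketch of a v4-style override, using a demo path so it runs anywhere:

```shell
# Hypothetical demo path; a real install would use e.g. <config-dir>/pc/laptop1.pl.
mkdir -p /tmp/bpc-demo/pc
cat > /tmp/bpc-demo/pc/laptop1.pl <<'EOF'
# Per-host overrides are merged on top of the main config.pl.
$Conf{XferMethod} = 'rsync';
$Conf{FullPeriod} = 6.97;
EOF
cat /tmp/bpc-demo/pc/laptop1.pl
```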

Convert README to Markdown format

Would it be OK to convert the README to Markdown format? The current one looks like it's from the 1990s :) I could do it if I knew that such a pull request would be accepted.

Checksum caching for rsync_bpc

As the documentation also mentions, one big v3 feature is not currently implemented in v4's rsync_bpc: checksum caching. It would enable the server side to use cached checksums, with enormous performance benefits.

Split configure.pl into installation and configuration scripts

configure.pl is highly interactive and makes some modifications to source and configuration files. For this reason, porting BackupPC is complicated, and it is nearly impossible to create "binary" packages without major changes to the source code.

To simplify packaging I propose to split configure.pl into two different scripts:

  1. An installation script
    that will make all required modifications to source files, configure distribution-specific parameters like directory locations, and install the files.
  2. A configuration (and upgrade) script
    that will make modifications to configuration files only.

On 16/05/2016 20:37, Alexander Moisseev wrote:

As the FreeBSD sysutils/backuppc port maintainer, I can say one of the first things that should be done is to split configure.pl into two parts: an installation script and a configuration/update script.
It's practically impossible to create a pre-built package with the monolithic configure.pl script, and I didn't find a way to do that. In the end I managed to solve the problem by applying an awful patch to configure.pl that removed the installation part.

On 16.05.16 22:40, Mauro Condarelli wrote:

Sounds very reasonable.
Can you create an issue on GitHub, please?
We need to capture all these issues even if we aren't in a position to respond to them at first release.
Would you be interested in working to provide a patch?
Should we ask for Debian maintainer input and/or support?
This could be handled together with production of a Docker container.

I'm definitely interested in working to provide a patch. Otherwise I'll have to maintain that patch in the port anyway.
I believe we should invite the Debian backuppc package maintainer, Ludovic Drolez, not only to the discussion of this particular issue, but to the backuppc organization. His contacts are available at the Debian package tracker.
I'm not familiar with Docker and do not plan to use or study it in the near future. Are there any specific requirements?

Just to illustrate what the FreeBSD sysutils/backuppc port is doing for now: this makes it possible to create a pre-built package while keeping the ability to automatically update configuration files on version updates.

At the staging (installation) phase it invokes configure.pl in --batch mode:

do-install:
	cd ${WRKSRC} && ${PERL} configure.pl \
		--batch \
		--backuppc-user ${USERS} \
		--bin-path perl=${PERL} \
		--config-dir ${ETCDIR} \
		--cgi-dir ${CGIDIR} \
		--data-dir /var/db/BackupPC \
		--dest-dir ${STAGEDIR} \
		--fhs \
		--html-dir ${WWWDIR} \
		--html-dir-url /${PORTNAME} \
		--install-dir ${PREFIX} \
		--log-dir /var/log/BackupPC \
		--no-set-perms \
		--uid-ignore

After installation, the user should run the update.sh script, which just passes options to the update.pl script:

#!/bin/sh

perl %%PREFIX%%/libexec/backuppc/update.pl \
	--bin-path perl=%%PREFIX%%/bin/perl \
	--config-dir %%ETCDIR%% \
	--cgi-dir %%CGIDIR%% \
	--data-dir /var/db/BackupPC \
	--fhs \
	--html-dir %%WWWDIR%% \
	--html-dir-url /backuppc \
	--install-dir %%PREFIX%% \
	--log-dir /var/log/BackupPC

update.pl is the configure.pl script with the installation directives stripped out:
https://svnweb.freebsd.org/ports/head/sysutils/backuppc/files/patch-update.pl?revision=377862&view=markup

Move BackupPC.html into $Conf{CgiImageDir}

What if we move BackupPC.html into $Conf{CgiImageDir} (--html-dir)?
As it is part of the CGI as well, IMO it should be placed somewhere under the DocumentRoot directory.
Or, if someone needs it, maybe we should install a copy of BackupPC.html in the /share/doc/BackupPC/ directory?

Random tab order in EditConfig

Hello,

On the Edit Config page, the tabs are in an unpredictable order, and selecting one or refreshing the page changes their order.
This happens because, since Perl 5.18, the order of keys in a hash varies from run to run.
Before that, the order was decided at population time and was mostly predictable for a given set of keys fed in a given order.
PR #23 proposes the simple fix of sorting the keys in the relevant foreach loops.

Add command per host when backup is finished

I came across a use case where I needed BackupPC to execute a custom command as soon as a backup of a host has finished. I tried DumpPostUserCmd for this, but quickly noticed that this command is executed when the actual data transfer has finished, not when the execution of BackupPC_dump has finished.

In my case I am running BackupPC on an EC2 instance which I'd like to shut down as soon as all backups have finished. So I basically used the command to touch a file per host; the files are then regularly checked by a cron job. If all hosts are done, the VM is shut down.

For the typical use case (freeze/lock app, backup, unfreeze/unlock) the behaviour is totally understandable: unlock the app as soon as possible and let BackupPC do its thing afterwards. For my use case, however, I'd like the command to be executed as soon as the whole backup has finished, basically right before exit(0) at the end of the script. I'd suggest adding such a new command, executed at the very end of the backup process.

For now I have found a workaround with the help of ngharo from IRC. Parsing the status.pl file before shutting down is one solution; checking the state of the actual script using pidof BackupPC_dump is another. I ended up using the latter, but would prefer simply having a command as described above.
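The pidof workaround might look like this as a cron-driven check (a sketch; the message and the commented-out shutdown step are my own assumptions):

```shell
# If no BackupPC_dump process is running, all per-host dumps have finished.
if ! pidof BackupPC_dump > /dev/null 2>&1; then
    echo "no dumps running; safe to shut down"
    # shutdown -h now   # enable on the real EC2 instance
fi
```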

rsync error: error in rsync protocol data stream

Don't know if it is a bug or just some misconfiguration.

When trying to back up an Ubuntu 12.04 LTS server (tried with both full and incremental), every backup quits with the following message.

rsync error: error in rsync protocol data stream
More logs are at the end.

Is there a way to get more debug data?
By the way, 10 other machines run just fine on the same BackupPC server.

The BackupPC server setup is Ubuntu 16.04 LTS, BackupPC 4.1.1, BackupPC::XS 0.53 and rsync-bpc 3.0.9.6.
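On the debug question: raising $Conf{XferLogLevel} in the host's config file makes the XferLOG more verbose. The snippet below writes to a temp file so it runs anywhere; the real per-host config path depends on your install (that this setup uses per-host files is my assumption):

```shell
# Append a higher transfer log level to a (demo) per-host config file.
# In a real install, edit the host's config file instead of a temp file.
f=$(mktemp)
cat >> "$f" <<'EOF'
$Conf{XferLogLevel} = 6;
EOF
cat "$f"
```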

XferLOG file /data/BackupPC/pc/192.168.0.109/XferLOG.309.z created 2017-03-28 18:23:57
Backup prep: type = full, case = 2, inPlace = 1, doDuplicate = 1, newBkupNum = 309, newBkupIdx = 15, lastBkupNum = , lastBkupIdx = (FillCycle = 0, noFillCnt = 2)
Executing /usr/local/BackupPC/bin/BackupPC_backupDuplicate -h 192.168.0.109
Xfer PIDs are now 4604
Copying v3 backup #304 to v4 #309
bpc_attrib_dirRead: got unreasonable file name length 1048962
bpc_attrib_dirRead: got unreasonable file name length 42999431
setVarInt botch: got negative argument -1275054303; setting to 0
setVarInt botch: got negative argument -2108928000; setting to 0
setVarInt botch: got negative argument -2105343864; setting to 0
Xfer PIDs are now 4604,5565
BackupPC_refCountUpdate: doing fsck on 192.168.0.109 #309 since there are no poolCnt files
BackupPC_refCountUpdate: host 192.168.0.109 got 0 errors (took 216 secs)
Xfer PIDs are now 4604
BackupPC_backupDuplicate: got 0 errors and 0 file open errors
Finished BackupPC_backupDuplicate (running time: 19529 sec)
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /data/BackupPC --bpc-host-name 192.168.0.109 --bpc-share-name / --bpc-bkup-num 309 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 905048 --bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ root --rsync-path=/usr/bin/rsync --super --recursive --protect-args --numeric-ids --perms --owner --group -D --times --links --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --exclude=/dev --exclude=/lost+found --exclude=/media --exclude=/mount --exclude=/proc --exclude=/run --exclude=/sys --exclude=/bin --exclude=/lib --exclude=/mnt --exclude=/sbin --exclude=/tmp --exclude=/var/cache --exclude=/var/log --exclude=/var/tmp --exclude=/images --exclude=/var/lib/dhcp/proc --exclude=/var/lib/named/proc --exclude=/var/spool --exclude=/usr/src --exclude=/var/lib/amavis/virusmails --exclude=/var/lib/dpkg --exclude=/var/lib/mysql 192.168.0.109:/ /
full backup started for directory /
Xfer PIDs are now 5579
This is the rsync child about to exec /usr/local/bin/rsync_bpc
Xfer PIDs are now 5579,5581
xferPids 5579,5581

...Backup run...

rsync_bpc: writefd_unbuffered failed to write 8 bytes to message fd [receiver]: Broken pipe (32)
Done: 0 errors, 4 filesExist, 728454 sizeExist, 134801 sizeExistComp, 0 filesTotal, 0 sizeTotal, 425 filesNew, 1507666699 sizeNew, 1002792794 sizeNewComp, 905843 inode
rsync error: error in rsync protocol data stream (code 12) at io.c(1556) [receiver=3.0.9.6]
rsync_bpc exited with fatal status 0 (11) (rsync error: error in rsync protocol data stream (code 12) at io.c(1556) [receiver=3.0.9.6])
Xfer PIDs are now
Got fatal error during xfer (No files dumped for share /)
Backup aborted (No files dumped for share /)
BackupFailCleanup: nFilesTotal = 0, type = full, BackupCase = 6, inPlace = 1, lastBkupNum =
BackupFailCleanup: inPlace with no new files... no cleanup
Running BackupPC_refCountUpdate -h 192.168.0.109 -f on 192.168.0.109
Xfer PIDs are now 19333
BackupPC_refCountUpdate: host 192.168.0.109 got 0 errors (took 63 secs)
Xfer PIDs are now
Finished BackupPC_refCountUpdate (running time: 63 sec)
Xfer PIDs are now

v4 alpha 3 - Running BackupPC_refCountUpdate when not needed - Performance issue

Copied from the mailing list on 2016-01-22:

I have glanced at the code and also believe that BackupPC_fsck is running
unnecessarily after every backup attempt, whether it is successful or not.

In my xferLogs, BackupPC_refCountUpdate is being called twice at the end of
a backup. Once like this:

Xfer PIDs are now
Running BackupPC_refCountUpdate -h afsgaia1.cas.unc.edu on somehost.unc.edu
xferPids 4508
BackupPC_refCountUpdate: host somehost.unc.edu got 0 errors
BackupPC_refCountPrint: total errors: 0
xferPids
Finished BackupPC_refCountUpdate (running time: 16 sec)

Then again like this:
Running BackupPC_refCountUpdate -h somehost.unc.edu -f -c on somehost.unc.edu
xferPids 4509
BackupPC_refCountUpdate: host somehost.unc.edu got 0 errors
BackupPC_refCountPrint: total errors: 0
xferPids
Finished BackupPC_refCountUpdate (running time: 1334 sec)

The second refCountUpdate includes the "-f -c" args which appear to force
an fsck on the host.

I'm using rsync over ssh and there are no errors reported:

Done: 0 errors, 31 filesExist, 86366561 sizeExist, 47163323 sizeExistComp,
0 filesTotal, 0 sizeTotal, 49 filesNew, 180552022 sizeNew, 58504938
sizeNewComp, 242121 inode
Number of files: 105259
Number of files transferred: 179
Total file size: 2823131278 bytes
Total transferred file size: 292936438 bytes
Literal data: 18830521 bytes
Matched data: 248088062 bytes
File list size: 2308933
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 616768
Total bytes received: 21567585
sent 616768 bytes received 21567585 bytes 36638.07 bytes/sec
total size is 2823131278 speedup is 127.26
DoneGen: 0 errors, 2 filesExist, 8306 sizeExist, 61440 sizeExistComp, 90549
filesTotal, 2823131278 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp,
242138 inode

Unlike Gandalf, my backups are succeeding. I have successfully performed
restores. Fulls and incrs happen as expected. Expiration looks correct per
my config policy.

Yes, exactly. It seems to run the full fsck even when not required.
Preventing the second run is probably easy, but working out why it was
written that way, and ensuring it really isn't needed, is harder.

If you have some extra time to look into the code, then I'm sure many
people would appreciate any analysis you can do.

Missing Pool File(s)

Hello,

is there a way to fix these errors:

G bpc_fileOpen: can't open pool file /backup/BackupPC/cpool/3e/6c/3e6dce697ee175107fda3010112c7d18 (from home/confixx/awstats/web10/awstats122015.web10.txt, 3, 16)
rsync_bpc: failed to open "/home/confixx/awstats/web10/awstats122015.web10.txt", continuing: No such file or directory (2)

I have already tried BackupPC_fsck and BackupPC_refCountUpdate, but I still keep getting the same missing-file errors on every backup.

v4 alpha3 - Problem backing up a directory with large number of files with rsync

Copied from an email sent to the BPC user list on 2015-07-20:
BackupPC tries to back up the host using rsync over SSH.

The problem seems to be that there is a directory with over 700,000 files (images) in it.

This seems to be a large part of the problem (taken from /var/log/kern.log):
Jul 19 08:53:13 keep kernel: [22558938.642495] rsync_bpc[1653]: segfault at 7fc12f007928 ip 0000000000449333 sp 00007fff79684680 error 4 in rsync_bpc[400000+70000]
Jul 19 10:26:55 keep kernel: [22564550.290226] rsync_bpc[8905]: segfault at 7f3a476952a8 ip 0000000000449333 sp 00007fff23fb6e10 error 4 in rsync_bpc[400000+70000]
Jul 19 12:26:47 keep kernel: [22571728.343674] rsync_bpc[17464]: segfault at 7fd4184fbd28 ip 0000000000449333 sp 00007ffff5fbe7c0 error 4 in rsync_bpc[400000+70000]
Jul 19 13:57:40 keep kernel: [22577171.017578] rsync_bpc[23984]: segfault at 7fee9daa59a8 ip 0000000000449333 sp 00007fffa00445b0 error 4 in rsync_bpc[400000+70000]
Jul 19 14:43:35 keep kernel: [22579919.766906] rsync_bpc[27380]: segfault at 7f17ff2df5a8 ip 0000000000449333 sp 00007fff2a16b750 error 4 in rsync_bpc[400000+70000]
Jul 19 15:41:02 keep kernel: [22583360.800624] rsync_bpc[31446]: segfault at 7f5e14f6e5a8 ip 0000000000449333 sp 00007fff8096fd80 error 4 in rsync_bpc[400000+70000]
Jul 19 17:10:34 keep kernel: [22588721.803461] rsync_bpc[2352]: segfault at 7f7cc61605a8 ip 0000000000449333 sp 00007fff567451d0 error 4 in rsync_bpc[400000+70000]
Jul 19 17:44:27 keep kernel: [22590751.533047] rsync_bpc[8697]: segfault at 7f53b7f34928 ip 0000000000449333 sp 00007fff6f146d20 error 4 in rsync_bpc[400000+70000]
Jul 19 18:12:13 keep kernel: [22592413.839811] rsync_bpc[11032]: segfault at 7f9dc452fd28 ip 0000000000449333 sp 00007fffae9d27f0 error 4 in rsync_bpc[400000+70000]
Jul 19 18:40:14 keep kernel: [22594091.148687] rsync_bpc[13181]: segfault at 7f68d9d1cb28 ip 0000000000449333 sp 00007fffb6897070 error 4 in rsync_bpc[400000+70000]
Jul 19 19:07:45 keep kernel: [22595738.996516] rsync_bpc[15129]: segfault at 7f2cdc961a28 ip 0000000000449333 sp 00007fff3fbb3100 error 4 in rsync_bpc[400000+70000]
Jul 19 19:34:52 keep kernel: [22597363.711624] rsync_bpc[17262]: segfault at 7f7974af4128 ip 0000000000449333 sp 00007fffbab514a0 error 4 in rsync_bpc[400000+70000]
Jul 20 01:39:48 keep kernel: [22619216.885463] rsync_bpc[10515]: segfault at 7fc5e009dca8 ip 0000000000449333 sp 00007fff4845e9c0 error 4 in rsync_bpc[400000+70000]
Jul 20 10:09:32 keep kernel: [22649741.538150] rsync_bpc[13981]: segfault at 7fc96a14b728 ip 0000000000449333 sp 00007fff5bda86e0 error 4 in rsync_bpc[400000+70000]
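Until the segfault itself is fixed, one common mitigation is to keep the 700,000-file directory out of the failing rsync run and handle that tree separately. A hypothetical per-host config sketch (the path below is made up for illustration):

```perl
# Hypothetical sketch: exclude the huge image directory from the main
# rsync share so the rest of the host still gets backed up; the image
# tree would then be handled by a separate share or method.
$Conf{BackupFilesExclude} = {
    '/' => ['/data/images'],    # hypothetical path of the 700k-file dir
};
```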

BackupPC_sendEmail

Copied from: http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/bug-in-backuppc-sendemail-sysadmin-mail-sent-to-wrong-user-129678/
In many host configuration files I have an override for EMailAdminUserName.
What happens now is that backuppc's sysadmin mail is sent to the user
specified in EMailAdminUserName of the last host I added.

If you have a look in BackupPC_sendEmail starting from line 181:

foreach my $host ( sort(keys(%Status)) ) {
    # read any per-PC config settings (allowing per-PC email settings)
    $bpc->ConfigRead($host);
    %Conf = $bpc->Conf();

The problem is that after the loop finishes, $Conf{EMailAdminUserName} contains the override from the host configuration processed last, not the value from the main configuration. But it is used further down, at line 386:

if ( $adminMesg ne "" && $Conf{EMailAdminUserName} ne "" ) {
    my $headers = $Conf{EMailHeaders};
    $headers .= "\n" if ( $headers !~ /\n$/ );
    $adminMesg = <<EOF;
To: $Conf{EMailAdminUserName}
Subject: BackupPC administrative attention needed
$headers
${adminMesg}Regards,
PC Backup Genie
EOF
    SendMail($adminMesg);
}

I inserted a line to read the main configuration again after the loop (see
attached patch). Is that the way to do it?
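The patch described would presumably amount to re-reading the main configuration once the per-host loop finishes, something like the sketch below (assuming that ConfigRead() with no host argument reloads the main config; this is not the attached patch itself):

```perl
foreach my $host ( sort(keys(%Status)) ) {
    # per-host processing reads host overrides into %Conf ...
    $bpc->ConfigRead($host);
    %Conf = $bpc->Conf();
}

# Restore the main config so later code (e.g. the admin mail around
# line 386) does not see the last host's EMailAdminUserName override.
$bpc->ConfigRead();
%Conf = $bpc->Conf();
```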

Modern styles for backend, website, logo

I'm willing to make some contributions, but before that I would like to know which of these would be considered for merge:

General

1. Restyled logo (see attachment)

Website

2. New styles using the current content
3. Using third-party libraries (like Bootstrap)
4. Linking to CDN-hosted resources (for third-party libraries)

Back-end

5. Responsive HTML5 theme
6. Reorganizing or modifying content
7. Radically reorganizing or modifying content
8. Using third-party libraries (like Bootstrap)

Restyled logo sample:

logo320

Regards.

Facilitate integration of a windows client

Adapting a Windows machine to become a BackupPC client is a complex activity that may be beyond the skills of an average Windows user. Develop an approach that makes at least "non-fancy" integration easier, and possibly automate this activity.

Branches

Hi Craig,

On the mailing list, you ask about branches. I would just rename "v3.xx" to "release", rename "master" to "development", and delete all the rest (nothing forces you to have a master branch; it's only so common because no one bothers to give a non-default name to the first commit's branch).

When it's time to move v4 from alpha to stable, just do a merge.

feature request: Exec backup type

I love BackupPC; it is very flexible and works very well... but I miss one feature: an Exec backup type.

I want to execute a remote script to do a backup. The backup itself will not be stored in BackupPC, so all I need is to execute a script and save its status and log. My most common use case is backing up some data offsite to AWS S3.

Right now I work around this with tar: I replace the tar command with a script that executes my command and then tars up the log, so BackupPC can see the backup status and a user can check out the log.
It works, but it is a pain to be forced to play with the tar command and its full and incremental options to do what I need, and then fake a tar just to make BackupPC happy.

A simple ssh $host $script $options would be perfect, where $options is derived from whether the backup is full or incremental. Restore could also be a script. The script's exit code, stdout, and stderr should be captured and logged by BackupPC. No files are transferred to BackupPC, so the size for this type of backup could always be zero in the status (or shown inside brackets, to flag a remote backup).

With this, one could use BackupPC as the only interface for all backups. This is useful especially on remote machines with a very slow network, or in the cloud. It could also open the door to using tapes and DVD burners, and to interacting with other backup software.
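The tar workaround described above can be sketched as a small wrapper script (hypothetical; a real setup would substitute something like this for the remote tar command): it runs an arbitrary command, captures its output as a log, and emits that log as a tar archive so BackupPC sees a non-empty, successful backup.

```shell
# Run a command (in practice: ssh "$host" "$script" "$options"),
# save its output as a log file, and package that log as a tar
# archive for BackupPC to store. Exit status is the command's own.
run_and_tar() {
    log=$(mktemp)
    "$@" >"$log" 2>&1
    status=$?
    tar -C "${log%/*}" -cf backup.tar "${log##*/}"
    return $status
}

run_and_tar echo "offsite sync done"
tar -tf backup.tar    # the archive contains just the log file
```

A production version would stream the tar to stdout (as BackupPC's tar transport expects) rather than writing backup.tar; the file form here is just for illustration.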

feature request: backup to remote pool

Hi

This is a new feature request: a central BackupPC server could use other remote BackupPC setups to store the backups.

So imagine one main office with two other remote offices (with slow internet connections). The main office would back up to its own storage, but when backing up the remote offices it would connect to a branch BackupPC located in the remote office and start the backups of the remote office's machines locally. After each backup, the node writes a report, and the main BackupPC can fetch it and update its own status.

This allows one central BackupPC with the list of all backups across three different pools: only one place to manage the backups, only one place to check the status.

Needed changes (guessing):

- ssh could be used to talk and share configs and status, master->nodes only, to increase security.

- Each node can be a plain BackupPC config, but the master could rsync the correct host configs to the other nodes and reload. The BackupPC config is local to each node, including ssh keys.

- Each config would have a "backuppc_node" field with the node name, so a node would only run the backups whose config matches its backuppc_node.

- When a backup finishes, a node would write a backup status file, and the master node can poll that file periodically to update its own "global" backup status.

- For browsing/restoring backups, the master would simply link to the correct node's BackupPC web interface. Worst case, the admin can use an ssh port-forward to access it.

- Of course, the web interface needs a new column/config field for the node name.

- Maybe add the size/free space of the remote pools to the RRD graph.

- Bonus (much less important): if we later rsync the remote pool to the master pool (it could be a different directory, say remote/node/), the master could also browse/restore backups and share the master pool for dedup/checksums.

I think this setup is simple enough that it reuses almost everything already in place and needs only small changes for a very good new feature.

SCGI + systemd not working

I provisioned a new backuppc server (built from git tag 4.1.1) on Debian 8. I'm using the systemd unit file for starting the BackupPC daemon.

The problem is that the SCGI listener doesn't respond when BackupPC is started via systemctl using the included unit file. The commit that broke it is 51e579b. I reverted to Type=forking, which fixes the problem.

The simplest test case is to enable SCGI in the config by setting the port, e.g. $Conf{SCGIServerPort} = 7000;

Then start the backuppc.service via systemctl:
systemctl start backuppc.service

At this point, you should have at least one BackupPC_Admin_SCGI process running, listening on port 7000.

You can test that it's responding to requests by using curl:
curl localhost:7000
The expected output is an immediate empty reply:
curl: (52) Empty reply from server (this is OK because curl doesn't speak SCGI)
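For reference, the reverted workaround amounts to changing the unit back to a forking service, roughly as below (a sketch; the ExecStart path depends on the installation and is assumed here):

```ini
# Fragment of backuppc.service with the poster's workaround applied.
[Service]
Type=forking
ExecStart=/usr/local/BackupPC/bin/BackupPC -d
```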

Question regarding existing / new Files in BackupPC [3.3.0]

Hello,

I'm backing up an old CentOS server with almost no running processes. Today I noticed that, according to the "BackupPC File Size/Count Reuse Summary", about 1/3 of all files on the server appear changed and get backed up again, repeating every day.
As I don't believe that 1/3 of all files on a Linux server change on a daily basis, I wonder whether there is some sort of problem with my BackupPC settings.

Screenshot "BackupPC File Size/Count Reuse Summary": http://temp.in.futureweb.at/backuppc.jpg

Any ideas on this? ;-)

Thank you, bye from Austria
Andreas

RRDTool Graphs

Dear authors,

I can see that the Ubuntu version of BackupPC has a patch that shows pool size and utilisation on the front page. This repository does not have it. Does that mean it is a modification you don't want to incorporate into the code, or simply that you have not had a chance to do so yet?

Thanks,
Maksym

Strange behavior with delete

I had BackupPC 3.x and, after upgrading, moved some PC backups to the V4 format (by doing a full backup). After that I deleted some directories on my PC and created an incremental backup, but I still see those folders in the latest backup. Then I tried creating a full backup, and the folders still persist.

DumpPreShareCmd error, but I haven't set DumpPreShareCmd!

Hello, after upgrading to v4, many of the servers I back up generate an error like this:
DumpPreShareCmd returned error status 0... exiting
but I haven't set any DumpPreShareCmd in the configuration file!

So I ran BackupPC_dump from the command line under the Perl debugger. The function UserCommandRun is called for each directory to back up, at line 904 of BackupPC_dump:
UserCommandRun("DumpPreShareCmd", $shareName);
if ( $? && $Conf{UserCmdCheckStatus} ) {

The first time, '$?' is 0; but the second time '$?' is -1, so the if condition becomes true and the error is generated:

DB<10> n
main::(/usr/local/BackupPC/bin/BackupPC_dump:905):
905: if ( $? && $Conf{UserCmdCheckStatus} ) {
DB<10> p $?
-1

I have no idea why this happens; however, I changed the above code this way:
my $UCR_ret = UserCommandRun("DumpPreShareCmd", $shareName);
if ( $UCR_ret && $Conf{UserCmdCheckStatus} ) {
and now it works.

This is the config file for the server to backup:
$Conf{XferMethod} = 'rsync';
$Conf{RsyncShareName} = ['/etc', '/opt', '/root', '/home/jboss', '/var/www'];
#$Conf{BackupFilesExclude} = ['/home/share/dogana', 'dogana/'];
$Conf{UserCmdCheckStatus} = 1;
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root drpcbatman /root/ReverseDRBackup.sh amelia64 /etc /opt /root /home/jboss /var/www';

And from /etc/BackupPC/config.pl:
$Conf{DumpPreShareCmd} = undef;
...
'DumpPostShareCmd' => 0,

BPC Socket error

This issue is copied from an email sent to the backuppc-users list on 2016-02-11 by Russ Poyner:
I have a BackupPC server running in a jail on FreeBSD 10. The system
mostly works, but recently I've noticed that the BackupPC parent process
is frequently needing to be restarted. I even went to the length of
having a cron job that checks for the existence of the 'BackupPC -d'
process and restarts BackupPC if it's missing.

Examining the logs shows the daemon exiting with:
Got signal PIPE... cleaning up

each time.

Has anyone seen this sort of thing, or perhaps have ideas on how I can
debug it? Finding the PID of the process that sent the signal is not
easy in Perl, but maybe there is some other approach?

cpool files are read during incrementals

rsync_bpc seems to read cpool files even though the backup run is an incremental. This is likely unnecessary.

$ ps axu | grep BackupPC_dump
23954 ? S 0:38 /usr/bin/perl /usr/local/BackupPC/bin/BackupPC_dump -i 10.1.0.2

$ lsof -c rsync_bpc |grep 6r
rsync_bpc 23967 backuppc 6r REG 252,3 1339727 10630613 /srv/backuppc/cpool/8c/00/8c009fd0961ad87695362436629abcde

rsync_bpc - Unable to Read /var/lib/backuppc/pc/hostname/.rsyncdpw5754

Operating system - Ubuntu server 16.04 LTS
BackupPC version: 4

This was initially a BackupPC v3 installation that was upgraded to version 4 for testing.
It was installed using the tar in the release section, following the configuration prompts/configuration wizard.

When starting a backup (xfer method rsyncd), I constantly get the following error:
Got fatal error during xfer (No files dumped for share backups)

When skimming through the logfile I noticed that it can't read a file that appears to be an rsync credentials file. The specific error was:
Unable to Read /var/lib/backuppc/pc/hostname/.rsyncdpw5754

I kicked off a primitive check to see whether the file was even created when the backup started, so I ran the following command:
watch -n 0.1 cp .rsyncdpw* rsyncpw.bak
The credentials file was created, but it only appeared for a fraction of a second (not sure whether this is desired behavior, or whether the file should stay accessible until the rsyncd transfer has started).
The file appears to be empty.

I have a per-host configuration that specifies the rsyncd username and password, so I tested adding a global set of credentials and setting the default xfer method to rsyncd.

I did the same thing, kept a lookout for the credentials file using the same watch command as above, and now the file appears to contain the stored password.

However, I am still getting the same error in the error log for all rsyncd hosts, "Unable to Read /var/lib/backuppc/pc/hostname/.rsyncdpw".

Samba based backups erroneously fail with samba >= 4.1.6

Samba version 4.1.6 or greater changed the way verbose output is handled: per-file backup status lines are no longer generated by the Samba client. BackupPC expects these per-file lines in the output; without them, BackupPC reports that no files were dumped for the share and marks good backups as failed.

Workaround: you can work around the issue with this patch: https://bugzilla.redhat.com/show_bug.cgi?id=1294761
and by using the BackupPC config option (under "Schedule"):

    $Conf{BackupZeroFilesIsFatal} = 0;

Edit: fixed typo in config example

BackupPC_migrateV3toV4 bug?

Is it possible that BackupPC_migrateV3toV4 has a path-length limitation?

I'm migrating a V3 install to a V4 install with the above tool and:
BackupPC_migrateV3toV4: can't read attribute file ./f%2fnas-9-0BKUP/fapps/fspark-1.6.1/fexternal/fmqtt-assembly/ftarget/fstreams/f$global/fassemblyOption/f$global/fstreams/fassembly/f36991b4b5e0678f822e8fcc31508b5195f304c47_0fec8066cd2b4f8dc7ff7ba7a8e0a792939d9f9a/fscala/ftools/fnsc/fmatching/attrib
bpc_attrib_dirRead: got unreasonable file name length 572626

BackupPC does not daemonize

I am trying to manually execute /usr/bin/BackupPC -d as the non-root backuppc user, but it seems that it fails right after: https://github.com/backuppc/backuppc/blob/master/bin/BackupPC#L312

If I run backuppc without the "-d" option, it stays active in the foreground.

There is nothing being logged. However, I checked that $pid does get assigned a normal process ID, but it dies right after L312.

I ran BackupPC with warnings turned on to see if there was anything interesting:

"my" variable $lockFd masks earlier declaration in same scope at /usr/lib/BackupPC/Storage/Text.pm line 474.
"my" variable $locked masks earlier declaration in same scope at /usr/lib/BackupPC/Storage/Text.pm line 474.
Statement unlikely to be reached at /usr/lib/BackupPC/Lib.pm line 1166.
        (Maybe you meant system() when you said exec()?)
Statement unlikely to be reached at /usr/lib/BackupPC/Lib.pm line 1231.
        (Maybe you meant system() when you said exec()?)
Statement unlikely to be reached at /usr/bin/BackupPC line 601.
        (Maybe you meant system() when you said exec()?)
Statement unlikely to be reached at /usr/bin/BackupPC line 773.
        (Maybe you meant system() when you said exec()?)
Possible attempt to separate words with commas at /usr/bin/BackupPC line 1350.
Name "BackupPC::Storage::Text::LOCK" used only once: possible typo at /usr/lib/BackupPC/Storage/Text.pm line 482.
Use of uninitialized value $topDir in string eq at /usr/lib/BackupPC/Lib.pm line 70.
Use of uninitialized value $installDir in string eq at /usr/lib/BackupPC/Lib.pm line 71.
Use of uninitialized value $confDir in string eq at /usr/lib/BackupPC/Lib.pm line 79.
Use of uninitialized value $host in string ne at /usr/lib/BackupPC/Lib.pm line 348.
binmode() on closed filehandle CHILD at /usr/lib/BackupPC/Lib.pm line 1214.
Use of uninitialized value in string ne at /usr/bin/BackupPC line 271.

I am on Gentoo x64, using Perl v5.24.1. I wrote my own ebuilds (maybe that contributes to the problem?) and I'm using BackupPC::XS v0.52 with BackupPC v4.0.0.

Any ideas what may be wrong, or anything interesting to try?

Thanks in advance.

BackupPC_migrateV3toV4 missing and crashes

I downloaded the 4.01 release (.tar.gz) and BackupPC_migrateV3toV4 is missing.

I downloaded the source release and it was there; I'm not sure whether that was intentional.

I launched the migrate tool on a 2.5TB pool I had around; it did a dozen hosts or so, with a dozen backups each, without problems.

Then:

forward. #1473: 99.9%
forward. #1473: 100.0%
BackupPC_migrateV3toV4: converted backup in /var/lib/backuppc/pc/forward./1473; removing /var/lib/backuppc/pc/forward./1473.old
BackupPC_migrateV3toV4: migrating host forward. backup #1474 to V4 (approx 0 files)
Illegal division by zero at /usr/share/backuppc/bin/BackupPC_migrateV3toV4 line 508.
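The crash is consistent with a progress computation dividing by the (zero) estimated file count shown as "approx 0 files". A defensive guard would look something like the following (a sketch with hypothetical variable names, not the actual source at line 508):

```perl
# Hypothetical guard: avoid dividing by a zero file-count estimate
# when computing a progress percentage.
my $pct = $fileCntTotal > 0 ? 100 * $fileCntDone / $fileCntTotal : 0;
```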

Weird permissions on created files/directories

Permission mask: $Conf{UmaskMode} = 23;

As expected, BackupPC V3 creates directories and files with following permissions:

drwxr-x---  backuppc  backuppc
-rw-r-----  backuppc  backuppc

But BackupPC 4.0.0 creates files and directories in the cpool with these permissions (the permission mask is the same, 23):

drwxrwxrwx  backuppc  backuppc
-r--r--r--  backuppc  backuppc

and some files in cpool and pc directories have

-r--r--r-x  backuppc  backuppc
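One thing worth noting for reports like this: $Conf{UmaskMode} = 23 is a decimal literal, and 23 decimal equals 027 octal, which is exactly the mask that produces the V3 permissions shown above. So the configured value itself is fine, and the V4 behavior is what diverges. A quick check of what that umask should yield:

```shell
# 23 (decimal) == 027 (octal); under that umask, new directories get
# 0777 & ~027 = 0750 (drwxr-x---) and new files 0666 & ~027 = 0640.
umask 027
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/file"
stat -c '%a' "$dir/sub" "$dir/file"    # prints 750 then 640
```

(stat -c is GNU coreutils syntax; on BSD use stat -f '%Lp'.)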

BPC4 date format

In the BPC4 web interface, dates are displayed as month/day. The date format should be configurable (best, IMHO) or use the ISO (YYYY-MM-DD) format.

backuppc 4.x on Gentoo

The introduction of makeDist has made using BackupPC from git complicated on Gentoo.
It seems to be mandatory now; or is there an option to skip its use and carry on using the top-level source tree?

It also appears that the OpenRC init scripts have been moved into the systemd folder.

Any help would be great. Thanks.
