Discussion:
[BackupPC-devel] rsync_bpc syntax error with 4.1.0
Richard Shaw
2017-03-26 15:40:04 UTC
Permalink
I've updated my CentOS 7 box to BackupPC 4.1.0 to continue testing and ran
into the following with a manual backup:

This is the rsync child about to exec /usr/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/bin/rsync_bpc --bpc-top-dir
/var/lib/BackupPC/ --bpc-host-name hobbes --bpc-share-name
/home/richard/Documents --bpc-bkup-num 55 --bpc-bkup-comp 3
--bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 3945
--bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ root --super
--recursive --protect-args --numeric-ids --perms --owner --group -D --times
--links --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\
%8U,%8G\ %9l\ %f%L --stats hobbes:/home/richard/Documents/ /
rsync error: syntax or usage error (code 1) at options.c(1320)
[server=3.0.9.6]
rsync_bpc: connection unexpectedly closed (0 bytes received so far)
[Receiver]
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0
sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 3945 inode
rsync error: error in rsync protocol data stream (code 12) at io.c(629)
[Receiver=3.0.9.6]
rsync_bpc exited with fatal status 12 (3072) (rsync error: error in rsync
protocol data stream (code 12) at io.c(629) [Receiver=3.0.9.6])

This is with the stock arguments, so I don't know what part of the command
rsync_bpc doesn't like.

Thanks,
Richard
Richard Shaw
2017-03-26 18:28:21 UTC
Permalink
I figured it out... Because everything (rsync, ping, ping6, tar, par, etc.) is
auto-detected, it all has to be present in the buildroot during the install
even though none of it is a requirement for the build itself.

I plan on passing all of these as overrides to configure.pl so I don't have to
pull in all those packages and their dependencies during package building.
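For anyone packaging it the same way: configure.pl's batch mode takes explicit
--bin-path overrides, so the spec file can pin each path instead of relying on
detection. A minimal sketch of the idea (paths and program names are
illustrative for CentOS 7; check configure.pl --help on your version for the
exact set it probes for):

    ./configure.pl --batch \
        --bin-path rsync=/usr/bin/rsync \
        --bin-path ping=/usr/bin/ping \
        --bin-path ssh=/usr/bin/ssh \
        --bin-path tar=/usr/bin/tar

With the paths pinned like that, none of those tools has to be installed in the
buildroot at install time.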

Thanks,
Richard
Bill Broadley
2017-03-27 10:24:28 UTC
Permalink
I had a 2.7TB pool for 38 hosts, actual size after deduplication and compression.

I just upgraded to 4.0.1 and did a V3 to V4 pool migration.

The storage penalty was pretty small, about 2% or 52GB.

The inode overhead was substantial, just over a factor of 2. In my case it went
from 25,348,763 inodes (around 11% of the ext4 default inode count) to
51,809,068 (around 22% of the default).

If you are upgrading you might want to ensure that your V3 inode usage is less
than 50% of what the filesystem is capable of.
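A quick way to check that up front is plain df (nothing BackupPC-specific;
point it at whichever filesystem holds your pool, /backuppc in my case):

    df -i /backuppc

If IUse% is already close to 50%, expect to run short once the migration
roughly doubles it.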

The above migration took 52 hours or so on a RAID 5 of 4 disks on a server
that's a few years old.

To watch the migration, I graphed inode usage over time:
http://broadley.org/bill/backuppc.png



Craig Barratt
2017-03-30 00:32:30 UTC
Permalink
Bill,

That's interesting data.

I'm not sure why the inode use goes up. Has it stayed at the higher level
after BackupPC_nightly has run? Is the old V3 pool now empty?

Craig
Post by Bill Broadley
I had a 2.7TB pool for 38 hosts, actual size after deduplication and compression.
I just upgraded to 4.0.1 and did a V3 to V4 pool migration.
The storage penalty was pretty small, about 2% or 52GB.
The inode overhead was substantial, just over a factor of 2. In my case it went
from 25,348,763 inodes (around 11% of the ext4 default inode count) to
51,809,068 (around 22% of the default).
If you are upgrading you might want to ensure that your V3 inode usage is less
than 50% of what the filesystem is capable of.
The above migration took 52 hours or so on a RAID 5 of 4 disks on a server
that's a few years old.
http://broadley.org/bill/backuppc.png
Bill Broadley
2017-03-30 00:57:44 UTC
Permalink
Post by Craig Barratt
Bill,
That's interesting data.
I'm not sure why the inode use goes up. Has it stayed at the higher level after
BackupPC_nightly has run?
Yes.
Post by Craig Barratt
Is the old V3 pool now empty?
Yes. The V3-to-V4 migration doesn't report anything, shows no errors, and
finishes quickly; I also disabled the V3 pool in config.pl. The daily
report/chart doesn't show any V3 usage.

I didn't think hardlinks consumed extra inodes. A file hardlinked 5 times has 5
directory entries but only one inode.

Is it possible BackupPC consumes 2?
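(If you want to convince yourself of that on any filesystem, plain coreutils is
enough:

    touch f; ln f f2; ln f f3
    ls -li f f2 f3       # all three names show the same inode number
    stat -c '%h %i' f    # link count 3, still a single inode

The extra links add directory entries, not inodes.)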


Craig Barratt
2017-03-30 01:13:57 UTC
Permalink
Bill,

Sure, I agree that multiple hardlinks only consume one inode. Each v3 pool
file (when first encountered in a v3 backup) should get moved to the new v4
pool. So that shouldn't increase the number of inodes. The per-directory
backup storage in v4 should be more efficient; I'd expect one less inode
per v4 directory. v4 does add some reference count files per backup (128),
but that's rounding error.

Can you look in the V3 pool? E.g., is $TOPDIR/cpool/0/0/0 empty? It could be
that it didn't get cleaned if you turned off the V3 pool before BackupPC_nightly
next ran. If so, I'd expect the old v3 pool to be full of v3 attrib files, each
with one link (i.e., not used any longer).
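If it helps, a rough way to check for that is standard find (assuming $TOPDIR
is the pool top directory; link count 1 means nothing outside the pool
references the file any more):

    find $TOPDIR/cpool -type f -links 1 | wc -l

A large count there would point at leftover v3 pool entries that
BackupPC_nightly never got a chance to expire.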

Craig
Post by Bill Broadley
Post by Craig Barratt
Bill,
That's interesting data.
I'm not sure why the inode use goes up. Has it stayed at the higher level after
BackupPC_nightly has run?
Yes.
Post by Craig Barratt
Is the old V3 pool now empty?
Yes. The V3-to-V4 migration doesn't report anything, shows no errors, and
finishes quickly; I also disabled the V3 pool in config.pl. The daily
report/chart doesn't show any V3 usage.
I didn't think hardlinks consumed extra inodes. A file hardlinked 5 times has 5
directory entries but only one inode.
Is it possible BackupPC consumes 2?
Bill Broadley
2017-03-30 01:27:38 UTC
Permalink
Post by Craig Barratt
Bill,
Sure, I agree that multiple hardlinks only consume one inode. Each v3 pool file
(when first encountered in a v3 backup) should get moved to the new v4 pool. So
that shouldn't increase the number of inodes. The per-directory backup storage
in v4 should be more efficient; I'd expect one less inode per v4 directory. v4
does add some reference count files per backup (128), but that's rounding error.
Can you look in the V3 pool? E.g., is $TOPDIR/cpool/0/0/0 empty?
Yes:
***@fs1:/backuppc/cpool/0/0/0# ls -al
total 116
drwxr-x--- 2 backuppc backuppc 110592 Mar 28 01:01 .
drwxr-x--- 18 backuppc backuppc 4096 Jan 24 2013 ..
Post by Craig Barratt
It could be that it didn't get cleaned if you turned off the V3 pool before
BackupPC_nightly next ran. If so, I'd expect the old v3 pool to be full of v3
attrib files, each with one link (i.e., not used any longer).
Currently:
***@fs1:~# df -i /backuppc/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md3 238071808 53427304 184644504 23% /backuppc

It was around 11% with V3; it's now 23%, which still agrees with the plot I made.

I've not added hosts, changed the number of incremental or full backups, or made
any other changes that should increase the inode count.

As each backup (like #1405) was migrated, its directory would be renamed to
.old, migrated, and then removed. So there would be a steep increase in inodes,
then a drop, but never back to the original number.

I have two backuppc servers, each with different pools of clients; they both
went from approximately 11% of the filesystem's inodes being used to 22%.







Craig Barratt
2017-03-30 02:14:25 UTC
Permalink
Bill,

My previous statement wasn't correct. In V4, each directory in a backup
tree consumes 2 inodes, one for the directory and the other for the (empty)
attrib file. In V3, each directory in a backup tree consumes 1 inode for
the directory, and everything else is hardlinked, including the attrib
file.

So when you migrate a V3 backup, the number of inodes to store the backup
trees will double, as you observe. The pool inode usage shouldn't change
much, but with lots of backups the former number dominates.
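As a sanity check, that ratio works out from the numbers reported earlier in
the thread (plain bc, nothing BackupPC-specific):

    $ echo "scale=2; 51809068 / 25348763" | bc
    2.04

i.e. right around the factor of 2, with the backup trees dominating the total.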

In a new V4 installation the inode usage will be somewhat lower, since in
V4 incrementals don't store the entire backup tree (just the directories
that have changes get created). In a series of backups where the directory
contents change every backup, including the pool file, V4 will use 3 inodes
per backup directory (directory, attrib file, pool file), while V3 will use
2 (directory, {attrib, pool} linked). So the inode usage is 1.5 - 2x.

I'll add a mention of the inode usage to the documentation.

Craig
Post by Bill Broadley
Post by Craig Barratt
Bill,
Sure, I agree that multiple hardlinks only consume one inode. Each v3 pool file
(when first encountered in a v3 backup) should get moved to the new v4 pool. So
that shouldn't increase the number of inodes. The per-directory backup storage
in v4 should be more efficient; I'd expect one less inode per v4 directory. v4
does add some reference count files per backup (128), but that's rounding error.
Can you look in the V3 pool? E.g., is $TOPDIR/cpool/0/0/0 empty?
Yes:
***@fs1:/backuppc/cpool/0/0/0# ls -al
total 116
drwxr-x--- 2 backuppc backuppc 110592 Mar 28 01:01 .
drwxr-x--- 18 backuppc backuppc 4096 Jan 24 2013 ..
Post by Craig Barratt
It could be that it didn't get cleaned if you turned off the V3 pool before
BackupPC_nightly next ran. If so, I'd expect the old v3 pool to be full of v3
attrib files, each with one link (i.e., not used any longer).
Currently:
***@fs1:~# df -i /backuppc/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md3 238071808 53427304 184644504 23% /backuppc
It was around 11% with V3; it's now 23%, which still agrees with the plot I made.
I've not added hosts, changed the number of incremental or full backups, or made
any other changes that should increase the inode count.
As each backup (like #1405) was migrated, its directory would be renamed to
.old, migrated, and then removed. So there would be a steep increase in inodes,
then a drop, but never back to the original number.
I have two backuppc servers, each with different pools of clients; they both
went from approximately 11% of the filesystem's inodes being used to 22%.