Rather than dedupe at the filesystem level, I found the application-level dedupe in BackupPC works really well... I've run BackupPC on both a big ZFS volume and on a giant XFS-over-LVM-over-MDRAID volume (24 x 3TB disks organized as 2 x 11 RAID6 plus 2 hot spares).

The BackupPC server I built at my last $job kept 30 days of daily incrementals and 12 months of monthlies for about 25 servers+VMs (a mix of Linux, Solaris, AIX, and Windows). The dedupe is done globally at the file level, so no matter how many instances of a file appear across all those backups (up to (30+12) * 25 = 1,050), there's only one copy in the 'hive' (rough sketch of the idea below my sig).

As a bonus, BackupPC has a nice web UI for retrieving backups. I could create accounts for my various developers, and they could retrieve stuff from any covered date on any of the servers they had access to without my intervention. About the only manual intervention I ever needed over the several years this was running was deleting a stale PID file for the Windows rsync client after an unexpected reboot.

--
-john r pierce
recycling used bits in santa cruz
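p.s. since the pooling trick is the whole magic here, below is a minimal Python sketch of file-level dedupe via a content-addressed pool of hardlinks. To be clear, this is my illustration, not BackupPC's actual code (the real pool also handles hash collisions and compresses pooled files), and the POOL path is made up:

import hashlib
import os
import shutil

POOL = "/var/lib/backups/pool"   # hypothetical pool location (the 'hive')

def pool_file(src: str) -> str:
    """Store src in the pool if its contents are new; return the pooled path."""
    h = hashlib.sha1()
    with open(src, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    # fan out into subdirs so no single directory holds millions of files
    pooled = os.path.join(POOL, digest[:2], digest[2:4], digest)
    if not os.path.exists(pooled):
        os.makedirs(os.path.dirname(pooled), exist_ok=True)
        shutil.copy2(src, pooled)    # first instance pays the storage cost
    return pooled

def store_backup_copy(src: str, dest: str) -> None:
    """Record src in a backup tree as a hardlink to the pooled copy."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    os.link(pool_file(src), dest)    # every later instance is just a link

So the 1,050 identical copies of, say, /etc/hosts across all those backups end up as one inode plus 1,050 directory entries. The catch is that hardlinks can't cross filesystems, so the pool and all the per-backup trees have to share a volume, which is part of why the box above was one giant XFS (or ZFS) filesystem.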