On 01/17/2012 03:36 PM, Les Mikesell wrote:
>
> I wouldn't trust any of the software block-dedup systems with my only
> copy of something important - plus they need a lot of RAM which your
> old systems probably don't have either.
>

I am interested in BackupPC; however, from what I read online, ZFS appears to be a very featureful, robust, high-performance filesystem that is heavily used in production environments. It has a feature that lets you specify that once the reference count for a deduplicated block rises above a certain threshold, it should keep two or three copies of that block, and those copies can land on separate storage devices within the pool. It also supports compression.

With BackupPC's deduplication, you're still hosed if your only copy of a file goes bad, so why should block-level deduplication be any worse than file-level deduplication? Furthermore, ZFS builds a great deal of redundancy and recovery capability into its internal filesystem data structures.

Here's a video describing ZFS's deduplication implementation:

http://blogs.oracle.com/video/entry/zfs_dedup

At this point I am only reading about the experience of others, but I am inclined to try it. I back up a MediaWiki/MySQL database, and new records are added to the database largely by appending. Even with compression, it's a pain to back up the whole thing every day. Block-level dedup seems like it would be a good solution for that.

I'm not a big fan of Oracle, but from a technical standpoint ZFS sounds quite good. I'm thinking of trying it on my laptop, because it's supposed to work well for storing things like virtual machines, and if a decent implementation runs on CentOS, why not?

Les, do you run BackupPC on ext3 or ext4 filesystems? I remember someone saying a while back that a filesystem with more inodes than the defaults was required for a substantial BackupPC deployment.

Nataraj
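
P.S. A few untested sketches, in case anyone else wants to experiment. First, to put Les's RAM point in concrete terms: the rule of thumb I keep seeing quoted is roughly 320 bytes of core per unique block in the dedup table (DDT). The numbers below are my own back-of-the-envelope figures, not anything measured:

  # Rough DDT sizing: ~320 bytes of RAM per unique block (commonly
  # quoted figure) at the default 128 KiB recordsize.
  blocks=$(( (1 * 2**40) / (128 * 2**10) ))  # unique blocks in 1 TiB of data
  echo "$blocks unique blocks"               # 8388608
  echo "$(( blocks * 320 / 2**20 )) MiB"     # 2560 MiB (~2.5 GiB) for the DDT alone

So a pool holding a few TiB of unique data wants several GiB of RAM just to keep the dedup table resident, which squares with his point about old hardware.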
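
The knobs I was describing, as best I can tell from the Solaris-era documentation: dedup, compression, and copies are per-dataset properties, while the keep-extra-copies-above-a-reference-count-threshold behavior is the dedupditto pool property. I haven't verified which of these every ZFS port on Linux supports, and the pool/dataset names here are made up:

  zpool create tank mirror /dev/sdb /dev/sdc
  zfs create tank/backups

  zfs set dedup=on tank/backups          # block-level deduplication
  zfs set compression=gzip tank/backups  # transparent compression
  zfs set copies=2 tank/backups          # keep 2 copies of every block

  # Store an extra (ditto) copy of any deduped block whose reference
  # count climbs above 100; 100 is the smallest nonzero value allowed.
  zpool set dedupditto=100 tank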
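
For the MediaWiki/MySQL case, my thinking is that since the database grows mostly by appending, each day's full dump shares nearly all of its leading blocks with the previous day's, so dedup should only charge me for the new tail. Again untested, and the dump command and paths are just illustrative:

  # Daily full dump onto the deduped dataset; identical leading blocks
  # are record-aligned, so ZFS stores only the appended portion anew.
  mysqldump --single-transaction wikidb > /tank/backups/wikidb-$(date +%F).sql

  # One snapshot per day keeps every dump individually recoverable.
  zfs snapshot tank/backups@$(date +%F)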
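
On the inode question, my understanding is that a BackupPC pool is a huge farm of small files and hardlinks, so on ext3/ext4 the inode ratio has to be raised at mkfs time. Something like the following, though the exact ratio is a guess on my part:

  # One inode per 4 KiB of space instead of the usual one per 16 KiB.
  # The ratio can only be set when the filesystem is created.
  mke2fs -t ext4 -i 4096 /dev/sdd1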