I was talking to Ric about dump benchmarks, and he was of the impression
that dump may not be used that often anymore, at least in the enterprise.
(Ric, hope I'm paraphrasing correctly)

Undaunted :) I ran off and tested an artificial backup scenario:

* Untar a kernel tree into 128 different top-level dirs
* Make a level 0 backup
* Untar a kernel tree into 128 MORE different top-level dirs
* Make a level 1 backup

128 kernel trees use about 6.5M inodes and about 80G of space.

I tested ext3 with dump, ext4 with tar, and xfs with xfsdump; the level 1
invocations were:

for ext3:
  dump -1 -u -f $DUMPDIR/dump1 $DATADIR

for ext4:
  tar --atime-preserve --xattr --after-date=$DUMPDIR/dump0.tar -cf $DUMPDIR/dump1.tar $DATADIR

for xfs:
  xfsdump -F -l 1 -f $DUMPDIR/dump1 $DATADIR

DUMPDIR and DATADIR were 2 partitions on the same (fast hardware) LUN.

Results:

         level 0    level 1
         -------    -------
  ext3    38m52s     42m21s
  ext4    57m55s     69m35s
  xfs     25m18s     37m44s

Clearly tar on ext4, at least for my incantation, lags.

Is dump for ext4 anywhere on the todo list, or should it be?
Or am I just running tar wrong? :)

-Eric
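
P.S. for anyone who wants to reproduce this, the procedure boils down to
something like the sketch below. It is only a rough sketch of the ext3 run:
the paths and the kernel tarball name are placeholders, and for the ext4
and xfs runs the dump lines get swapped for the tar / xfsdump commands
shown above.

#!/bin/bash
# Rough reproduction sketch, not the exact harness that produced the
# numbers above.  DATADIR, DUMPDIR, and KERNEL_TARBALL are placeholders.
set -e

DATADIR=/mnt/data             # filesystem under test (ext3 in this case)
DUMPDIR=/mnt/dump             # separate partition on the same LUN
KERNEL_TARBALL=/tmp/linux.tar

# Phase 1: untar a kernel tree into 128 different top-level dirs,
# then time a level 0 backup of the whole filesystem.
for i in $(seq 0 127); do
    mkdir "$DATADIR/tree-$i"
    tar -C "$DATADIR/tree-$i" -xf "$KERNEL_TARBALL"
done
time dump -0 -u -f "$DUMPDIR/dump0" "$DATADIR"

# Phase 2: untar a kernel tree into 128 MORE top-level dirs, then time
# a level 1 backup, which should pick up only the new trees.
for i in $(seq 128 255); do
    mkdir "$DATADIR/tree-$i"
    tar -C "$DATADIR/tree-$i" -xf "$KERNEL_TARBALL"
done
time dump -1 -u -f "$DUMPDIR/dump1" "$DATADIR"

The level 0 and level 1 columns in the table are the wall-clock times of
the two backup steps, timed separately.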