I have an MD RAID-1 array of two SATA drives, formatted as XFS. Occasionally, an umount followed by a mount causes the mount to fail with errors that strongly suggest filesystem corruption: usually "bad clientid" with a seemingly arbitrary ID, but occasionally invalid-log errors as well. The one thing all of these failures have in common is that they require xfs_repair -L to recover from. This has already produced a few lost+found entries (and data loss on recently written files).

I originally noticed the problem as mount failures at boot, but I've managed to reproduce it reliably with this script:

    while true; do
        mount /store
        (cd /store && tar xf test.tar)
        umount /store
        mount /store
        rm -rf /store/test-data
        umount /store
    done

test.tar contains around 100 files inside test-data/, ranging in size from a few hundred KB to around 5-6 MB. The failure triggers within minutes of starting this loop.

I'm not certain this is XFS-specific, but the same script does run successfully overnight on the same MD array formatted with ext3. This is on an ARM system running kernel 2.6.39.

Has something like this been seen before?

Thanks,
Nuno

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
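In case anyone wants to reproduce this without my exact archive, here is a sketch of how an equivalent test.tar could be generated. The file names and the exact size distribution are placeholders of my own choosing (the original archive wasn't posted); only the rough shape matches the setup above: ~100 files in test-data/, each a few hundred KB to about 6 MB.

```shell
# Sketch: build a test.tar roughly matching the description above.
# File names and the per-file sizes are assumptions, not the originals.
rm -rf test-data
mkdir -p test-data
for i in $(seq 1 100); do
    # Deterministic pseudo-random size between ~200 KB and ~6 MB.
    kb=$(( (i * 97) % 5800 + 200 ))
    dd if=/dev/urandom of="test-data/file-$i" bs=1024 count="$kb" 2>/dev/null
done
tar cf test.tar test-data
```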