Hi Dave,
I've found xfs_bmap and did a few experiments with dd. It looks to me as
though it's a RAID1 sync problem -- I got two different versions of the
data during repeated reads from the device with cache drops in between.
So, although the cause of the problem is still unclear, it's definitely
not XFS. Thanks for the hint!
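
For the record, the check was roughly the loop below. The device path
and block offset are placeholders -- the real offset came from the
xfs_bmap output for one of the affected files (xfs_bmap -v reports
extents in 512-byte units, so it needs converting for bs=4096):

    # Read the same 4KiB block repeatedly, dropping the page cache in
    # between so every read comes from disk (and, on RAID1, potentially
    # from a different mirror leg).
    for i in $(seq 1 10); do
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/dev/vg0/lv_data bs=4096 skip=123456 count=1 2>/dev/null \
            | md5sum
    done

Two different sums showing up in that loop is what convinced me the
mismatch lives below the filesystem.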
On 11/12/2011 23:53, Dave Chinner wrote:
> On Sun, Dec 11, 2011 at 01:21:37PM +0000, Dmitry Panov wrote:
>> Hi guys,
>>
>> I have a 2TiB XFS filesystem which is about 60% full. Recently I've
>> noticed that the daily incremental backup reports content changes
>> for files that are not supposed to change.
>
> What kernel/platform? What version of xfsprogs? What kind of
> storage?
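
(A quick way to gather those details -- the last two commands assume
the md-RAID1-under-LVM stack that turned out to be in play here:)

    uname -a            # kernel version and platform
    xfs_repair -V       # xfsprogs version
    cat /proc/mdstat    # state of the md RAID1 array
    lvs -o +devices     # which physical devices back the LV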
>> I created an LVM snapshot and ran xfs_check/xfs_repair. xfs_check
>> did report a few problems (unknown node type). After that I ran a
>> simple test: mount, calculate the md5 of the problematic files,
>> report if it changed, umount, sleep 10 sec. That script reported
>> that the md5 sum of at least one file changed on every cycle.
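
(A sketch of that test, with illustrative device and mount paths; it
compares each pass against the previous one:)

    while true; do
        mount -o ro /dev/vg0/snap /mnt/snap
        md5sum /mnt/snap/music/*.wav > /tmp/sums.new
        umount /mnt/snap
        # diff against the previous pass; any output means a file's
        # contents changed between mounts of a read-only snapshot
        [ -f /tmp/sums.old ] && diff /tmp/sums.old /tmp/sums.new
        mv /tmp/sums.new /tmp/sums.old
        sleep 10
    done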
>
> That sounds like you've got a dodgy drive.
>> Analyzing the differences I found that a 4k block that should
>> contain all zeros sometimes contains random garbage (luckily most of
>> the files are PCM WAVs, so it's easy to verify). However, I did not
>> analyze every occurrence, so this may not be 100% true. The files do
>> not appear to be sparse according to du. Interestingly, one of them
>> appears to occupy one block more than necessary.
>
> XFS can allocate blocks beyond EOF - it's completely valid to do so.
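
(Two quick checks bear on this, with an illustrative filename: whether
the extra block is just allocation beyond EOF, and whether a suspect
4KiB region really is all zeros:)

    # Allocated blocks well past size/512 point at allocation beyond
    # EOF rather than a sparse hole being filled with garbage.
    stat -c 'size=%s blocks=%b' file.wav
    xfs_bmap -v file.wav    # extent list; offsets in 512-byte units

    # Count non-zero bytes in the suspect 4KiB block; prints 0 if the
    # block is entirely zeros.
    dd if=file.wav bs=4096 skip=100 count=1 2>/dev/null \
        | tr -d '\0' | wc -c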
>>
>> Then I did cp -a file newfile, mv newfile file and re-ran the test.
>> No problems reported since.
>
> So the file is now in a different physical location on disk.
> Definitely sounds like a dodgy disk to me.
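
(One way to confirm the rewrite really moved the data -- compare the
extent maps before and after; the filename is again illustrative:)

    xfs_bmap -v file.wav > /tmp/extents.before
    cp -a file.wav file.wav.new && mv file.wav.new file.wav
    xfs_bmap -v file.wav > /tmp/extents.after
    diff /tmp/extents.before /tmp/extents.after   # should differ

If the old block range still reads back inconsistently via dd on the
raw device, that pins the problem on the storage, not the file.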
>>
>> As there were a few unclean umounts, I think it is most likely
>> filesystem corruption that went unspotted by xfs_repair. That would
>> not surprise me too much because xfs_repair took just 3.5 minutes.
>
> The run time of xfs_repair is determined by how much IO it needs to
> do to read all the metadata. Your filesystem is not all that densely
> populated with metadata, so it doesn't take very long to run. The
> short runtime does not mean it hasn't checked your filesystem
> properly.
>
> Think about scale for a minute - take your filesystem and scale it
> linearly in all dimensions - a repair rate of 1.5 minutes per TB
> means 2.5 hours for a 100TB filesystem or a day for a PB-sized
> filesystem. The speed you are seeing doesn't seem quite so fast now,
> does it?
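
(Spelled out, that scaling works like this:

    1.5 min/TB x  100 TB =  150 min = 2.5 hours
    1.5 min/TB x 1000 TB = 1500 min = ~25 hours, about a day)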
>>
>> Any ideas? I could just copy the files and pretend nothing happened,
>> but is there a guarantee that doing so won't corrupt other data?
>
> I'd start by replacing hardware....
>
> Cheers,
>
> Dave.
--
Dmitry Panov