On 6/8/12 4:45 AM, Christian J. Dietrich wrote:
> Hey all,
>
> I have problems with an XFS volume. Upon discovering the message
> "kernel: XFS (sda3): corrupt inode 3714097 (bad size 16437 for local
> fork, size = 60)."
> I ran xfs_repair /dev/sda3 (/dev/sda3 was unmounted). It reported
> having fixed some errors.
> However, after a while in normal operation, another XFS corruption
> occurred on /dev/sda3. I noticed that repeatedly calling xfs_repair
> always reports and fixes new errors, even if the volume is not
> mounted in between, e.g., "rebuilding directory inode XXX" with
> different (new) values of XXX.
>
> /dev/sda is a 12 TB RAID-10 volume on an Adaptec 51245 controller.
> All disks are online and none is reported faulty.
>
> Naively, I would assume that running xfs_repair once would fix all
> errors. My guess is that the underlying RAID volume (Adaptec 51245
> RAID 10) is somehow invalid (although I cannot find any indicators
> confirming this). Any suggestions?

If you suspect a problem with repair, you can try:

# umount /dev/sda3
# xfs_metadump -o /dev/sda3 - | xfs_mdrestore - filesystem.img
# xfs_repair filesystem.img
# xfs_repair filesystem.img

The image shouldn't take too much space, but 12T might take a little
while to dump. If repair doesn't fix everything the first time, please
let us know.

> I am running CentOS 6.2 (= RHEL 6.2) with kernel
> 2.6.32-220.17.1.el6.x86_64 (most recent) and all OS updates
> installed.

Please make sure that you aren't running the old kmod-xfs (or was it
xfs-kmod?) rpm.

-Eric

> Controller Firmware is the most recent (18948), driver version is
> 1.1-5. HDDs are 2x WD2001FASS, 10x WD2002FAEX.
>
> Thanks in advance,
> Chris

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
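The metadump/mdrestore/repair sequence suggested above can be sketched as a small script. The device and image paths follow the original report (/dev/sda3, filesystem.img), and the helper name repair_check is purely illustrative; the commands themselves need root and xfsprogs, so the invocation is left as a comment:

```shell
#!/bin/sh
# Sketch of the metadump-based repair check: dump only the metadata,
# rebuild it into a sparse image, then run xfs_repair on the image
# twice. The second pass should report a clean filesystem if the
# first pass actually fixed everything.

repair_check() {
    dev=$1
    img=$2
    umount "$dev"
    # -o obfuscates file names in the dump; the image is sparse, so
    # it stays small even for a 12T filesystem.
    xfs_metadump -o "$dev" - | xfs_mdrestore - "$img"
    xfs_repair "$img"
    xfs_repair "$img"   # second pass: should find no further errors
}

# Example invocation (needs root and xfsprogs installed):
# repair_check /dev/sda3 filesystem.img
```

Running repair on the restored image instead of the live device makes the test repeatable without touching the real filesystem again.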
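Eric's note about the old out-of-tree module rpm can be checked with a quick query; the package names kmod-xfs and xfs-kmod come from his remark (the naming varied between repos), so this matches both and is best-effort:

```shell
#!/bin/sh
# Check whether a stale out-of-tree XFS module package is installed.
# On a non-RPM system there is nothing to check.
if command -v rpm >/dev/null 2>&1; then
    found=$(rpm -qa | grep -Ei 'kmod-xfs|xfs-kmod')
    if [ -n "$found" ]; then
        status="stale-module"
        echo "out-of-tree xfs module rpm installed: $found"
    else
        status="clean"
        echo "no kmod-xfs/xfs-kmod rpm found"
    fi
else
    status="no-rpm"
    echo "rpm not available; nothing to check on this system"
fi
```

If a match turns up, removing the package and booting on the distribution kernel's own xfs module rules it out as the source of the corruption.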