On 10/18/2011 12:52 AM, Dave Chinner wrote:
> A corrupt inode allocation btree - not a particularly common type of
> corruption to be reported. Do you know what caused the errors to
> start being reported? A crash, a bad disk, a raid rebuild, something
> else? That information always helps us understand how badly damaged
> the filesystem might be....

We had a hard disk failure on an Areca 1680 RAID controller, RAID 6.
I checked the controller firmware, and the latest available version is
already installed.

> And I'd guess that is failing on a different problem - a corrupt
> inode most likely. You've built xfs_repair from the source code -
> can you run it under gdb so we can see where it is dying?

I already ran it under gdb and sent some mails to Christoph Hellwig.
He found an *issue* in the xfsprogs/repair/attr_repair.c code and sent
me a patch that fixed it. Now "xfs_repair -n -P /dev/sdb1" runs
without errors.

But before repairing this filesystem I have to rsync as much of its
data as I can to another XFS filesystem, which is not yet available.
So it will take a couple of days before I can run xfs_repair on it.

> That sounds like there's a *lot* of damage to the filesystem. That
> makes it even more important that we understand what caused the
> damage in the first place....

Yes, lots of damage. 8(

Thanks for your help,
Richard

-- 
Richard Ems   mail: Richard.Ems@xxxxxxxxxxxxxxxxx

Cape Horn Engineering S.L.
C/ Dr. J.J. Dómine 1, 5º piso
46011 Valencia
Tel : +34 96 3242923 / Fax 924
http://www.cape-horn-eng.com
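
For reference, the gdb run mentioned above looks roughly like this - a
minimal sketch only, assuming an in-tree xfsprogs build with debug
symbols and the same /dev/sdb1 device; the exact path to the binary
will differ on other setups:

    # Start the no-modify check under gdb so a crash leaves a usable backtrace
    gdb --args ./repair/xfs_repair -n -P /dev/sdb1
    (gdb) run
    # After it aborts or segfaults, capture the full backtrace for the list
    (gdb) bt full

The -n flag keeps xfs_repair from writing to the device and -P disables
prefetching, as in the command quoted above.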