Hello again XFS folks,

I have finally made the time to revisit this, after copying most of my data elsewhere.

On Sun 03/07/11 9:41 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:

> On 7/3/11 11:34 PM, kkeller@xxxxxxxxx wrote:
> > How safe is running xfs_db with -r on my mounted filesystem? I
>
> it's safe.  At worst it might read inconsistent data, but it's
> perfectly safe.

So, here is my xfs_db output.  This is still on a mounted filesystem.

# xfs_db -r -c 'sb 0' -c 'print' /dev/mapper/saharaVG-saharaLV
magicnum = 0x58465342
blocksize = 4096
dblocks = 5371061248
rblocks = 0
rextents = 0
uuid = 1bffcb88-0d9d-4228-93af-83ec9e208e88
logstart = 2147483652
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 91552192
agcount = 59
rbmblocks = 0
logblocks = 32768
versionnum = 0x30e4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 27
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 19556544
ifree = 1036
fdblocks = 2634477046
frextents = 0
uquotino = 131
gquotino = 132
qflags = 0x7
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0

# xfs_db -r -c 'sb 1' -c 'print' /dev/mapper/saharaVG-saharaLV
magicnum = 0x58465342
blocksize = 4096
dblocks = 2929670144
rblocks = 0
rextents = 0
uuid = 1bffcb88-0d9d-4228-93af-83ec9e208e88
logstart = 2147483652
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 91552192
agcount = 32
rbmblocks = 0
logblocks = 32768
versionnum = 0x30e4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 27
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 19528640
ifree = 15932
fdblocks = 170285408
frextents = 0
uquotino = 131
gquotino = 132
qflags = 0x7
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0

A diff immediately shows that dblocks and agcount differ between the two superblocks.  Some other fields also differ, namely icount, ifree, and fdblocks, which I am unsure how to interpret.  But judging from the other threads I quoted, it seems that superblock 0 carries dblocks and agcount values for a 20TB filesystem, and that on a umount the filesystem will therefore become (at least temporarily) unmountable.

I've seen two different routes for trying to correct this issue: either use xfs_db to manipulate the values directly, or run xfs_repair against an xfs_metadump taken from the frozen, read-only-mounted filesystem.  My worry about the latter route is twofold: will I even be able to do a remount, and will I have space for an xfs_metadump of an 11TB filesystem?  I have also seen advice in some of the other threads that xfs_repair can actually make the damage worse (though presumably xfs_repair -n should be safe).

If xfs_db is the better way to go, and if the values xfs_db reports don't change after a umount, would I simply do this?

# xfs_db -x /dev/mapper/saharaVG-saharaLV
xfs_db> sb 0
xfs_db> write dblocks 2929670144
xfs_db> write agcount 32

and then run xfs_repair -n?

A route I used many years ago, on ext2 filesystems, was to specify an alternate superblock when running e2fsck.  Can xfs_repair do something similar?

> Get a recent xfsprogs too, if you haven't already, it scales better
> than the really old versions.

I think I may have asked this in another post, but would you suggest compiling 3.0 from source?  The version that CentOS distributes is marked as 2.9.4, but I don't know what patches they've applied (if any).  Would 3.0 be more likely to help recover the fs?
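One more thought on the metadump route: as I understand it, xfs_metadump copies only the filesystem metadata, not file data, so the dump should be far smaller than 11TB and should compress well.  If that's right, is something like the following the rough procedure?  (The /mnt/elsewhere and /scratch paths below are just placeholders for wherever I have room.)

# xfs_metadump /dev/mapper/saharaVG-saharaLV /mnt/elsewhere/sahara.metadump
# gzip /mnt/elsewhere/sahara.metadump

and then, somewhere with enough scratch space,

# zcat /mnt/elsewhere/sahara.metadump.gz > /scratch/sahara.metadump
# xfs_mdrestore /scratch/sahara.metadump /scratch/sahara.img
# xfs_repair -n -f /scratch/sahara.img

That way xfs_repair never touches the real device until I'm confident about the fix.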
Thanks all for your patience!

--keith

-- 
kkeller@xxxxxxxxx