RE: Unable to mount and repair filesystems

> -----Original Message-----
> > # xfs_db /dev/os/opt
> > Metadata corruption detected at block 0x4e2001/0x200
> 
> so at sector 0x4e2001, length 0x200.
> 
> xfs_db> agf 5
> xfs_db> daddr
> current daddr is 5120001
> 
> so it's the 5th AGF which is corrupt.
> 
> you could try:
> 
> xfs_db> agf 5
> xfs_db> print
> 
> to see how it looks.

That gives me this:

xfs_db> agf 5
xfs_db> daddr
current daddr is 5120001
xfs_db> print
magicnum = 0
versionnum = 0
seqno = 0
length = 0
bnoroot = 0
cntroot = 0
bnolevel = 0
cntlevel = 0
flfirst = 0
fllast = 0
flcount = 0
freeblks = 0
longest = 0
btreeblks = 0
uuid = 00000000-0000-0000-0000-000000000000
lsn = 0
crc = 0 (correct)
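(For what it's worth, the sector in the corruption message really does line up with that daddr; converting the hex address to decimal, assuming 0x4e2001 in the verifier message is a 512-byte sector address:)

```shell
# Convert the hex daddr from "Metadata corruption detected at block
# 0x4e2001/0x200" to decimal; it should match the
# "current daddr is 5120001" reported by xfs_db for agf 5.
printf '%d\n' 0x4e2001
```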

 
> > xfs_db: cannot init perag data (117). Continuing anyway.
> > xfs_db> sb 0
> > xfs_db> p
> > magicnum = 0x58465342
> 
> this must not be the one that repair failed on like:
> 
> > couldn't verify primary superblock - bad magic number !!!
> 
> because that magicnum is valid.  Did this one also fail to repair?

How do I know/check/test whether "this one" also failed to repair? I'm not sure which superblock you're referring to (or what to do with it).
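If it helps, this is the kind of dry run I could try to see whether repair still fails here (a sketch only: -n makes no modifications, and the device path is the one from earlier in this thread):

```shell
# No-modify pass: report what xfs_repair would change without
# touching the disk. The filesystem must be unmounted first.
umount /opt
xfs_repair -n /dev/os/opt
# If repair refuses to run because the log is dirty, mounting and
# cleanly unmounting replays the log; zeroing it with -L is a last
# resort that can lose recent metadata updates.
```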

> > agcount = 25
> 
> 25 ags, presumably the fs was grown in the past, but ok...

Yes, it was. We ran out of space, so I increased the size of the logical volume and then used xfs_growfs to grow the filesystem itself. That was the whole reason for using LVM: the growth can be done on a live system without repartitioning and such.
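For the record, the grow procedure I used was essentially this (the size is from memory, so treat it as a sketch):

```shell
# Grow the logical volume, then grow the mounted XFS to fill it.
# xfs_growfs works online and takes the mount point, not the device.
lvextend -L +10G /dev/os/opt
xfs_growfs /opt
```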

I did read today that growing an XFS filesystem is not necessarily something we should be doing. Some posts even suggest that LVM and XFS shouldn't be mixed at all. I'm not sure how to separate truth from fiction.
 
> The only thing I can say is that xfs is going to depend on the storage telling
> the truth about completed IOs...  If the storage told XFS an IO was persistent,
> but it wasn't, and the storage went poof, bad things can happen.  I don't
> know the details of your setup, or TBH much about vmware over nfs ... you
> weren't mounted with -o nobarrier were you?

No, I wasn't mounted with nobarrier, unless it is enabled by default; I never specified the option on the command line or in /etc/fstab, for whatever that is worth.
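To double-check, I looked at the options the kernel actually applied (from /proc/mounts) rather than what fstab requested. A minimal sketch of that test, using a sample option string since the option list varies per system:

```shell
# On the live system the option string would come from:
#   awk '$2 == "/opt" { print $4 }' /proc/mounts
# Here a sample string stands in for it (assumption for illustration).
opts='rw,relatime,attr2,inode64,noquota'
case ",$opts," in
  *,nobarrier,*) echo "barriers disabled" ;;
  *)             echo "barriers enabled (default)" ;;
esac
```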

Gerard 

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



