Re: "This is a bug."

On Thu, Sep 10, 2015 at 10:51:54AM -0400, Brian Foster wrote:
> On Thu, Sep 10, 2015 at 04:05:30PM +0300, Tapani Tarvainen wrote:
> > On 10 Sep 09:01, Brian Foster (bfoster@xxxxxxxxxx) wrote:
> > 
> > > > It is 2.5GB so not really nice to mail...
> > 
> > > Can you compress it?
> > 
> > Ah. Of course, should've done it in the first place.
> > Still 250MB though:
> > 
> > https://huom.it.jyu.fi/tmp/data1.metadump.gz
> > 
> 
> First off, I see ~60MB of corruption output before I even get to the
> reported repair failure, so this is extremely severe corruption and I
> wouldn't be surprised if it's ultimately beyond repair (not that it
> matters for you, since you are restoring from backups).
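
For anyone following along, the workflow on my end is roughly the
following; the image and log names are just whatever I picked locally:

  $ gunzip data1.metadump.gz
  $ xfs_mdrestore data1.metadump data1.img    # restore the metadata into an image file
  $ xfs_repair -f data1.img 2>&1 | tee repair-pass1.log
  $ du -h repair-pass1.log                    # ~60MB of corruption output
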
> 
> The failure itself is an assert on an error return value that appears
> to have a fallback path, so I'm not really sure why the assert is
> there. I tried just removing it to see what happens. Repair ran to
> completion, but there was a ton of output, write verifier errors, etc.,
> so I'm not totally sure how coherent the result is yet. I'll run
> another repair pass, do some directory traversals and whatnot, and see
> if anything explodes...
> 

FWIW, the follow-up repair came up clean, so it appears (so far) to
have put the fs back together from a metadata standpoint. That said,
over 570k files ended up in lost+found, and who knows whether the
files themselves still contain the expected data now that all of the
bmaps have been fixed up and whatnot.
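
Roughly, that verification pass was (again, the image and mount point
names are just my local choices):

  $ xfs_repair -f data1.img                   # second pass, reports no corruption
  $ mount -o loop data1.img /mnt/scratch
  $ find /mnt/scratch/lost+found | wc -l      # over 570k entries
  $ umount /mnt/scratch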

Brian

> I suspect what's more interesting at this point is what happened to
> cause this level of corruption. What kind of event led to this? Was it
> a pure filesystem crash or some kind of hardware/RAID failure?
> 
> Also, do you happen to know the geometry (xfs_info) of the original
> fs? Repair was showing AG numbers up in the 20k range, and now that
> I've mounted the repaired image, xfs_info shows the following:
> 
> meta-data=/dev/loop0             isize=256    agcount=24576, agsize=65536 blks
>          =                       sectsz=4096  attr=2, projid32bit=0
>          =                       crc=0        finobt=0 spinodes=0
> data     =                       bsize=4096   blocks=1610612736, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=2560, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> So that's a 6TB fs with 24576 allocation groups of 256MB each, as
> opposed to the mkfs default of 6 allocation groups of 1TB each. Is
> that intentional?
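
Spelling out the arithmetic from the xfs_info output above:

  agsize  =      65536 blocks * 4096 bytes/block = 256 MiB per AG
  fs size = 1610612736 blocks * 4096 bytes/block = 6 TiB
  agcount = 6 TiB / 256 MiB                      = 24576 AGs

Geometry like that typically comes from explicit mkfs options rather
than the defaults, something along the lines of
"mkfs.xfs -d agsize=256m ..." or "-d agcount=24576".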
> 
> Brian
> 
> > -- 
> > Tapani Tarvainen

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


