On Thu, Sep 10, 2015 at 08:31:38PM +0300, Tapani Tarvainen wrote:
> On Thu, Sep 10, 2015 at 10:51:54AM -0400, Brian Foster (bfoster@xxxxxxxxxx) wrote:
>
> > First off, I see ~60MB of corruption output before I even get to the
> > reported repair failure, so this appears to be an extremely severe
> > corruption and I wouldn't be surprised if it's ultimately beyond repair.
>
> I assumed as much already.
>
> > I suspect what's more interesting at this point is what happened to
> > cause this level of corruption? What kind of event led to this? Was it
> > a pure filesystem crash or some kind of hardware/raid failure?
>
> Hardware failure. Details are still a bit unclear, but apparently the
> raid controller went haywire, offlining the array in the middle of
> heavy filesystem use.
>
> > Also, do you happen to know the geometry (xfs_info) of the original fs?
>
> No (and xfs_info doesn't work on the copy made after the crash, as it
> can't be mounted).
>
> > Repair was showing agno's up in the 20k's, and now that I've mounted
> > the repaired image, xfs_info shows the following:
> [...]
> > So that's a 6TB fs with over 24000 allocation groups of size 256MB, as
> > opposed to the mkfs default of 6 allocation groups of 1TB each. Is that
> > intentional?
>
> Not to my knowledge. Unless I'm mistaken, the filesystem was created
> while the machine was running Debian Squeeze, using whatever defaults
> were back then.

Strange... was the filesystem created small and then grown to a much
larger size via xfs_growfs? I just formatted a 1GB fs that started with
4 allocation groups and ended up with 24576 AGs (same as above) when
grown to 6TB. A rough sketch of that reproduction is at the end of this
mail.

Brian

> --
> Tapani Tarvainen
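
For reference, the reproduction mentioned above went roughly like this
(just a sketch; the image path, mount point and use of a sparse
loopback file are my own choices, not anything from the original
system):

  # create a sparse 1GB image and format it; mkfs picked 4 AGs of 256MB
  truncate -s 1G /tmp/test.img
  mkfs.xfs /tmp/test.img

  # extend the backing file to 6TB and mount the fs via loopback
  truncate -s 6T /tmp/test.img
  mount -o loop /tmp/test.img /mnt

  # grow the fs to fill the device; the AG size stays fixed at 256MB,
  # so agcount balloons to 6TB / 256MB = 24576
  xfs_growfs /mnt
  xfs_info /mnt

  umount /mnt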