Re: xfs_repair of critical volume

On Sun, Nov 14, 2010 at 08:09:35PM -0800, Eli Morris wrote:
> On Nov 14, 2010, at 3:05 AM, Dave Chinner wrote:
> > On Fri, Nov 12, 2010 at 03:01:47PM -0800, Eli Morris wrote:
> >> On Nov 12, 2010, at 5:22 AM, Michael Monnerie wrote:
> >>> On Freitag, 12. November 2010 Eli Morris wrote:
> >>>> The filesystem must be pointing to files that don't exist, or
> >>>> something like that. Is there some command to remove files that
> >>>> no longer exist? I thought that xfs_repair would do that, but
> >>>> apparently not in this case.
> >>> 
> >>> The filesystem is not designed to cope with "part of the disk
> >>> contents replaced with zeroes" and will report errors. You will
> >>> have to check each file to see whether its contents are still
> >>> valid or bogus.
> > ....
> >> Let me see if I can give you and everyone else a little more
> >> information and clarify this problem somewhat. And if there is
> >> nothing practical that can be done, then OK. What I am looking for
> >> is the best PRACTICAL outcome here given our resources and if
> >> anyone has an idea that might be helpful, that would be awesome. I
> >> put practical in caps, because that is the rub in all this. We
> >> could send X to a data recovery service, but there is no money for
> >> that. We could do Y, but if it takes a couple of months to
> >> accomplish, it might be better to do Z, even though Z is riskier
> >> or deletes some amount of data, because it is cheap and only takes
> >> one day to do.
> > 
> > Well, the best thing you can do is work out where in the block
> > device the zeroed range was, and then walk the entire filesystem
> > running xfs_bmap on every file to work out where their physical
> > extents are. i.e. build a physical block map of the good and bad
> > regions, then find what files have bits in the bad regions.
> > I've seen this done before with a perl script, and it shouldn't
> > take more than a few hours to write and run....
> 
> I think that's a really good suggestion. I was thinking along
> those same lines myself. I understand how I would find where the
> files are located using xfs_bmap. Do you know which command I
> would use to find where the 'bad region' is located, so I can
> compare them to the file locations?

There isn't a command to find the 'bad region'. The bad region(s)
need to be worked out based on the storage geometry. e.g. if you had
a linear concat of 3 luns like so:

	lun		logical offset		length
	 0		     0GB		500GB
	 1		   500GB		500GB
	 2		  1000GB		500GB

If you then lost lun 1, your bad region is from 500GB to 1000GB, and
it's easy to map. However, if you have a RAID5/6 of those luns,
it gets a whole lot more complex because you need to know how the
RAID layout works (e.g. left-asymmetric) to work out where all the
parity is stored for each stripe and hence which disk contains data.
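
For the linear concat case it really is just arithmetic. A rough
sketch (python rather than perl, and the lun table and zeroed lun
index are just the hypothetical numbers from the example above, so
substitute your real geometry):

#!/usr/bin/env python
# Sketch only: compute the zeroed byte range for a linear concat.
# The lun table and zeroed lun index below are assumptions taken
# from the 3 x 500GB example - fill in your real layout.
GB = 1024 ** 3

# (logical offset, length) of each lun in the concat, in bytes
luns = [(0 * GB, 500 * GB), (500 * GB, 500 * GB), (1000 * GB, 500 * GB)]
zeroed_lun = 1

bad_start = luns[zeroed_lun][0]
bad_end = bad_start + luns[zeroed_lun][1]
print("bad region: bytes %d to %d" % (bad_start, bad_end))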

I'm not sure what your layout is, but you should be able to
calculate the bad regions specifically from the geometry of the
storage and your knowledge of which lun got zeroed....
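
Once you have the bad byte range(s), the walk itself is mechanical.
Here's an untested python sketch of the whole thing; BAD_REGIONS is
the bit you have to fill in from your geometry, and it relies on
xfs_bmap reporting extents in 512-byte basic blocks from the start
of the filesystem's data device (i.e. it assumes the filesystem
starts at offset zero of the concat):

#!/usr/bin/env python
# Untested sketch: walk a mounted XFS filesystem and print every
# file with an extent overlapping a bad region. BAD_REGIONS holds
# byte ranges on the data device - the value below is the lun 1
# example, not your real layout.
import os, re, subprocess, sys

GB = 1024 ** 3
BAD_REGIONS = [(500 * GB, 1000 * GB)]     # assumption: lun 1 zeroed

# xfs_bmap output lines look like "  0: [0..255]: 1024..1279" with
# values in 512-byte basic blocks; holes print as "hole" and are
# skipped because they don't match the regex.
EXTENT = re.compile(r'^\s*\d+:\s*\[\d+\.\.\d+\]:\s*(\d+)\.\.(\d+)')

def extents(path):
    out = subprocess.check_output(['xfs_bmap', path])
    for line in out.decode().splitlines():
        m = EXTENT.match(line)
        if m:
            # convert basic blocks to a byte range, end exclusive
            yield int(m.group(1)) * 512, (int(m.group(2)) + 1) * 512

def damaged(path):
    return any(s < be and e > bs
               for s, e in extents(path)
               for bs, be in BAD_REGIONS)

for root, dirs, files in os.walk(sys.argv[1]):
    for name in files:
        path = os.path.join(root, name)
        try:
            if damaged(path):
                print(path)
        except (subprocess.CalledProcessError, OSError):
            sys.stderr.write('xfs_bmap failed: %s\n' % path)

Point it at the mount point; anything it prints has at least one
extent inside a zeroed range and needs to be checked or restored.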

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

