addendum: was Re: recovering data on a failed raid-0 installation

OK, Guy and others.

This is a follow-up to the case I am still trying to solve.

Synopsis:
the general consensus is that RAID0 writes data in a striped fashion.

However, the test case I have here doesn't appear to operate in the manner 
described above. What was observed was this: on /dev/md0 (while watching 
drive activity for both hda and hdb), hda was active until it filled, at 
which point data spanned over to hdb. In other words, the data was written 
in a linear, not striped, manner.
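For reference, here is how a striped RAID0 would normally be expected to 
place data, sketched as a small shell function. The 64 KiB chunk size and 
2-member count are assumptions (64 KiB is mdadm's usual default); this is 
just the layout arithmetic, not anything read from the actual array:

```shell
# Map a logical byte offset on the array to (member disk, offset on that
# disk), assuming classic RAID0 striping: consecutive chunks alternate
# between members. Under the linear behavior observed above, by contrast,
# every offset up to the size of hda2 would map to disk 0.
CHUNK=$((64 * 1024))   # assumed chunk size
NDISKS=2

map_offset() {
    off=$1
    chunk_no=$((off / CHUNK))                          # which chunk overall
    disk=$((chunk_no % NDISKS))                        # which member it hits
    disk_off=$(( (chunk_no / NDISKS) * CHUNK + off % CHUNK ))
    echo "$disk $disk_off"
}

map_offset 0        # chunk 0 -> prints "0 0"
map_offset 65536    # chunk 1 -> prints "1 0"
map_offset 131072   # chunk 2 -> prints "0 65536"
```

If the array really striped, a file larger than one chunk would land on 
both drives; the fill-hda-first behavior observed here is what md's linear 
personality does, which is why this looks worth double-checking.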

Given this behavior (as observed), it stands to reason that the data on the 
first of the two members of this "raid" should be recoverable, if only we 
could "trick" the array into letting us mount it without its second member. 
At this point we are assuming that the data on drive 2 (hdb) is not 
recoverable.

In a scientific fashion, assuming that the observed behavior is correct, how 
would one go about recovering data from the first member without the second 
being present? I assume we will have to use mdadm in such a way as to trick 
it into thinking it is doing something that it is not. I invite anyone here 
to set up a similar testing environment to confirm these results.
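One way the "trick" might be attempted, sketched below as a best-guess 
outline and not a tested recipe: stand a sparse file in for the dead second 
member, rebuild the array over the survivor plus the stand-in, and attempt 
a read-only XFS mount. Device names are the ones from this thread; the 1G 
size is a placeholder (the real value would come from the old hdb2 size, 
e.g. blockdev --getsize64):

```shell
# 1. Create a sparse file the same size as the dead partition to act as a
#    stand-in for hdb2. (1G is a placeholder size.)
truncate -s 1G /tmp/hdb2-standin.img

# 2. The remaining steps need root and the real disks, so they are shown
#    here as comments only:
#      losetup /dev/loop0 /tmp/hdb2-standin.img
#      mdadm -C /dev/md0 -l0 -n2 /dev/hda2 /dev/loop0
#      mount -t xfs -o ro,norecovery /dev/md0 /mnt/rescue

ls -l /tmp/hdb2-standin.img   # sparse: large apparent size, ~0 blocks used
rm -f /tmp/hdb2-standin.img
```

If the writes really were linear, everything that lived within hda2 should 
be readable through such a mount; if they were actually striped, every 
other chunk would be garbage and at best only small files would survive.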

Drives: 2 identical IDE drives (same make/model)
OS: SUSE 9.3

P.S. I have heard all the "naysayer commentary" so please, keep it to USEFUL 
information only. Thanks.

On Tuesday 28 March 2006 22:26, you wrote:
> RAID0 uses all disks evenly (all 2 in your case).  I don't see how you can
> recover from a drive failure with a RAID0.  Never use RAID0 unless you are
> willing to lose all the data!
>
> Are you sure the second disk is dead?  Have you done a read test on the
> disk?  dd works well for read testing.  Try this:
> dd if=/dev/hdb2 of=/dev/null bs=64k
> or
> dd if=/dev/hdb of=/dev/null bs=64k
>
> Guy
>
> } -----Original Message-----
> } From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> } owner@xxxxxxxxxxxxxxx] On Behalf Of Technomage
> } Sent: Wednesday, March 29, 2006 12:09 AM
> } To: linux-raid@xxxxxxxxxxxxxxx
> } Subject: recovering data on a failed raid-0 installation
> }
> } ok,
> } here's the situation in a nutshell.
> }
> } one of the 2 HD's in a linux raid-0 installation has failed.
> }
> } Fortunately, or otherwise, it was NOT the primary HD.
> }
> } problem is, I need to recover data from the first drive but appear to be
> } unable to do so because the raid is not complete. the second drive only
> } had
> } 193 MB written to it and I am fairly certain that the data I would like
> } to recover is NOT on that drive.
> }
> } can anyone offer any solutions to this?
> }
> } the second HD is not usable (heat related failure issues).
> }
> } The filesystem used on the md0 partition (under mdadm) was xfs. now I
> } have tried the xfs_check and xfs_repair tools and they are not helpful
> } at this point.
> }
> } The developer (of mdadm) suggested I use the following commands in an
> } attempt
> } to recover:
> }
> }   mdadm -C /dev/md0 -l0 -n2 /dev/......
> }   fsck -n /dev/md0
> }
> } However, the second one was a no go.
> }
> } I am stumped as to how to proceed here. I need the data off the first
> } drive,
> } but do not appear to have any way (other than using cat to see it) to get
> } at
> } it.
> }
> } some help would be greatly appreciated.
> }
> } technomage
> }
> } p.s. here is the original response sent back to me from the developer of
> } mdadm:
> } ***************************
> } Re: should have been more explicit here -> Re: need some help <URGENT!>
> } From: Neil Brown <neilb@xxxxxxx>
> } To: Technomage <technomage-hawke@xxxxxxx>
> } Date: Sunday 22:01:45
> } On Sunday March 26, technomage-hawke@xxxxxxx wrote:
> } > ok,
> } >
> } > you gave me more info than some local to that mentioned e-mail list.
> } >
> } > ok, the vast majority of the data I need to recover is on /dev/hda
> } > and /dev/hdb only has 193 MB and is probably irrelevant.
> } >
> } > can you help me with this?
> } > can you baby me through this? I really need to recover this data (if
> } > at all possible).
> }
> } Not really, and certainly not now (I have to go out).
> } I have already made 2 suggestions:
> }   mail linux-raid@xxxxxxxxxxxxxxx
> } and
> }   mdadm -C /dev/md0 -l0 -n2 /dev/......
> }   fsck -n /dev/md0
> }
> } try one of those.
> }
> } NeilBrown
> }
> } >
> } > the friend of mine that this actually happened to is on the phone,
> } > begging me and grovelling before the gods of linux in order to have
> } > this fixed. I have setup an identical test situation here.
> } >
> } > the important data is on drive 1 and drive 2 is mostly irrelevant.
> } > is there any way to convince raid-0 to truncate to the end of drive 1
> } > and allow me to get whatever data I can off? btw, the filesystem that
> } > was formatted was xfs (for linux) on md0.
> } >
> } > if you have questions, please do not hesitate to ask.
> } >
> } > thank you.
> } >
> } > p.s. real name here is Eric.
> } >
> } >
> } > On Sunday 26 March 2006 21:33, you wrote:
> } > > On Sunday March 26, technomage-hawke@xxxxxxx wrote:
> } > >
> } > > With a name like "Technomage" and a vague subject "need some help
> } > > <URGENT>", I very nearly discarded your email assuming it was spam!
> } > >
> } > > Questions like this are best sent to linux-raid@xxxxxxxxxxxxxxxx
> } > >
> } > > If one drive in a raid0 has failed non-recoverably, then half your
> } > > data is gone, so you are out of luck.
> } > >
> } > > Your best bet would be to recreate the raid0 in exactly the same
> } > > configuration as before, and see if you can find the data there.
> } > > e.g.
> } > >    mdadm -C /dev/md0 -l0 -n2 /dev/hda2 /dev/hdb2
> } > >   fsck -n /dev/md0
> } > >
> } > > or something like that.
> } > >
> } > > NeilBrown
> } > >
> } > > I recently ran into a problem after an install using mdadm. the
> } > > software raid-0 environment suffered a failure after a HD in the
> } > > system failed due to thermal run-away.
> } > > >
> } > > > the setup goes like this:
> } > > >
> } > > > /dev/hda has:
> } > > /dev/hda1 -> boot (512 MB)
> } > > > /dev/hda2 -> partition 1 (linux raid autodetect)
> } > > >
> } > > > /dev/hdb has:
> } > > > /dev/hdb1 -> swap (512 MB)
> } > > > /dev/hdb2 -> partition 2 (linux raid autodetect)
> } > > >
> } > > /dev/hdb is the drive that failed. according to a drive imager,
> } > > only 129 MB of data was actually written to the second raid
> } > > partition (in serial).
> } > > unfortunately, without it, I cannot recover any data off of the
> } > > first HD and I would like very much to do so. Some of this data is
> } > > for my work as a forensics examiner.
> } > > >
> } > > I am fairly certain that the data I need to recover is on /dev/hda
> } > > but so far, I have been unable to read the data in any meaningful
> } > > way (except by using cat piped through less to see if the data is,
> } > > in fact, readable).
> } > > >
> } > > > can you help?
> } > > >
> } > > > thank you.
> } -
> } To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> } the body of a message to majordomo@xxxxxxxxxxxxxxx
> } More majordomo info at  http://vger.kernel.org/majordomo-info.html
