Re: Harddisk gone bad (longish)

> On Sun, Nov 10, 2002 at 08:55:26PM +0100, Francesco Peeters wrote:
<SNIP>
>
> I suppose one of these days someone really should write a "hard disk
> catastrophe" HOWTO.....

That'd be cool!

>
> When you have a lot of precious data on a disk that hasn't been backed
> up, the very **first** thing you should do is curse yourself for being
> twenty different kinds of fool for not having a backup system.  Get
> that out of your system, so you don't make any further mistakes.....

LOL... Done that!!!
>
> Next, get yourself a backup hard drive which is at least as big as the
> disk which is in trouble, and do a full disk-to-disk copy of the disk
> that's in trouble:
>
> dd if=/dev/hdc of=/dev/hdd bs=1k conv=sync,noerror
>
> Do this right away, because if the problem was due to hardware
> failure, you want to grab a snapshot before the disk gets any worse.

That's happening right now...
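
(Side note for anyone following along: GNU dd prints I/O statistics
when sent a USR1 signal, so assuming GNU dd and that it's the only dd
process running, something like

  kill -USR1 $(pidof dd)

from a second shell shows how far the copy has gotten.)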

>
> For experimental purposes, if you're not sure what you're doing, it's
> useful to get another spare disk, and make a second-generation copy
> from your first primary backup.  That way, you can experiment on the
> second-generation copy, and if one recovery technique doesn't work,
> you can try again with a different technique, and not have to worry
> about making any irrecoverable mistakes.

Was planning on doing that as well.... Waiting for a 2nd drive &
controller card to arrive this week... I'll then implement (hardware)
RAID1 when all data has been retrieved... (or found irreparably lost!)
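
(The second-generation copy would then just be the same dd again, from
the first backup onto the new spare, something like

  dd if=/dev/hdd of=/dev/hde bs=1k conv=sync,noerror

where /dev/hde is only my guess at where the new drive will show up.)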

>
> The first thing I would try at this point, is an "e2fsck -y" on the
> second generation backup.  See what you can save when it's all done;
> don't forget to check the lost+found directory in the root of the
> filesystem.  Sometimes files will end up there.
>
> If that doesn't work, the next steps will require a lot more expertise
> and special work.  So I'd start with that, and see how much you can
> recover from that.
>
> > Now when I try to do e2fsck /dev/hdc1 I get 'a corruption was found
> > in the superblock'. When I try e2fsck -b 8193 /dev/hdc1 it claims it
> > is not a valid superblock... The same for, for instance, 32679, and
> > so on.
>
> For a 4k filesystem, the backup block is 32768.  But please, make the
> full disk-to-disk backups first, and experiment on the backups.  That
> way, you don't need to worry about panic-induced mistakes from making
> the problem any worse.

Oops! That was a typo... I meant 32768... Anyway, it didn't work... That is
the real problem!  :-( But I didn't want to go on without a backup.
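
(For the archives, what I'll be trying on the second-generation copy is
roughly

  e2fsck -y -b 32768 -B 4096 /dev/hde1

with -B giving the 4k block size, and /dev/hde1 being wherever the
experimental copy ends up. If none of the standard backup superblock
locations work either, then

  mke2fs -n /dev/hde1

should print where the superblocks would have been placed, given the
same options as when the filesystem was created, without actually
writing anything.)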

>
> - Ted
>
> P.S.  For those people for whom backups are just too much effort,
> *please* consider using the "e2image" program to snapshot and backup
> critical filesystem metadata.  It's not a replacement for doing full
> data backups, but at least if you have an e2image dump, in the worst
> case you'll be able to recover more files if a disk failure damages
> your inode table.  The problem is that without the inode table there
> is no record of which blocks go with which files, which means that
> recovering files becomes a very, very painful manual process.  e2image
> will create a backup copy of the inode table, which even if it is not
> fully up-to-date, will be a help in trying to reconstruct data from a
> filesystem after a disk failure.  Of course, the real answer is to do
> real backups.....
>
>


Until I can afford a backup system that can effectively back up all live
data (probably going to be an OnStream drive for price/performance
reasons), would it be possible to run e2image from cron on a daily basis?
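
Something like this is what I had in mind, with the path and script
name just examples:

  #!/bin/sh
  # /etc/cron.daily/e2image-snapshot: daily filesystem metadata dump
  e2image /dev/hdc1 /var/backups/hdc1.e2i

Even if the image is slightly stale, your P.S. suggests it would still
help reconstruct things after a failure.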

TIA !

--FP



