Re: Raid recovery

On 13/09/18 00:24, fido ro wrote:
> Hi
> 
> My name is Madalin and I figured out how to damage my raid array.

Hi Madalin,

Firstly, please don't send personal emails - send them to the raid
mailing list - you'll get more info that way. I guess you're not
subscribed to the list, so by all means hit "reply all" to any email
responses, so they keep going to the list ...
> 
> 
> Short version of the story: I should have read more before, rather
> than after.
> 
It sounds like you stopped as soon as you hit a problem - that's good!
Provided you haven't made any cack-handed attempts to recover the
array, I think your chances of recovery are very good.

Read before, not after? Have you read the linux raid wiki? Is that
where you got my address from? Take a good look at it.

https://raid.wiki.kernel.org/index.php/Linux_Raid
> 
> Long version is like this: I have a Netgear ReadyNAS based on BTRFS.
> I had installed two 3 TB WD drives in it, and I decided to grow from
> RAID 1 (3 TB) to something bigger. In my mind that meant (as I
> discovered afterwards, by the way) RAID 10 or RAID 01, but in reality
> the NAS's X-RAID started configuring a RAID 5 when I added another
> two 4 TB WD drives.
> 
> I didn't back up any data from my previous raid configuration, and I
> started playing with the system as it was. After the resync, it was a
> RAID 5 with a total of almost 9 TB.

Okay, we have a working raid here ...
> 
> Because I had read only the Netgear documentation, I switched from
> X-RAID to Flex-RAID. I was confident I would gain more options from
> the system, to change the RAID 5 into at least a RAID 6. But it
> wasn't like that.
> 
> In my ignorance I thought that if I pulled out the last two drives I
> had inserted into the array, I would get back the initial state of my
> raid. When you finish laughing I will continue ..... OK, then I
> realized the raid changed its status from damaged to dead when I
> pulled the second drive out.
> 
Of course ...

> I pushed it back in and rebooted a few times, but with no results -
> still dead.
> 
So long as you've done nothing since this point, I think we're good.
Once raid encounters a fatal error, it marks the array as bad, and
refuses to do anything with it, precisely in order to protect your data.
That's why the array is dead when you reboot...
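
If you want to check what state the kernel currently thinks things are
in, something like the following should show it. This is only a sketch -
/dev/md0 is a guess at the array name, your NAS may number its arrays
differently:

    cat /proc/mdstat          # summary of every md array the kernel knows about
    mdadm --detail /dev/md0   # per-array detail, if the array device exists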

> I have read a lot of stuff since June, but nothing based on a moronic
> experience like mine.

If you search the linux raid list I'm sure you'll find plenty :-) It's
just that every disaster tends to be slightly different ...
> 
> I am not so good with Linux because I don't know the command line
> very well (I was an MS-DOS-trained guy years ago). I am not afraid to
> use Linux, but I have some gaps.
> 
> My question for you is: is there any chance to reassemble the raid as
> it was, using the command line?

It should be very easy. I don't know NASes, but it sounds like you can
get to a Linux command line? On the raid wiki, go to the section "When
things go wrogn", especially the bit where you get the event count from
the drives. They should all be almost the same. POST THAT INFO HERE.
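
Something along these lines will pull the event counts out. A sketch
only - the /dev/sd[abcd]3 pattern is a guess at your member partitions,
substitute whatever your NAS actually uses:

    # print each member's device name and event count from its superblock
    mdadm --examine /dev/sd[abcd]3 | grep -E 'Event|/dev/sd'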
> 
> On windows I was able to read the raid configuration and I read the
> whole content from it, but trial version of the software recovery
> programs are very expensive for a home user like me. And my most
> valuable data are pictures and some documents.
> 
Nah, you don't want expensive software. You should be able to fix
everything with a simple "force assembly" command, but as I say, I
don't know NASes. Over to the list - does anybody know this NAS and can
they help?
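
For reference, a forced assembly usually looks something like the
sketch below. Don't run it until the event counts above have been
posted and checked - and the device names here are assumptions, not
known values for this NAS:

    # stop the half-assembled array first, if the device exists
    mdadm --stop /dev/md0

    # then force-assemble from all four members; --force tells mdadm to
    # accept members whose event counts are slightly out of step
    mdadm --assemble --force /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3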

Cheers,
Wol