Re: Failed RAID5 array - recovery help

On 12/09/18 15:54, Wols Lists wrote:
> On 12/09/18 00:46, Adam Goryachev wrote:
>>> There are some lessons learned here, and I have decided to rethink
>>> my storage strategy. I'm not going to do RAID5 anymore, but rather
>>> RAID10, with 4 bigger disks. That leaves one slot free in my NAS,
>>> which I'm going to use for a very large disk, as big as my whole
>>> RAID10 volume, and I will set up data replication between the
>>> RAID10 and that single disk. Not only that, I'll have another
>>> place where the data is also going to be synchronized, as an
>>> off-site backup.
>> Not a bad solution. In the past, I've done RAID10 + RAID1 with my
>> last really big disk, and used write-mostly for the single drive to
>> reduce the performance impact. RAID10 only protects you against a
>> single drive loss (though you might be lucky and lose two drives
>> and still be OK). With RAID10 + 1 you can lose any two drives, and
>> if you are lucky, you can lose three and still avoid data loss.
>> There are other advantages to using the 5th drive for "backups"
>> instead of RAID, but the disadvantage is that you will lose some
>> data between the last backup and current, and/or potentially "miss"
>> some data from the backup. Either option is valid though; it just
>> depends on your environment and risks/needs.
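
If I understand the layered setup right, with mdadm it would be
something along these lines - device names are just examples, and I
haven't tried this myself:

  # 4-disk RAID10 as the main array
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

  # RAID1 of the RAID10 plus the big disk, with the big disk marked
  # write-mostly so normal reads are served by the RAID10 side
  mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/md0 --write-mostly /dev/sdf1

The filesystem would then go on /dev/md1.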
> Any reason for not considering raid-6? That means you'll definitely
> survive losing two disks. And then I'd consider using all five slots
> for the raid, and using your large backup disk via eSATA or similar
> - if you've got a PCIe slot you're probably looking at about £50 for
> a card and docking bay. And then you can have two or three backup
> drives which can be easily rotated.
>
> (Other people might disagree with me, but if your main and backup
> volumes are formatted btrfs, you could use btrfs send to push
> updates to the backup disk, but I'd probably use btrfs on the backup
> and use snapshots and an in-place rsync. Gives me full backups for
> the cost of incrementals.)
>
> Cheers,
> Wol

A couple of reasons, yes:
First of all, losing 2 disks on a RAID-6 means that my volume can still go bad if there's a URE on any of the 3 remaining disks during the rebuild, which apparently is far from rare. With RAID-10 the risk is different: I can still lose 2 disks, as long as they are not in the same mirror pair (once the first disk fails, 3 disks remain and exactly one of them is the dead disk's mirror partner, so there's a 33% chance the second failure is the fatal one). Also, the rebuild is quicker and less likely to hit a URE, since only the surviving half of the mirror has to be read. That's why I consider RAID-10 the better option here (and also because a couple of years ago I saw a RAID-6 volume fail completely when 2 disks hit UREs during a resync. Might have been really bad luck, but well...). Maybe I'm wrong, and I'd be happy to hear how/why.

Another reason is that I like the idea of having some kind of cold backup. This is a personal NAS and there isn't a lot of data changing on the volume every day, so a nightly sync task won't affect me or anyone else performance-wise (at worst, my partner may be unhappy about the noise of the disks during the sync, but I can probably deal with that, since some of the data on the NAS is hers). On the other hand, I can make this copy use snapshots, and I'll have a way to go back in time, which is useful when I mistakenly delete a file from the main volume and don't realize it immediately: the next day it's gone from the copy too, but still available through the snapshots.
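
The nightly job would be something like the classic hardlink-snapshot
trick with rsync (paths are made up here; say the volume is
/volume1/data and the big disk is mounted at /mnt/backup):

  # each run creates a new dated tree; files that did not change are
  # hard-linked against the previous run, so they take no extra space
  TODAY=$(date +%F)
  rsync -a --link-dest=/mnt/backup/last \
      /volume1/data/ /mnt/backup/$TODAY/
  ln -sfn /mnt/backup/$TODAY /mnt/backup/last

(On the very first run "last" doesn't exist yet; rsync just warns
about the missing --link-dest directory and does a full copy.)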

As for btrfs, it is simply out of the question, because 1) I don't quite trust it :-P and 2) this isn't a regular Linux machine where I can do whatever I want. It's a Synology NAS running DSM, and I don't want to mess with it too much (even though I can get a root shell, it probably isn't a great idea to do things they didn't intend me to do).
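
For the record, I believe what Wol is describing is btrfs
send/receive. On a machine I fully controlled, and assuming the data
lives in its own subvolume, I gather it would go roughly like this
(made-up paths, untested):

  # full copy: snapshot the source read-only, then stream it over
  btrfs subvolume snapshot -r /mnt/raid/data /mnt/raid/snap1
  btrfs send /mnt/raid/snap1 | btrfs receive /mnt/backup

  # later runs only send the difference against the previous snapshot
  btrfs subvolume snapshot -r /mnt/raid/data /mnt/raid/snap2
  btrfs send -p /mnt/raid/snap1 /mnt/raid/snap2 | btrfs receive /mnt/backup

But on DSM I'd rather not go there.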

Regards,
Francois


