Re: RAID6 questions (mdadm 3.2.6/3.3.x)


 



On Jul 11, 2014, at 9:20 PM, Vlad Dobrotescu <vlad@xxxxxxxxxxxxx> wrote:

> On 11/07/2014 21:09, Chris Murphy wrote:
>> On Jul 11, 2014, at 4:41 PM, Vlad Dobrotescu <vlad@xxxxxxxxxxxxx> wrote:
>>> 6. mdadm on top of LVM2 LVs (not the other way around): would there be any issues or performance penalties?
>> You're not assured what PV the LVs are located on. So those six LVs you're using as md members might not be on six physical devices. One drive dies, you can lose the whole array. You're better off using LVM raid, or doing things conventionally by first creating the md raid set and then making the md logical device a PV.
> Thanks for the advice, it makes a lot of sense. However, this question wasn't focused on the RAID6 itself, but related to some fancy (crazy?) mirroring scheme for the Linux partition I was considering: take a LV chunk from the VG that sits on the RAID6 and mirror (md RAID1) it with a partition from the SSD I'll be using for keeping the ext4 journal for the big data partition.

Sounds a bit nutty, no offense. It's complicated, non-standard, and therefore at high risk of user-induced data loss.

It's basically raid61, which tells me you want the data always available, because raid61 is about uptime. The problem is, you're not going to get that: you've overbuilt the storage stack and haven't considered (or mentioned) other failure points like the network, the power supply, or power itself. It sounds wrongly overbuilt because the data can't possibly require this kind of uptime; chances are you're confusing raid with backups. If the data is both important and really needs to be available, build yourself a Gluster cluster.
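For reference, the conventional stacking mentioned earlier in the thread (create the md array first, then make the md device an LVM PV) is just a few commands. A rough sketch only — the member disk names (/dev/sd[b-g]), volume group name (vg0), and LV size are all hypothetical, and this needs root on real hardware:

```shell
# Create the raid6 array first, from six whole physical disks
# (hypothetical device names -- substitute your own).
mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Then put LVM on top of the md device, so every LV is
# guaranteed to sit on the redundant array.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0
```

That ordering is what removes the "which PV is my LV on" uncertainty: there is only one PV, and it is already redundant.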

> In this way I can have a functional OS even if I take all the RAID6 disks offline. Of course, this can be achieved in other ways as well.

Well, if everything you care about on this raid6 fits on an SSD partition, why not just set up an hourly rsync to the raid6 and use the SSD volume for live work? Then if you accidentally delete a file, or crash while writing to the SSD, chances are the states of the raid6 LV and the SSD volume differ and one is recoverable. If you raid1 them, any accident affects both and you're hosed.
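A minimal sketch of that approach. The mount points in the comments are hypothetical examples, not anything from your setup:

```shell
# Mirror the SSD working set onto the raid6 on a schedule.
# -a preserves ownership, permissions and timestamps; --delete makes
# the destination an exact mirror. Omit --delete if you want the
# raid6 copy to keep files you've removed from the SSD, which gives
# you extra protection against accidental deletion.
mirror_to_raid6() {
    src="$1"   # e.g. /mnt/ssd/work/  (trailing slash: copy contents)
    dst="$2"   # e.g. /mnt/raid6/backup/
    rsync -a --delete "$src" "$dst"
}
```

Wrap it in a script and run it from cron, e.g. an entry like `0 * * * * /usr/local/bin/ssd-backup.sh` for hourly runs.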

> 
> Anyhow, do you have any estimation of the speed penalty when overlaying such layers (md-md, md-lvm, …)?

No. But what you're talking about is md raid6 > lv > md raid1, so three layers.
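If you want an empirical number rather than a guess, run the same sequential-read test at each layer of the stack and compare. A rough sketch — the device paths are hypothetical, it needs root, and it only reads, so it's safe on live data:

```shell
# Compare sequential read throughput at each layer of the stack:
# the raid6 md device, the LV carved from it, and the raid1 on top.
# Device paths are hypothetical -- substitute your own. iflag=direct
# bypasses the page cache so each run measures the device, not RAM.
for dev in /dev/md0 /dev/vg0/root /dev/md1; do
    echo "== $dev =="
    dd if="$dev" of=/dev/null bs=1M count=1024 iflag=direct 2>&1 | tail -n 1
done
```

Whatever the raid1 layer reports versus the bare raid6 is your real-world layering penalty for reads; repeat with a write test on scratch space if you need write numbers.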


Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



