Re: looking for RAID 1+0 setup instructions?




On Tue, Sep 1, 2009 at 11:09 AM, Chan Chung Hang
Christopher<christopher.chan@xxxxxxxxxxxxxxx> wrote:
>
>>>>> I would NOT do that. You should let the md layer handle all things RAID
>>>>> and let lvm do just volume management.
>>>>
>>>> You're under the assumption that they are two different systems.
>>>
>>> You're under the assumption that they are not.
>>
>> http://en.m.wikipedia.org/wiki/Device_mapper
>>
>> If you want I can forward LXR references to MD and LVM into the device
>> mapper code or LKML references that talk about rewriting MD and LVM
>> for device mapper.
>
> md can make use of dm to get devices for its use, but it certainly does
> not just ask dm to create a raid1 device. md does the actual RAID work
> itself. Not dm.

Actually I am going to eat crow on this.

While device mapper has support for "fake raid" devices and managing
RAID under those, the actual kernel RAID modules are still maintained
under linux-raid. There is an ongoing effort to bring these into
device mapper, but it isn't there yet.

>>>> Md RAID and LVM are both interfaces to the device mapper system which
>>>> handles the LBA translation, duplication and parity calculation.
>>>
>>> Are they? Since when were md and dm the same thing? dm was added after md
>>> had had a long presence in the Linux kernel...like since Linux 2.0.
>>>
>>
>> Both MD RAID and LVM were rewritten to use the device mapper interface
>> to mapped block devices back around the arrival of 2.6.
>
> That does not equate to md and dm being the same thing. Like you say,
> 'TO USE' dm. When did that mean they are the same thing?

As I stated above I was wrong here.

>>>> I have said it before, but I'll say it again, how much I wish md RAID
>>>> and LVM would merge to provide a single interface for creation of
>>>> volume groups that support different RAID levels.
>>>
>>> Good luck with that. If key Linux developers diss the zfs approach and
>>> vouch for the multi-layer approach, I do not ever see md and dm
>>> merging.
>>
>> I'm not talking ZFS, I'm not talking about merging the file system,
>> just the RAID and logical volume manager which could make designing
>> installers and managing systems simpler.
>
> Good luck taking Neil Brown out then. http://lwn.net/Articles/169142/
> and http://lwn.net/Articles/169140/
>
> Get rid of Neil Brown and md will disappear. I think.

People change over time, and if a convincing argument can be made for
why device mapper and linux-raid should merge code, I'm sure Neil
would reconsider his stance.

>>>>> To create a raid1+0 array, you first create the mirrors and then you
>>>>> create a striped array that consists of the mirror devices. There is
>>>>> another raid10 module that does its own thing with regard to
>>>>> 'raid10'; it is not supported by the installer and does not necessarily
>>>>> behave like raid1+0.
>>>>
>>>> Problem is the install program doesn't support setting up RAID10 or
>>>> layered MD devices.
>>>
>>> Oh? I have worked around it before even in the RH9 days. Just go into
>>> the shell (Hit F2), create what you want, go back to the installer.
>>> Are you so sure that anaconda does not support creating layered md
>>> devices?

I tested it and it doesn't work/isn't supported.


>>> BTW, why are you talking about md devices now? I thought you said md
>>> and dm are the same?
>>
>> You know what, let me try just that today, I have a new install to do,
>> so I'll try pre-creating a RAID10 on install and report back. First
>> I'll try layered MD devices and then I'll try creating a RAID10 md
>> device and we'll see if it can even boot off them.
>
> Let me just point out that I never said you can boot off a raid1+0
> device. I only said that you can create a raid1+0 device at install
> time. /boot will have to be on a raid1 device. The raid1+0 device can be
> used for other filesystems including root or as a physical volume.
> Forget raid10, that module is not even available at install time with
> Centos 4 IIRC. Not sure about Centos 5.

My tests had a separate RAID1 for /boot to take the whole question of
booting off RAID10 out of the picture.
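For reference, the layered setup described above (mirrors first, then a
stripe across them, with a separate mirror for /boot) can be sketched
with mdadm. The device names and partition layout here are examples, not
taken from anyone's actual system:

```shell
# Create two RAID1 mirrors (example devices; adjust to your disks)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

# Stripe a RAID0 across the two mirrors to get RAID1+0
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Small separate RAID1 for /boot, since the bootloader can't read the stripe
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```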

The problem I had with pre-setting the layered MD devices for anaconda:
while I was able to do that after a couple of reboots to get it to see
the partitioning, anaconda didn't actually start the arrays until
further into the install process. It therefore couldn't see the nested
array, and starting the arrays manually didn't help because anaconda
wouldn't re-scan the devices.
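The manual start I attempted looked roughly like this, run from the
installer's shell on the second console (device names are again just
examples):

```shell
# Assemble the two mirrors first, then the nested stripe on top of them
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2
mdadm --assemble /dev/md1 /dev/sdc2 /dev/sdd2
mdadm --assemble /dev/md2 /dev/md0 /dev/md1

# Verify all three arrays are active before returning to the installer
cat /proc/mdstat
```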

>>>> I would definitely avoid layered MD devices as it's more complicated
>>>> to resolve disk failures.
>>>
>>> Huh?
>>>
>>> I do not see what part of 'cat /proc/mdstat' will confuse you. It will
>>> always report which md device had a problem and it will report which
>>> device, be they md devices (rare) or disks.
>>
>> Having a complex setup is always more error prone than a simpler one.
>> Always.
>
> -_-
>
> Both are still multilayered...just different codepaths/tech. I do not
> see how lvm is simpler than md.

Well, running 2 layers of MD is not as trivial to set up and maintain as
1 layer of MD and 1 layer of LVM.

>>>> In my tests an LVM striped across two RAID1 devices gave the exact
>>>> same performance as a RAID10, but it gave the added benefit of
>>>> creating LVs with varying stripe segment sizes which is great for
>>>> varying workloads.
>>>
>>> Now that is complicating things. Is the problem in the dm layer or in
>>> the md layer...yada, yada
>>
>> Not really: have multiple software or hardware RAID1s, make a VG out of
>> them, then create LVs. One doesn't have to do anything special if it
>> isn't needed, but it's there and simple to do if you need it. Try
>> changing the segment size of an existing software or hardware array
>> when it's already set up.
>
> Yeah, using lvm to stripe is certainly more convenient.
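The MD-plus-LVM approach described above can be sketched as follows; the
VG/LV names, sizes, and stripe sizes are illustrative only:

```shell
# Turn two existing RAID1 arrays into LVM physical volumes
pvcreate /dev/md0 /dev/md1

# One volume group spanning both mirrors
vgcreate vg0 /dev/md0 /dev/md1

# Stripe each LV across both PVs: -i is the stripe count, -I the stripe
# size in KB, so different LVs can use different segment sizes
lvcreate -n data -L 100G -i 2 -I 64 vg0
lvcreate -n logs -L 20G -i 2 -I 256 vg0
```

This is where the flexibility comes from: the stripe size is a property
of each LV, not of the underlying arrays, so it can vary per workload.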
>
>> You know, you really are an arrogant person who doesn't tolerate
>> anyone disagreeing with them. You are the embodiment of everything
>> people talk about when they talk about the Linux community's elitist
>> attitude, and I wish you would make at least a small attempt to change
>> your attitude.
>
> How have I been elitist? Did I tell you to get lost like elites like to
> do? Did I snub you or something? Only you can say that I made
> assumptions and not you? ???

Maybe I'm too thin-skinned, but the tone of your earlier posts came
across as a little smug. Then again, it may just be me being too
sensitive; your latest post didn't come across that way.

-Ross
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
