Re: Best way to run LVM over multiple SW RAIDs?

>>>>> "Stuart" == Stuart D Gathman <stuart@xxxxxxxxxxx> writes:

Stuart> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon <daniel.janzon@xxxxxxxxxxx> wrote:
>> I have a server with very high load using four NVMe SSDs and
>> therefore no HW RAID. Instead I used SW RAID with the mdadm tool.
>> Using one RAID5 volume does not work well since the driver can only
>> utilize one CPU core which spikes at 100% and harms performance.
>> Therefore I created 8 partitions on each disk, and 8 RAID5s across
>> the four disks.

>> Now I want to bring them together with LVM. If I do not use a striped
>> volume I get high performance (in expected magnitude according to disk
>> specs). But when I use a striped volume, performance drops to a
>> magnitude below. The reason I am looking for a striped setup is to

Stuart> The mdadm layer already does the striping.  So doing it again
Stuart> in the LVM layer completely screws it up.  You want plain JBOD
Stuart> (Just a Bunch Of Disks).

Umm... not really.  The problem here is more that the MD layer can't
run a single RAID5 across multiple CPU cores at the same time, which
is why he split things up the way he did.
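
Presumably the split looks something like this (device and partition
names are just an illustration, not necessarily what he actually has):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1

  (and the same again for p2 through p8, giving /dev/md1 .. /dev/md7)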

But we don't know the kernel version, the LVM version, or the OS
release, which would help us give better suggestions on what to do.
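
For example, the output of these (nothing distro-specific here) would
be enough to start with:

  uname -r               # kernel version
  lvm version            # LVM2 tools and library versions
  cat /etc/os-release    # distribution and release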

The biggest harm to performance here is really the RAID5, and if you
can instead move to RAID10 (mirror pairs, then stripe across the
mirrors) then you should see a performance boost.

As Daniel says, he's got lots of disk load, but plenty of CPU, so the
single thread for RAID5 is a big bottleneck.

I assume he wants to use LVM so he can create volume(s) larger than
the individual RAID5 arrays, so in that case I'd probably just build a
regular non-striped LVM VG holding all eight RAID5 MD devices.
Hopefully the parity is spread across all the partitions, though NVMe
drives should have enough IOPS capacity to mask the read-modify-write
(RMW) cost of RAID5 to a degree.

In any case, I'd just build it like:

  pvcreate /dev/md#              (do this for each of the 8 RAID5 MD devices)
  vgcreate datavg /dev/md[#-#]   (list all 8 RAID5 MD devices here)
  lvcreate -n "name" -L <size> datavg

And then test your performance.  Since you only have four disks, the 8
RAID5 volumes in your VG are all going to suck for small writes, but
NVMe SSDs will mask that to an extent.
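
For a rough apples-to-apples number, something like this fio run
against a test LV would do (the LV path, block size and job counts are
just placeholders, and it writes to the device, so run it before
putting real data on the LV):

  fio --name=smallwrite --filename=/dev/datavg/name --direct=1 \
      --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
      --numjobs=4 --runtime=60 --time_based --group_reporting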

If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
though you do have the problem that a double disk failure can kill
your data if it hits both halves of the same mirror.
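
A minimal sketch of that layout, letting MD do all the striping and
keeping LVM linear on top (four devices shown, device names are just
an example):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  pvcreate /dev/md0
  vgcreate datavg /dev/md0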

But, numbers talk, BS walks.  So if the original poster can provide
some details and numbers... then maybe we can help more.
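
E.g. the output of these, captured while the box is under its usual
load, would be a good starting point (iostat is in the sysstat package
on most distros):

  cat /proc/mdstat      # MD array layout and resync state
  lsblk                 # how the partitions, MDs and LVs stack up
  iostat -x 5 3         # per-device utilization and latency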

John


_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



