Re: Best way to run LVM over multiple SW RAIDs?

>>>>> "Stuart" == Stuart D Gathman <stuart@xxxxxxxxxxx> writes:

Stuart> On Sat, 7 Dec 2019, John Stoffel wrote:
>> The biggest harm to performance here is really the RAID5, and if you
>> can instead move to RAID10 (mirror, then stripe across mirrors), then
>> you should see a performance boost.

Stuart> Yeah, that's what I do: RAID10, and use LVM to join the arrays
Stuart> together as a JBOD.  I forgot about the RAID5 bottleneck part, sorry.
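That RAID10-plus-linear-LVM layout can be sketched with mdadm and the LVM
tools; the device names and drive counts below are hypothetical:

```shell
# Build two RAID10 arrays from four NVMe drives each (hypothetical devices).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1

# Join the arrays into one volume group.  Linear allocation is the LVM
# default -- no -i/--stripes, so LVM simply concatenates them (JBOD-style).
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1
lvcreate -n data -l 100%FREE vg0
```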

Yeah, it's not ideal, and I don't know enough about the code to figure
out if it's even possible to fix that issue without major
restructuring.  

>> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
>> single thread for RAID5 is a big bottleneck.

>> I assume he wants to use LVM so he can create volume(s) larger than
>> individual RAID5 volumes, so in that case, I'd probably just build a
>> regular non-striped LVM VG holding all your RAID5 disks.  Hopefully

Stuart> Wait, that's what I suggested!

Must have missed that, sorry!  Again, let's see if the original poster
can provide more details of the setup. 
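For the record, a regular non-striped VG over several existing RAID5 arrays
would look something like this (array names are hypothetical):

```shell
# Hypothetical: /dev/md0 and /dev/md1 are existing RAID5 arrays.
pvcreate /dev/md0 /dev/md1
vgcreate vg_data /dev/md0 /dev/md1        # one VG spanning both arrays
lvcreate -n big -l 100%FREE vg_data       # LV larger than either array alone
# Later, another RAID5 can be folded in and the LV grown online
# (-r also resizes the filesystem):
#   vgextend vg_data /dev/md2 && lvextend -r -l +100%FREE vg_data/big
```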

>> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
>> though you do have the problem where a double disk failure could kill
>> your data if it happens to both halves of a mirror.

Stuart> No worse than RAID5.  In fact, better, because the 2nd fault
Stuart> always kills the RAID5, but only has a 33% or less chance of
Stuart> killing the RAID10.  (And in either case, it is usually just
Stuart> specific sectors, not the entire drive, and other manual
Stuart> recovery techniques can come into play.)
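That 33%-or-less figure follows from a quick calculation: once one drive in
a RAID10 of m two-way mirrors has died, a second random failure is fatal
only if it hits the dead drive's surviving partner, i.e. 1 of the remaining
2m-1 drives, whereas a second fault in a degraded RAID5 is always fatal.  A
back-of-envelope sketch (assuming independent, whole-drive failures):

```shell
# 2nd-fault kill chance: 1/(2m-1) for RAID10 with m mirror pairs, 1 for RAID5.
for m in 2 3 4; do
    awk -v m="$m" 'BEGIN {
        printf "%d pairs (%d drives): RAID10 %.0f%%, RAID5 100%%\n",
               m, 2*m, 100/(2*m - 1)
    }'
done
```

So a 4-drive RAID10 gives Stuart's 33%, and the odds only improve as more
mirror pairs are added.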

I don't know the failure modes of NVMe drives, but a bunch of SSDs I've
seen didn't so much fail single sectors as just up and die instantly,
without any chance of recovery.  So I worry about NVMe failure modes,
and I'd want some hot spares in the system if at all possible, because
you know a drive will fail just as you get home and stop checking
email... so having the array rebuild automatically is a big help.  If
your business can afford it.  Can it afford not to?  :-)
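Adding a hot spare so the rebuild kicks off unattended is a one-liner per
array; sharing one spare between several arrays takes a spare-group in
mdadm.conf plus a running monitor.  Device names here are hypothetical:

```shell
# Attach a hot spare; mdadm pulls it in automatically on the next failure.
mdadm --add /dev/md0 /dev/nvme8n1

# To let several arrays share one spare pool, give them the same
# spare-group in /etc/mdadm.conf and run the monitor daemon:
#   ARRAY /dev/md0 metadata=1.2 UUID=... spare-group=pool0
#   ARRAY /dev/md1 metadata=1.2 UUID=... spare-group=pool0
mdadm --monitor --scan --daemonise
```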

John


_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



