Re: Bad workloads for RAID0?

On 2022-02-10 11:11 a.m., Matti Pulkkinen wrote:
> TL;DR are there particular workloads that suffer from having to access
> a RAID0 array?
>
> I've currently got my /home partition in a BTRFS RAID0 array with two 1
> TB mechanical drives, and I'm considering getting SSDs for /home
> instead. I could get one 2 TB SSD and be happy with it, but I could
> instead get two 1 TB SSDs and make a RAID0 array again. The latter
> option would of course get me better overall throughput, but I'm
> wondering whether there are workloads that might suffer from being run
> from a RAID0 array vs. just running on a "bare" disk.

Read the SSD reviews before picking one. There is quite a lot of variation in burst and sustained write speeds, number of rewrites before failure, and so on. There is even one out there that develops performance issues when you set its stupid flashy RGB lights to a particular colour (!!!).
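If you want to see how big the gap between burst and sustained writes is on a drive you already own, a tool like fio makes it obvious. A rough sketch (the file path and size are placeholders; point it at a scratch file, never a raw device):

  # sequential write, sized well past any on-drive cache
  fio --name=sustained --rw=write --bs=1M --size=16G --direct=1 \
      --end_fsync=1 --filename=/mnt/scratch/fio-testfile

Watch the bandwidth over the run: a DRAM-less or small-cache drive typically starts fast and then drops off sharply once its cache fills.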

SSDs have no appreciable seek time and much faster read rates than spinning rust. Depending on how much onboard RAM cache they provide (some provide none), you may also see considerably better burst write speed, although sustained write speeds are generally no better than a disk's. So unless you are doing something that requires sustained, intense writes, moving to an SSD is a no-brainer.

I would not bother with RAID, as eliminating seek time alone will speed up virtually any app. Go with a single SSD: unless you are writing many TB per month, a single SSD will probably also outlast your spinning disks, and it uses less power and is quieter. IMHO, LVM and/or MD add a lot of extra, unnecessary complexity that is more useful on larger servers (at least dozens of disks) and buys you very little on a small server or personal workstation. If drive failure is a concern (remember that RAID-0 gives you no redundancy), get a second SSD and run the pair as a BTRFS or ZFS RAID-1 set.
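For what it's worth, a two-SSD BTRFS RAID-1 is a one-liner at mkfs time. A rough sketch, with the device names and mount point as placeholders, so don't copy it blindly:

  # mirror both data (-d) and metadata (-m) across the two SSDs
  mkfs.btrfs -L home -d raid1 -m raid1 /dev/sdX /dev/sdY
  mount /dev/sdX /home
  btrfs filesystem usage /home   # confirm the Data/Metadata RAID1 profiles

Mounting either member device brings up the whole array, and the usage output will show the Data and Metadata profiles so you can confirm the mirroring took effect.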

P.S.: Why are you using a RAID-0 array in the first place? You get no redundancy, extra software complexity, only modest throughput gains for most workloads, and a much higher chance of losing everything to a single drive failure. RAID-0 is generally used for things like very short-lived DB caches and not much else. If you have a hardware RAID controller, trash it: its IOPS are inferior to the software RAID already in the kernel and filesystem logic, and rebuild times with a controller are generally so bad that, at the drive sizes sold today, you have excellent odds of hitting a second drive failure mid-rebuild.

--

John Mellor
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure


