Re: cyrus spool on btrfs?




m.roth@xxxxxxxxx wrote:
Mark Haney wrote:
On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
<snip>

It depends, i.e. I can't tell how these SSDs would behave if large
amounts of data were written to and/or read from them over extended
periods of time, because I haven't tested that.  That isn't the
application, anyway.

If your I/O is going to be heavy (and you've not mentioned expected
traffic, so we can only go on what little we glean from your posts),
then SSDs will likely start having issues sooner than a mechanical drive
might.  (Though, YMMV.)  As I've said, we process 600 million messages a
month, on primary SSDs in a VMWare cluster, with mechanical storage for
older, archived user mail.  Archived may not be exactly the right word,
but the context should be clear.

One thing to note, which I'm aware of because I was recently spec'ing out
a Dell server: Dell, at least, offers two kinds of SSDs, one rated for
write-intensive workloads, I think it was, and one for mixed read/write.
You might dig into that.
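
The difference is mostly rated endurance (DWPD, drive writes per day).
If you want to see how hard your current workload actually wears a drive,
smartctl can show the vendor's wear counters.  A minimal sketch, assuming
a SATA SSD at /dev/sda (the attribute names vary by vendor):

  # smartmontools is in the CentOS repos: yum install smartmontools
  # Dump SMART attributes and pick out the wear-related ones.
  smartctl -a /dev/sda | grep -iE 'wear|percent.*used|total.*written'

  # Compare total lifetime writes (e.g. Total_LBAs_Written on many
  # SATA drives) against the vendor's rated TBW to estimate how much
  # endurance the workload has already consumed.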

But mdadm does, and the impact is severe.  I know there are people saying
otherwise, but I've seen the impact myself, and I definitely don't want
it on that particular server because it would likely interfere with other
services.  I don't know whether the software RAID of btrfs is any better
in that regard, but I'm seeing btrfs on SSDs being fast, and testing
with the particular application has shown a speedup by a factor of 20--30.
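
If anyone wants to reproduce that kind of comparison, fio makes it
repeatable: run the same job once per filesystem and compare the reported
IOPS and latency.  A minimal sketch, assuming the candidates are mounted
at /mnt/md and /mnt/btrfs (hypothetical paths; adjust sizes for your setup):

  # 4k random writes, roughly what a mail spool does to storage.
  for dir in /mnt/md /mnt/btrfs; do
      fio --name=spooltest --directory=$dir \
          --rw=randwrite --bs=4k --size=2g \
          --ioengine=libaio --direct=1 --iodepth=16 --numjobs=4 \
          --runtime=120 --time_based --group_reporting
  done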

Odd, we've never seen anything like that. Of course, we're not handling
the kind of mail you are... but serious scientific computing hits storage
hard, also.

I never said anything about MD RAID.  I trust that about as far as I
could throw it.  And having had 5 surgeries on my throwing shoulder
wouldn't be far.

Why? We have it all over, and have never seen a problem with it. Nor have
I, personally, as I have a RAID 1 at home.
<snip>

Try replacing a software RAID5 with a hardware RAID5.  Even with
only 4 disks, you will see an overall performance gain.  I'm guessing that
the SATA controllers they put onto the mainboards are not designed to handle
all the data --- which gets multiplied across all the disks --- and that the
PCI bus might get clogged.  There's also the CPU being burdened with the
parity calculations required for the RAID, and that load may not be shown
by tools like top, so you can be fooled easily.
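
If you want to see that burden rather than guess at it: the md parity
work runs in kernel threads named mdX_raidN, which a default top view
hides among everything else.  A rough sketch of how to watch them under
write load, assuming the array is /dev/md0 with members sdb through sde:

  # Array state and activity:
  cat /proc/mdstat

  # CPU use of the RAID5 kernel thread (thread view, batch mode):
  top -b -n 1 -H | grep -i raid5

  # Per-disk throughput, to see the write multiplication across members:
  iostat -x 1 /dev/sd[b-e]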

Graphics cards have hardware acceleration for a reason.  When was the last
time you tried to do software rendering, and what frame rates did you get? :)
Offloading the I/O to a dedicated controller gives you room for the things
you actually want to do, similar to a graphics card.
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos



