Re: cyrus spool on btrfs?




> On 09.09.2017 at 19:22, hw <hw@xxxxxxxx> wrote:
> 
> Mark Haney wrote:
>> On 09/08/2017 01:31 PM, hw wrote:
>>> Mark Haney wrote:
>>> 
>>> I/O is not heavy in that sense, that's why I said that's not the application.
>>> There is I/O which, as tests have shown, benefits greatly from low latency, which
>>> is where the idea to use SSDs for the relevant data has arisen from.  This I/O
>>> only involves a small amount of data and is not sustained over long periods of time.
>>> What exactly the problem is with the application being slow with spinning disks is
>>> unknown because I don't have the sources, and the maker of the application refuses
>>> to deal with the problem entirely.
>>> 
>>> Since the data requiring low latency will occupy about 5% of the available space on
>>> the SSDs and since they are large enough to hold the mail spool for about 10 years at
>>> its current rate of growth besides that data, these SSDs could be well used to hold
>>> that mail spool.
>> See, this is the kind of information that would have made this thread far shorter.  (Maybe.)  The one thing that you didn't explain is whether this application is the one /using/ the mail spool or if you're adding Cyrus to that system to be a mail server.
> 
> It was a simple question to begin with; I only wanted to know if anything speaks
> against using btrfs for a Cyrus mail spool.  There are things that speak against
> doing that with NFS, so there might be with btrfs, too.
> 
> The application doesn't use the mail spool at all; it has its own dataset.
> 
>>>>> Do you use hardware RAID with SSDs?
>>>> We do not here where I work, but that was set up LONG before I arrived.
>>> 
>>> Probably with the very expensive SSDs suited for this ...
>> Possibly, but that's somewhat irrelevant.  I've taken off-the-shelf SSDs and hardware-RAID'd them.  If they work for the hell I put them through (processing weather data), they'll work for the type of service you're saying you have.
> 
> Well, I can't very well test them with the mail spool, so I've been going
> by what I've been reading about SSDs with hardware RAID.


It really depends on the RAID controller and the SSDs.
Every RAID controller has a maximum number of IOPS it can process.
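If you want to see whether the controller, rather than the SSDs, is the ceiling, fio is the usual tool. A minimal sketch (the device name is a placeholder; random reads don't write to the disk, but point it at a scratch device to be safe). Run it once against a single SSD behind the controller and once against the whole array, then compare the IOPS numbers:

  # 4k random reads, queue depth 32, for 60 seconds
  fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 \
      --time_based --group_reporting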


Also, as pointed out, consumer SSDs have various deficiencies that make them unsuitable for enterprise use:


https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/


Enterprise SSDs also fail much more predictably. You basically get an SLA with them in the form of rated DWPD (drive writes per day) or TBW (terabytes written) endurance figures.
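To make that concrete, a back-of-the-envelope example (numbers are hypothetical, not from any particular model): a 960 GB drive rated at 1 DWPD over a 5-year warranty is good for roughly 0.96 TB x 365 x 5 ≈ 1750 TB written. You can check how much of that budget a drive has already burned with smartctl; the relevant attribute names vary by vendor:

  smartctl -A /dev/sda | grep -i -E 'wear|written|percent'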

For small amounts of highly volatile data, I recommend looking into Optane SSDs.



> 
> Well, that's a problem because when you don't want md-RAID and can't do hardware RAID,
> the only other option is ZFS, which I don't want either.  That leaves me with not using
> the SSDs at all.
> 



As for BTRFS: Red Hat dumped it (deprecated as of RHEL 7.4), so it's a SUSE/Ubuntu thing right now.
Make of that what you want ;-)

Personally, I'd prefer ZFS for SSDs, and certainly no hardware RAID underneath it. I'm not sure I'd use it on anything but FreeBSD (even though a Linux port is available and code-wise it's more or less the same).
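If you go that route, the setup is simple enough. A minimal sketch, with placeholder device names, of a ZFS mirror where ZFS handles the redundancy itself instead of a controller:

  zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
  # e.g. a separate dataset for the latency-sensitive data
  zfs create -o compression=lz4 tank/appdata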

From personal experience, it's better to ditch even the non-RAID HBA and just go with NVMe SSDs in the 2.5" drive slots (a.k.a. SFF-8639, a.k.a. the U.2 form factor).
If you have spare PCIe slots, you can also go for HHHL PCIe NVMe cards - but of course, you'd have to RAID them in software.
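With mdadm that would look something like this; a minimal sketch with placeholder device names, since there's no hardware RAID controller in front of PCIe cards:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1
  mkfs.xfs /dev/md0   # or whatever filesystem you prefer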









