Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

> 
> The truth is that:
> - hdd are too slow for ceph, the first time you need to do a rebalance or
> similar you will discover...

Depends on the needs.  For cold storage, or for sequential use-cases that aren't performance-sensitive, HDDs can still fit; one can't say "too slow" without context.  In Marco's case, I wonder how the results might differ with numjobs=1 -- with a value of 4 as reported, it seems to me the drive will be seeking an awful lot.  Mind you, many Ceph multi-client workloads exhibit the "IO Blender" effect, where they present to the drives as random, but this FIO job may not be entirely indicative.
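
For reference, a minimal fio invocation along the lines of what I mean -- the
parameters are illustrative, not Marco's actual job, and it is destructive if
pointed at a raw device:

    # scratch device only: direct random writes destroy data on /dev/sdX
    fio --name=hdd-seek-test --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=16 --numjobs=1 --runtime=60 --time_based \
        --filename=/dev/sdX --group_reporting

Run it once with numjobs=1 and once with numjobs=4; the gap should show how
much of the HDD result is just head seeks.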

If you have to expand just to get more IOPs, that's a different story.

> - if you want to use hdds do a raid with your controller and use the
> controller BBU cache (do not consider controllers with hdd cache), and
> present the raid as one ceph disk.

Take care regarding OSD and PG counts with that strategy.  Plus, Ceph does replication, so replication under the OSD layer can be ... gratuitous.  
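To put rough numbers on the PG point (mine, not Marco's): with the usual
target of ~100 PG replicas per OSD, twelve HDDs as twelve OSDs carry on the
order of 12 * 100 / 3 = 400 PGs at size=3, while the same twelve drives
behind one RAID volume are a single OSD carrying ~33 -- far less parallelism.
Worth checking what the cluster actually ends up with:

    ceph osd df tree                 # PGs and utilization per OSD
    ceph osd pool autoscale-status   # autoscaler's view of pg_num per pool
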

> - enabling single hdd write cache (that is not battery protected) is far
> worse than enabling controller cache (which I assume is always protected by
> BBU)

There are plenty of RoC HBAs out there without cache RAM or BBU/supercap, and also ones with cache RAM but without BBU/supercap.  These often default to writethrough caching and arguably don't have much or any net benefit.
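
If anyone wants to check what a given drive is actually doing, the volatile
write cache is easy to query -- device paths here are placeholders:

    hdparm -W /dev/sdX             # SATA: report (or set) drive write cache
    sdparm --get WCE /dev/sdX      # SAS: the WCE mode page bit
    smartctl -g wcache /dev/sdX    # smartmontools, works on either transport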

> - anyway the best thing for ceph is to use nvme disks.

I wouldn't disagree, but it's not entirely cut and dried.  Notably the cost and hassle of an RoC HBA, cache, BBU/supercap, additional monitoring, replacement ...  See my post a few years back about reasons I don't like RoC HBAs.  Go with a plain, non-RoC HBA and the savings can help justify going with SATA SSDs at a minimum.

> 
> Mario
> 
> On Thu, 6 Apr 2023 at 13:40, Marco Gaiarin <gaio@xxxxxxxxxxxxxxxxx> wrote:
> 
>> 
>> We are testing an experimental Ceph cluster with the server and controller
>> named in the subject.
>> 
>> The controller does not have an HBA mode, only a 'NonRAID' mode, which is a
>> sort of 'auto RAID0' configuration.
>> 
>> We are using SATA SSDs (MICRON MTFDDAK480TDT) that perform very well,
>> and SAS HDDs (SEAGATE ST8000NM014A) that perform very badly
>> (in particular, very low IOPS).
>> 
>> 
>> Are there any hints for disk/controller configuration/optimization?
>> 
>> 
>> Thanks.
>> 
>> --
>>  I believe in chemistry as much as Julius Caesar believed in chance...
>>  it's fine by me as long as it doesn't concern me :)   (Emanuele Pucciarelli)
>> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


