Very informative article you wrote, Mark.
IMHO, if you find yourself with a very high per-OSD core count, it may be
more logical to just pack more NVMes per host; you'd get the best
price per performance and capacity.
/Maged
On 17/01/2024 22:00, Mark Nelson wrote:
It's a little tricky. In the upstream lab we don't strictly see an
IOPS or average-latency advantage from running multiple OSDs per NVMe
drive, even with heavy parallelism, until per-OSD core counts get very
high. There does seem to be a fairly consistent tail-latency advantage
even at moderately low core counts, however. Results are here:
https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/
Specifically for jitter, there is probably an advantage to using 2
cores per OSD unless you are very CPU starved, but how much that
actually helps in practice for a typical production workload is
questionable imho. You do pay some overhead for running 2 OSDs per
NVMe as well.
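If anyone wants to try the 2-OSDs-per-NVMe layout, here is a minimal
sketch of a cephadm OSD service spec that does it. The service_id and
placement below are just placeholders, and the rotational filter assumes
your NVMes are the only non-rotational data devices on those hosts, so
adjust for your cluster:

service_type: osd
service_id: two-osds-per-nvme
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
  osds_per_device: 2

Apply it with "ceph orch apply -i osd-spec.yaml" and cephadm will carve
each matching NVMe into two LVs and deploy one OSD on each.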
Mark
On 1/17/24 12:24, Anthony D'Atri wrote:
Conventional wisdom is that with recent Ceph releases there is no
longer a clear advantage to this.
On Jan 17, 2024, at 11:56, Peter Sabaini <peter@xxxxxxxxxx> wrote:
One thing that I've heard people do, but haven't done personally, with
fast NVMes (not familiar with the IronWolf so not sure if they
qualify) is to partition them up so that they run more than one OSD
(say 2 to 4) on a single NVMe to better utilize the NVMe bandwidth.
See
See
https://ceph.com/community/bluestore-default-vs-tuned-performance-comparison/
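If you're deploying OSDs manually rather than via cephadm, ceph-volume
can do the splitting for you; a rough sketch, where /dev/nvme0n1 is
just an example device path:

ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

That carves the drive into two LVs and creates one OSD on each; bump
--osds-per-device to 4 if you want four per drive.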
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx