Building a Ceph cluster with Ubuntu 18.04 and NVMe SSDs

Hi Ceph users!

We are currently configuring our new production Ceph cluster and I have some questions regarding Ubuntu and NVMe SSDs.

Basic setup:
- Ubuntu 18.04 with HWE Kernel 5.3
- Deployment via ceph-ansible (Ceph stable "Nautilus")
- 5x Nodes with AMD EPYC 7402P CPUs
- 25Gbit/s NICs and switches for Ceph private and public network
- 4x Intel P4510 2TB NVMe SSDs (all flash) per Node

My questions:
1. Should we deploy more than one OSD per NVMe SSD? (The P4510 should have enough performance headroom to sustain e.g. 2 OSDs.)
2. Does anyone know NVMe specific Linux settings we should enable?
3. Can we use io_uring, and if so, how do we enable it? Is setting bluestore_iouring=true enough?

What I know so far:
Ad 1: My opinion is to use at least 2 OSDs per NVMe SSD, as the Intel P4510 is fast enough to serve the parallel requests. Make sure to run the latest firmware version VDV10170 -> with version VDV10131 we had massive stalls on the Ceph side!
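For what it's worth, ceph-ansible exposes this directly via its osds_per_device variable, which maps to ceph-volume's batch mode underneath. A minimal sketch (the device paths are examples and must match your hosts):

```shell
# ceph-ansible: in group_vars/osds.yml set
#   osds_per_device: 2
#
# Equivalent manual invocation on an OSD host (Nautilus ceph-volume),
# creating 2 BlueStore OSDs on each listed NVMe device:
ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1
```

Running ceph-volume by hand is only needed outside ceph-ansible; with the variable set, the playbook takes care of it.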

Ad 2: I have already enabled NVMe polling queues, which Ubuntu disables by default: I added nvme.poll_queues=1 to the kernel command line in /etc/default/grub and then verified it via /sys/block/nvme1n1/queue/io_poll. Cf. https://lore.kernel.org/linux-block/20190318222133.GA24176@localhost.localdomain/
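For anyone wanting to reproduce this, the steps look roughly like the following (nvme1n1 is just the device I checked; one poll queue was enough for my test, raise the count as needed):

```shell
# /etc/default/grub -- append the module parameter to the kernel cmdline:
#   GRUB_CMDLINE_LINUX_DEFAULT="... nvme.poll_queues=1"

# regenerate the grub config and reboot to apply
sudo update-grub
sudo reboot

# after reboot, confirm polling is enabled for the device (non-zero = on)
cat /sys/block/nvme1n1/queue/io_poll
```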

Ad 3: This commit states it should be possible to use io_uring:
https://github.com/ceph/ceph/pull/27392
This issue also shows how to set bluestore_iouring=true, but it's not clear whether any further setup is required, such as liburing:
https://github.com/axboe/liburing
A presentation from Christoph Hellwig shows the advantages:
https://www.snia.org/sites/default/files/SDC/2019/presentations/NVMe/Hellwig_Christoph_Linux_NVMe_and_Block_Layer_Status_Update.pdf
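If the PR works as described, enabling it would presumably just be a ceph.conf entry on the OSD hosts; this is an untested sketch based on the option name from the issue above (io_uring itself needs kernel >= 5.1, which the HWE 5.3 kernel satisfies):

```
[osd]
bluestore_iouring = true
```

I'd be glad if someone could confirm whether this alone is sufficient or whether liburing must be present at build/run time.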

Any help and inputs would be appreciated,
THX - Georg
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


