Re: NVMe's

On 23/09/2020 10:54, Marc Roos wrote:
> Depends on your expected load, no? I have already read here numerous times
> that OSDs cannot keep up with NVMes, which is why people put 2 OSDs
> on a single NVMe. So on a busy node you would probably run out of cores?
> (But better verify this with someone who has an NVMe cluster ;))


Did you? I have just started thinking about this idea too, as some devices can deliver about twice the performance of a single ceph-osd.

How did they do it?

My idea is to create a new bucket type under host and put the two LVs from each OSD's VG into that new bucket. The rules stay the same (different host), so redundancy won't be affected, but doubling the number of ceph-osd daemons can squeeze a bit more IOPS out of the backend devices, at the expense of doubling the RocksDB footprint (leaving less space for payload) and using more cores. A rough sketch of the commands is below.
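To make this concrete, here is roughly what I have in mind (the device, the VG/LV names and the "nvme-split" type name are only placeholders, and I have not tried any of this yet):

# carve two LVs out of the per-device VG
pvcreate /dev/nvme0n1
vgcreate ceph-nvme0 /dev/nvme0n1
lvcreate -l 50%VG -n osd-a ceph-nvme0
lvcreate -l 100%FREE -n osd-b ceph-nvme0

# one ceph-osd per LV, so two daemons share the device
ceph-volume lvm create --data ceph-nvme0/osd-a
ceph-volume lvm create --data ceph-nvme0/osd-b

# add the extra bucket type by editing the CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
#   in the "types" section add e.g. "type 1 nvme-split" just above "host"
#   (renumbering the higher types), then group each pair of OSDs into an
#   nvme-split bucket under its host
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

As long as the replication rule still does "step chooseleaf firstn 0 type host", two copies can never land on the same device, so redundancy should indeed stay the same.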

And I really want to hear all the bad things about this setup before I try it.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


