Re: How to use ceph-volume to create multiple OSDs per NVMe disk, and with fixed WAL/DB partition on another device?

Quoting in your message looks kind of messy, so forgive me if I'm propagating that below.

Honestly, I agree that the Optanes will give diminishing returns at best for all but the most extreme workloads (which will probably want to use NVMe-oF natively anyway).

>> This does split up the NVMe disk into 4 OSDs, and creates WAL/DB
>> partitions on the Optane drive - however, it creates 4 x 223 GB partitions
>> on the Optane (whereas I want 35 GB partitions).

Because you want to use the rest of the space on the Optane for something else?
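
If it's just about capping the per-OSD DB size, I believe the batch subcommand grew an explicit size flag at some point. Something along these lines might do it (untested, flag names from memory, device paths are just placeholders, and check whether it wants bytes or accepts a 35G-style suffix on your release):

    ceph-volume lvm batch --report \
        --osds-per-device 4 \
        --block-db-size 35G \
        --db-devices /dev/nvme1n1 \
        /dev/nvme0n1

Run it with --report first so you can see what layout it would actually create, then drop the flag once it looks right.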

>> It talks there about "osd op num shards" and "osd op num threads per
>> shard" - is there some way to set those, to achieve similar performance to,
>> say, 4 x OSDs per NVMe drive, but with only 1 x NVMe? Has anybody done any
>> testing/benchmarking on this they can share?

I'd like to see that too, since I'm curious whether this isn't still limited by per-OSD serialization. Those options have been around for years, haven't they?
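
For anyone who wants to experiment: I believe they're plain OSD options, so something like this in ceph.conf should be enough to try (names from memory, values purely illustrative; there are also _ssd/_hdd variants if memory serves):

    [osd]
        osd op num shards = 8
        osd op num threads per shard = 2

Followed by an OSD restart, since I don't think the shard count can be changed at runtime.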

I'm also curious how splitting an NVMe device into multiple OSDs could affect write amplification.

— aad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx