Re: ceph-osd iodepth for high-performance SSD OSDs

No, we have stuck with Octopus for the moment.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Stefan Kooman <stefan@xxxxxx> 
Sent: Wednesday, December 1, 2021 6:05 PM
To: Frank Schilder <frans@xxxxxx>; Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
Subject: Re:  Re: ceph-osd iodepth for high-performance SSD OSDs


On 12/1/21 11:19, Frank Schilder wrote:
> Hi Szabo,
>
> no, I didn't. I deployed 4 OSDs per drive and get maybe 25-50% of their performance out. The kv_sync thread is the bottleneck.

Are you running Pacific? Have you tried RocksDB sharding [1]?
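
For reference, resharding an existing OSD is an offline operation done with ceph-bluestore-tool. A rough sketch, assuming an OSD at the default data path; the sharding spec below is the Pacific default quoted in [1], and <osd-id> is a placeholder:

    # resharding must be done with the OSD stopped
    systemctl stop ceph-osd@<osd-id>

    ceph-bluestore-tool \
      --path /var/lib/ceph/osd/ceph-<osd-id> \
      --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
      reshard

    systemctl start ceph-osd@<osd-id>

New OSDs created on Pacific already get a sharded layout by default (controlled by bluestore_rocksdb_cfs); the reshard step is only needed for OSDs created on older releases.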

I haven't tried this myself yet, but I'm planning on testing Ceph with SPDK [2].
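
As far as I can tell from [2], the SPDK route amounts to binding the NVMe device to a userspace driver with SPDK's setup script and then pointing BlueStore directly at it. A minimal sketch, with placeholder values; the device selector is either a PCIe address or the drive serial number depending on the Ceph release, so check [2] for the exact syntax:

    # bind the NVMe device to vfio/uio for userspace access
    ./scripts/setup.sh

    # ceph.conf fragment for the OSD, pointing BlueStore at the SPDK device
    [osd.0]
    bluestore_block_path = "spdk:trtype:PCIe traddr:0000:01:00.0"

If I read [2] right, the DB and WAL should also live on the SPDK-managed device, otherwise that I/O falls back to the kernel driver.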

Gr. Stefan

[1]:
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding

[2]:
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#spdk-usage
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



