Re: ceph cluster iops low

Hi,

Your SSD is a "desktop" SSD, not an "enterprise" SSD; see [1].
Desktop SSDs are mostly not suitable for Ceph.


[1] https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21
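If you want to verify this on your own hardware, the article in [1] suggests measuring single-job, queue-depth-1 sync write IOPS with fio, which approximates Ceph's journal/DB write pattern. A sketch (the file path is just an example; pointing -filename at a raw device bypasses the filesystem but destroys its data, so only do that on a spare disk):

```shell
# QD=1 sync 4k write test, as described in [1].
# /tmp/fio.test is a scratch file (example path); use a spare raw
# device instead to take the filesystem out of the picture.
fio -ioengine=libaio -direct=1 -sync=1 -name=test -bs=4k -iodepth=1 \
    -rw=write -runtime=30 -size=1G -filename=/tmp/fio.test
```

Consumer drives without power-loss-protection capacitors typically collapse to a few hundred IOPS on this test, while enterprise drives sustain tens of thousands.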

k

> On 25 Jan 2023, at 05:35, petersun@xxxxxxxxxxxx wrote:
> 
> Hi Mark,
> Thanks for your response, it helps!
> Our Ceph cluster uses Samsung 870 EVO SSDs, all backed by NVMe drives: 12 SSDs to 2 NVMe drives per storage node, with each 4 TB SSD backed by a 283 GB NVMe LVM partition for its DB.
> Cluster throughput is now only 300 MB/s write and around 5K IOPS. I can see NVMe drive utilization over 95% in 'iostat'. Will the NVMe drives quickly become a bottleneck if the cluster gets a lot of IO?
> I have read the top article about pinning OSDs to CPU cores. However, I could only find a script called pincpu on GitHub to automate allocating CPU cores to OSDs, and it does not seem to work for me. Do you have any tool or official instructions that would guide me through testing this?
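I'm not aware of an official Ceph tool for this, but as a rough sketch you can pin a running OSD by hand with taskset (osd.0 and the core list 0-3 are placeholders; on a NUMA system, pick cores on the same node as the OSD's NVMe device, which you can check with lstopo or /sys/block/nvme0n1/device/numa_node):

```shell
# Find the main PID of the osd.0 systemd unit (unit name assumes a
# package-based deployment; cephadm runs OSDs in containers instead).
pid=$(systemctl show -p MainPID --value ceph-osd@0.service)

# Pin that OSD to cores 0-3 (example range), then print the
# affinity back to verify it took effect.
taskset -cp 0-3 "$pid"
taskset -cp "$pid"
```

Note the pinning does not survive an OSD restart; to make it persistent you would put it in a systemd drop-in (CPUAffinity=) for the unit.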

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



