Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

On 1/13/20 6:37 PM, vitalif@xxxxxxxxxx wrote:
>> Hi,
>>
>> we're playing around with Ceph but are not quite happy with the I/O rates:
>> on average 5000 IOPS write
>> on average 13000 IOPS read
>>
>> We're expecting more. :( Any ideas, or is that all we can expect?
> 
> With server SSDs you can expect up to ~10000 write / ~25000 read IOPS
> for a single client.
> 
> https://yourcmc.ru/wiki/Ceph_performance
> 
>> Money is NOT a problem for this test bed; any ideas how to gain more
>> IOPS are greatly appreciated.
> 
> Grab some server NVMes and the best possible CPUs :)

And then:

- Disable all powersaving
- Pin the CPUs in C-State 1
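
For example (just a sketch, and only one of several ways to do it:
tuned's "latency-performance" profile or booting with
intel_idle.max_cstate=1 get you the same effect), holding
/dev/cpu_dma_latency open with a low PM QoS value keeps the cores out
of deep C-states for as long as the process runs:

#!/usr/bin/env python3
# Sketch: keep CPUs in shallow C-states via the Linux PM QoS interface.
# Run as root on each OSD node; deep C-states are allowed again as soon
# as this process exits and the file descriptor is closed.
import os
import signal
import struct

TARGET_LATENCY_US = 1   # 0-2 us effectively keeps cores in C0/C1

fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
try:
    # The kernel honours the request only while the fd stays open.
    os.write(fd, struct.pack("i", TARGET_LATENCY_US))
    print("cpu_dma_latency pinned to %d us, Ctrl-C to release"
          % TARGET_LATENCY_US)
    signal.pause()      # block until interrupted
finally:
    os.close(fd)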

Those settings might increase performance even further. But due to the
synchronous nature of Ceph, the performance and latency of a single
thread will still be limited.
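
If you want to see where that single-thread limit sits on your cluster,
timing synchronous writes from one client thread gives a quick baseline.
A rough sketch using the python-rados bindings (pool name and write size
below are placeholders; fio with iodepth=1 or "rados bench ... -t 1"
will show similar numbers):

#!/usr/bin/env python3
# Rough single-thread (queue depth 1) write latency probe.
import time
import rados

POOL = "testpool"          # placeholder: use a throwaway pool
WRITES = 1000
DATA = b"\0" * 4096        # 4 KiB per write

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
try:
    start = time.time()
    for _ in range(WRITES):
        # write_full() returns only after the OSDs ack the write, so
        # this measures end-to-end latency at queue depth 1.
        ioctx.write_full("latency-probe", DATA)
    elapsed = time.time() - start
    print("avg %.2f ms per write, ~%.0f iops single-threaded"
          % (elapsed / WRITES * 1000, WRITES / elapsed))
    ioctx.remove_object("latency-probe")
finally:
    ioctx.close()
    cluster.shutdown()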

Wido

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


