Re: Single threaded IOPS on SSD pool.

> Hi,
>
> On 5/6/19 at 16:53, vitalif@xxxxxxxxxx wrote:
>>> OK, average network latency from VM to OSDs is ~0.4 ms.
>>
>> That's rather bad; you could shave ~0.3 ms off the latency just by
>> upgrading the network.
>>
>>> Single-threaded performance is ~500-600 IOPS, i.e. an average latency of ~1.6 ms.
>>> Is that comparable to what others are seeing?
>>
>> Good "reference" numbers are 0.5ms for reads (~2000 iops) and 1ms for
>> writes (~1000 iops).
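
For anyone wanting to reproduce numbers like these, a minimal single-threaded
fio run would look roughly like the following; the filename and runtime are
just placeholders, and at iodepth=1 IOPS is simply 1000 / average latency in
ms, so ~1.6 ms per op works out to ~625 IOPS:

# 4k random writes, queue depth 1, direct I/O -- measures per-op latency
fio --name=singlethread --filename=/mnt/rbdvol/testfile --size=1G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --time_based --runtime=60
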
>>
>> I can confirm that the single most effective change is disabling CPU
>> power saving (governor=performance + cpupower -D 0). That alone usually
>> doubles single-threaded IOPS.
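
A sketch of those power-saving tweaks, assuming the cpupower tool is
installed ("cpupower -D 0" above presumably refers to the idle-set
subcommand):

# switch all cores to the performance governor
cpupower frequency-set -g performance
# disable all CPU idle (C-)states with a wakeup latency above 0 us
cpupower idle-set -D 0
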
>
> We have a small cluster with 4 OSD hosts, each with one Intel
> SSDSC2KB019T8 SSD (D3-S4510, 1.8 TB), connected over a 10G network
> (shared with the VMs; not a busy cluster). Volumes are replica 3:
>
> Network latency from one node to the other 3:
> 10 packets transmitted, 10 received, 0% packet loss, time 9166ms
> rtt min/avg/max/mdev = 0.042/0.064/0.088/0.013 ms
>
> 10 packets transmitted, 10 received, 0% packet loss, time 9190ms
> rtt min/avg/max/mdev = 0.047/0.072/0.110/0.017 ms
>
> 10 packets transmitted, 10 received, 0% packet loss, time 9219ms
> rtt min/avg/max/mdev = 0.061/0.078/0.099/0.011 ms
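
(For reference, output like the above comes from a plain ping between the
OSD hosts; osd2..osd4 below are placeholder hostnames:)

# 10 probes per peer, -q prints only the summary lines quoted above
for h in osd2 osd3 osd4; do ping -c 10 -q $h; done
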

What NIC / switching components are in play here? I simply cannot get
latencies this far down.

Jesper

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



