Not exactly. You can also tune the network and the software.

Network: go for lower-latency interfaces. If you have 10G, go to 25G
or 100G. 40G will not do, though; AFAIK those links are just 4x10G,
so their latency is the same as 10G's.

Software: it is closely tied to your network card queues and processor
cores. In short, tune affinity so that the packet receive queues and
the OSD processes run on the same corresponding cores. Disabling
processor power-saving features helps a lot. Also watch out for NUMA
interference. Rough command sketches for each of these are appended
below the quoted message.

But overall, all of these tricks will save you less than switching
from HDD to SSD.

On Mon, 2 Nov 2020 at 02:45, Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:

> Hi,
>
> AFAIK, the read latency primarily depends on HW latency;
> not much can be tuned in SW. Is that right?
>
> I ran a fio random read with iodepth 1 within a VM backed by
> Ceph with HDD OSDs, and here is what I got.
> =================
>   read: IOPS=282, BW=1130KiB/s (1157kB/s)(33.1MiB/30001msec)
>     slat (usec): min=4, max=181, avg=14.04, stdev=10.16
>     clat (usec): min=178, max=393831, avg=3521.86, stdev=5771.35
>      lat (usec): min=188, max=393858, avg=3536.38, stdev=5771.51
> =================
> I checked that the HDD average latency is 2.9 ms. Looks like
> the test result makes perfect sense, doesn't it?
>
> If I want to get shorter latency (more IOPS), I will have to
> go for a better disk, e.g. SSD. Right?
>
>
> Thanks!
> Tony
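
A minimal sketch of the affinity pinning. The interface name (eth0),
the IRQ number (45), the cores (2-3) and the OSD id (0) are all
examples; look up your own in /proc/interrupts and ps. irqbalance
must be stopped first, or it will silently rewrite the masks:

    # stop irqbalance so it does not undo the manual pinning
    systemctl stop irqbalance

    # list the IRQs of the NIC receive queues
    grep eth0 /proc/interrupts

    # pin one RX queue IRQ to cores 2-3 (IRQ 45 is an example)
    echo 2-3 > /proc/irq/45/smp_affinity_list

    # pin all threads of the matching OSD to the same cores
    taskset -apc 2-3 $(pgrep -f 'ceph-osd.*--id 0')

Repeat per queue/OSD pair, keeping each pair on its own cores.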
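
For the power-saving part, two commands go a long way, assuming an
Intel box with the cpupower and tuned utilities installed (both
assumptions; other distros have equivalents):

    # keep cores at full clock instead of letting them scale down
    cpupower frequency-set -g performance

    # this profile also keeps cores out of deep C-states
    tuned-adm profile latency-performance

The same C-state cap can be made permanent with intel_idle.max_cstate=1
on the kernel command line.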
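
The NUMA check, again with eth0 as a stand-in for your cluster-network
interface:

    # which NUMA node the NIC is attached to (-1 means no locality info)
    cat /sys/class/net/eth0/device/numa_node

    # which cores belong to which node
    lscpu | grep NUMA

    # start a process with CPU and memory bound to the NIC's node
    # (node 0 here; <command> is a placeholder)
    numactl --cpunodebind=0 --membind=0 <command>

If the OSDs sit on the other node, every packet pays an extra
inter-socket hop.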
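
And for reference, the quoted numbers look like a 4k direct random
read at queue depth 1 (282 IOPS x 4 KiB ~ 1130 KiB/s), i.e. a fio job
along these lines -- the filename is a placeholder:

    fio --name=randread --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --direct=1 --time_based --runtime=30 --size=1G \
        --filename=/path/to/testfile

The ~3.5 ms average clat over a 2.9 ms disk leaves roughly 0.6 ms of
Ceph/network overhead per read. That overhead is the only part the
tricks above can shave; the disk's 2.9 ms only goes away with an SSD.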