Strange read results using FIO inside RBD QEMU VM ...

Hi,

We have a pure SSD-based Ceph cluster (100+ OSDs with enterprise SSDs and IT-mode cards) running Hammer 0.94.9 over 10G. It's really stable and we are really happy with the performance we are getting. But after a customer ran some tests, we noticed something quite strange: his FIO write tests behaved as expected, but some of the read tests did not.

The VM he used was artificially limited via QEMU to 3200 read and 3200 write IOPS. On the write side everything works more or less as expected and the results get close to 3200 IOPS, but the read tests are the ones we don't really understand.
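For reference, this is the kind of per-device throttling you would set with libvirt's <iotune> element; the sketch below just mirrors the 3200/3200 figures mentioned above, and the pool/image name, monitor host and target device are hypothetical:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <!-- RBD-backed disk; pool/image and monitor are placeholders -->
      <source protocol='rbd' name='rbd/vm-disk'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <iotune>
        <!-- cap the guest at 3200 read and 3200 write IOPS -->
        <read_iops_sec>3200</read_iops_sec>
        <write_iops_sec>3200</write_iops_sec>
      </iotune>
    </disk>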

We ran tests using three different I/O engines: sync, libaio and POSIX AIO. During the write tests all three perform quite similarly, which is something I did not really expect, but on the read side there is a huge difference:

Read Results (Random Read - Buffered: No - Direct: Yes - Block Size: 4KB):

LibAIO    - Average: 3196 IOPS
POSIX AIO - Average:  878 IOPS
Sync      - Average:  929 IOPS

Write Results (Random Write - Buffered: No - Direct: Yes - Block Size: 4KB):

LibAIO    - Average: 2741 IOPS
POSIX AIO - Average: 2673 IOPS
Sync      - Average: 2795 IOPS
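The runs were along the lines of the following fio invocation; only the ioengine (libaio, posixaio or sync) and the rw direction (randread or randwrite) changed between runs. The iodepth, runtime and target device below are assumptions on my part, not the customer's exact parameters:

    fio --name=randread --filename=/dev/vdb \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=32 --runtime=60 --time_based --group_reporting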

I would expect a difference between libaio and POSIX AIO, but I would expect it in both the read and the write results, not only during reads.

So, I'm quite puzzled by this one... Does anyone have an idea about what might be going on?

Thanks!

Best regards,
Xavier Trilla P.
Clouding.io<https://clouding.io/>

"A Cloud Server with SSDs, redundant
and available in less than 30 seconds"

Try it now at Clouding.io<https://clouding.io/>!


