Ceph libaio queue depth understanding

Hi,
we would like to write a testplan to benchmark our ceph cluster. We want to 
use fio for it.

According to an article from Sébastien Han [1], Ceph uses libaio with O_DIRECT for writing data to the journal. In a different blog article [2] I read that Ceph uses O_DSYNC for this as well. This basically means it is using a queue depth of 1 (issue one I/O request and wait for it to complete), right? Testing this with fio should then be possible with the parameters direct=1 and iodepth=1 together with ioengine=libaio.
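
For reference, a minimal fio job along those lines might look like the sketch below. The filename, block size and test size are just placeholders I picked for illustration, nothing Ceph-specific:

  # journal-style write test: one O_DIRECT write in flight at a time
  [journal-write]
  ioengine=libaio
  # bypass the page cache, as with O_DIRECT
  direct=1
  # only one request outstanding at any time
  iodepth=1
  # sequential writes, since the journal is written sequentially
  rw=write
  # placeholder block size
  bs=4k
  # placeholder test size
  size=1G
  # placeholder target, should point at a scratch file or device
  filename=/path/to/testfile

(fio also has a sync option that could mimic the O_DSYNC part, but as far as the queue depth goes, direct=1 together with iodepth=1 should already give the one-request-in-flight pattern described above.)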

After this the journal gets flushed to the OSD data disk. This time buffered I/O is used (in fio terms: direct=0). My question is: which iodepth is used for this step, i.e. which value should I use in fio?

In the Ceph source code I can see that io_setup() is called with room for 128 concurrent events. So should I use iodepth=128 in fio for this?
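
If that 128 were really the number to mirror, I guess the fio job would look something like the sketch below (again the filename, block size, test size and access pattern are only placeholders, and whether 128 is the right iodepth is exactly what I am unsure about):

  # buffered write test with a deep queue, mirroring io_setup(128, ...)
  [journal-flush]
  ioengine=libaio
  # buffered I/O, no O_DIRECT
  direct=0
  # guess taken from the io_setup() call in the Ceph source
  iodepth=128
  # placeholder access pattern
  rw=write
  # placeholder block size
  bs=4k
  # placeholder test size
  size=1G
  # placeholder target, should point at a scratch file or device
  filename=/path/to/testfile

One thing I am not sure about here: the fio documentation notes that libaio may only behave asynchronously with non-buffered I/O (direct=1), so with direct=0 the effective queue depth might end up lower than the configured iodepth anyway.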

Maybe my understanding of async I/O is wrong as well :-)

Thanks for any clarification on this topic.

Cheers
Nick
 
[1] https://www.sebastien-han.fr/blog/2013/10/03/quick-analysis-of-the-ceph-io-layer

[2] http://bryanapperson.com/blog/ceph-raw-disk-performance-testing/

-- 
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch


