fio parameters:

--------------fio--------
[global]
ioengine=libaio
direct=1
rw=randwrite
filename=/dev/vdb
time_based
runtime=300
stonewall

[iodepth32]
iodepth=32
bs=4k

At 2014-09-11 05:04:09, "yuelongguang" <fastsync at 163.com> wrote:

hi, Josh Durgin:

Please look at my test. Inside the VM I use fio to test rbd performance.
fio parameters: direct I/O, bs=4k, iodepth=32 (>> 4).

The information below does not match those parameters: avgrq-sz is not
approximately 8 (sectors, i.e. 4 KiB), and avgqu-sz is small and irregular,
well below 32. Why? Which part of ceph might gather/scatter the I/O
requests, and why is avgqu-sz so small? Let's work it out.

Thanks

----iostat-----iodepth=32--blocksize=4k------------------
Linux 2.6.32-358.el6.x86_64 (cephosd4-mdsa)  2014-09-11  _x86_64_  (2 CPU)

(successive sampling intervals for the same device)

Device: rrqm/s wrqm/s    r/s     w/s rsec/s   wsec/s avgrq-sz avgqu-sz await svctm %util
vdd       0.00   5.81   8.19   35.39 132.09   670.65    18.42     0.31  7.06  0.55  2.41
vdd       0.00 291.50   0.00 1151.00   0.00 13091.50    11.37     5.06  4.40  0.23 26.35
vdd       0.00 208.50   0.00 1020.00   0.00  8294.50     8.13     2.52  2.47  0.39 39.30
vdd       0.00  36.00   0.00 1076.00   0.00 17560.00    16.32     0.60  0.56  0.30 32.30
vdd       0.00 242.50   0.00 1143.00   0.00 22402.00    19.60     3.78  3.31  0.25 28.90
vdd       0.00  31.00   0.00  906.50   0.00  5351.50     5.90     0.37  0.40  0.28 25.70
vdd       0.00 294.50   0.00 1148.50   0.00 16620.50    14.47     4.49  3.91  0.21 24.60
vdd       0.00  26.50   0.00  810.50   0.00  4922.50     6.07     0.37  0.45  0.35 28.35
vdd       0.00  45.50   0.00 1022.00   0.00  6117.00     5.99     0.38  0.37  0.28 28.15
vdd       0.00 300.00   0.00 1155.00   0.00 16997.50    14.72     3.58  3.10  0.21 24.30
vdd       0.00  27.00   0.00  962.50   0.00  6846.50     7.11     0.44  0.46  0.35 33.60
vdd       0.00 270.00   0.00 1249.50   0.00 14400.00    11.52     4.61  3.69  0.25 31.25
vdd       0.00  15.00   3.00  660.00  24.00  4247.00     6.44     0.38  0.57  0.45 29.60
vdd       0.00  17.00  24.50  592.50 196.00  8039.00    13.35     0.58  0.94  0.83 51.05

At 2014-09-10 08:37:23, "Josh Durgin" <josh.durgin at inktank.com> wrote:
>On 09/09/2014 07:06 AM, yuelongguang wrote:
>> hi, josh.durgin:
>> I want to know how librbd issues I/O requests.
>> Use case: inside a VM, I use fio to test the rbd disk's I/O performance.
>> fio's parameters are bs=4k, direct I/O; qemu cache=none.
>> In this case, does librbd just send what it gets from the VM, i.e. no
>> gather/scatter? Is the ratio of I/O inside the VM : I/O at librbd : I/O
>> at the OSD filestore = 1:1:1?
>
>If the rbd image is not a clone, the io issued from the vm's block
>driver will match the io issued by librbd. With caching disabled
>as you have it, the io from the OSDs will be similar, with some
>small amount extra for OSD bookkeeping.
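As a sanity check on the iostat numbers in the thread above, here is a minimal Python sketch (not part of the original emails) that recomputes the two derived columns for one sample interval. avgrq-sz is the mean sectors transferred per request, and avgqu-sz follows Little's law (arrival rate times average wait). The specific row values are taken from the second interval in the trace; the variable names are illustrative, not iostat API.

```python
# Recompute iostat's derived columns for one interval from the trace above
# (row: wrqm/s=291.50, w/s=1151.00, wsec/s=13091.50, avgrq-sz=11.37,
#  avgqu-sz=5.06, await=4.40 ms). Field names mirror iostat -x output.
r_s, w_s = 0.00, 1151.00          # read/write requests completed per second
rsec_s, wsec_s = 0.00, 13091.50   # sectors (512 B each) read/written per second
await_ms = 4.40                   # avg time a request spends queued + serviced

# avgrq-sz: sectors per request. Pure 4 KiB requests would give 8 sectors;
# here block-layer merges (high wrqm/s) inflate the average.
avgrq_sz = (rsec_s + wsec_s) / (r_s + w_s)

# avgqu-sz via Little's law: average queue depth = throughput x latency.
avgqu_sz = (r_s + w_s) * (await_ms / 1000.0)

print(round(avgrq_sz, 2))  # 11.37, matching the iostat column
print(round(avgqu_sz, 2))  # 5.06, far below the fio iodepth of 32
```

This illustrates why the columns need not match the fio settings: avgrq-sz deviates from 8 whenever adjacent 4 KiB writes are merged, and avgqu-sz measures the queue depth actually sustained at the block device, which stays low when requests complete faster than fio refills the queue.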