ceph 0.86: rbd_cache=true, iops a lot slower on randread 4K

Hi,

I'm not sure whether it's related to this bug:
http://tracker.ceph.com/issues/9513

But with this fio rbd benchmark

[global]
ioengine=rbd
clientname=admin
pool=test
rbdname=test
invalidate=0  
rw=randread
bs=4k
direct=1
numjobs=8
group_reporting=1
size=10G

[rbd_iodepth32]
iodepth=32
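
For reference, a minimal sketch of how the cache is toggled on the client side between runs (assuming librbd reads the default /etc/ceph/ceph.conf; the values are illustrative, not necessarily the exact settings behind the numbers below):

# /etc/ceph/ceph.conf on the fio client
[client]
    rbd cache = true        # set to false for the uncached run
    rbd cache writethrough until flush = true

Each run then just re-executes the same fio job file, so librbd picks the setting up when it reconnects.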



I get around:

40000 iops (CPU-bound on the client) with rbd_cache=false

vs

13000 iops (~40% CPU usage on the client) with rbd_cache=true


(Note that these should be direct I/Os, so they should bypass the cache.)
It seems to be a lock or something like that, as the CPU usage is much lower too.
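
To rule out a configuration mix-up, one way to check what the librbd client actually sees is the client admin socket (a sketch, assuming an admin socket is enabled for the client in ceph.conf, e.g. admin socket = /var/run/ceph/$cluster-$name.asok under [client]; the socket path below is just an example):

$ ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache

That at least confirms whether the cache path is actually in use for a given run.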

Is this the expected behavior?