If you are mapping the RBD with the kernel driver, then you're not using librbd, so these settings will have no effect, I believe. The kernel driver does its own caching, but I don't believe there are any settings to change its default behavior.
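A quick way to check which path you are on (a rough sketch; the device and module names are just the defaults):

# Images listed here are mapped through the kernel driver (krbd),
# so the [client] librbd cache settings in ceph.conf do not apply to them.
rbd showmapped

# If the rbd kernel module is loaded, the kernel client is in use.
lsmod | grep rbd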
On Mon, Feb 29, 2016 at 9:36 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
You may want to set "ioengine=rbd", I guess.
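For example (assuming your fio build includes rbd support; the pool and image names below are only placeholders), a job file along these lines would go through librbd directly, so the [client] rbd cache settings would actually take effect:

[librbd-4k-read]
# Drive I/O through librbd instead of a mapped kernel device.
ioengine=rbd
clientname=admin
pool=rbd
rbdname=test_image
rw=read
bs=4k
iodepth=64
# Keep a single job per image; multiple jobs on the same image can skew results.
numjobs=1
runtime=300
time_based
group_reporting

Comparing that against the krbd run should make it clearer whether the librbd cache is doing anything.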
Cheers,
----- Original Message -----
From: "min fang" <louisfang2013@xxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, March 1, 2016 1:28:54 PM
Subject: rbd cache did not help improve performance
Hi, I set the following parameters in ceph.conf:
[client]
rbd cache = true
rbd cache size = 25769803776
rbd readahead disable after bytes = 0
I map an rbd image to an rbd device and then run a 4k read fio test with the following command:
./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read -ioengine=aio -bs=4K -size=500G -numjobs=32 -runtime=300 -group_reporting -name=mytest2
Comparing the results with rbd cache=false against the results with the cache enabled, I did not see any performance improvement from the librbd cache.
Are my settings wrong, or is it true that the Ceph librbd cache gives no benefit for 4k sequential reads?
thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com