RBD caching on 4K reads???

I have a cluster and have created an RBD device, /dev/rbd1. It shows up as expected with 'rbd --image test info' and 'rbd showmapped'. I have been looking at cluster performance with the usual Linux block device tools, fio and vdbench. When I look at writes and large-block sequential reads I'm seeing what I'd expect, with performance limited by either my cluster interconnect bandwidth or the backend device throughput: 1 GbE frontend and cluster network, and 7200 rpm SATA OSDs with one SSD per OSD for the journal. Everything looks good EXCEPT 4K random reads. There is caching occurring somewhere in my system that I haven't been able to detect and suppress - yet.
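
For reference, the 4K random read case can be driven by a minimal fio job along these lines (a sketch, not my exact job file; direct=1 is there because buffered reads would obviously be served from the client page cache):

# fio job: 4K random reads against the mapped RBD device
[global]
ioengine=libaio
# direct=1 bypasses the client page cache
direct=1
bs=4k
rw=randread
time_based=1
runtime=60

[rbd1-4k-randread]
filename=/dev/rbd1
iodepth=16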

 

I've set 'rbd_cache=false' in the [client] section of ceph.conf on the client, monitor, and storage nodes. I've flushed the system caches on the client and storage nodes before each test run (vm.drop_caches=3), and I've allocated hugepages up to the maximum available so that free system memory is consumed and can't be used for the page cache. I've also disabled read-ahead on all of the HDD OSDs.
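
Concretely, those steps look roughly like this (a sketch of what I described above, not copied verbatim from my nodes; sdX stands in for each OSD data disk):

# ceph.conf on the client, monitor, and storage nodes
[client]
rbd_cache = false

# before each test run, on the client and storage nodes
sync
echo 3 > /proc/sys/vm/drop_caches    # i.e. vm.drop_caches=3

# disable read-ahead on each OSD data disk
blockdev --setra 0 /dev/sdX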

 

When I run a 4K random read workload on the client, the most I would expect is roughly 100 IOPS per OSD times the number of OSDs. I'm seeing an order of magnitude more than that, AND running iostat on the storage nodes shows no read activity on the OSD disks.
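
For illustration: with, say, 12 OSDs that would be on the order of 12 x 100 = ~1,200 read IOPS (the OSD count here is just an example), yet the client reports far more, and watching the storage nodes during the run confirms the disks are idle:

# on a storage node during the fio run
iostat -x 2
# r/s stays near zero on every OSD data disk while the client
# is reporting the high 4K random read numbers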

 

Any ideas on what I’ve overlooked? There appears to be some read-ahead caching that I’ve missed.
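
In case it's relevant, here is how I'd check read-ahead on the mapped device itself on the client (a sketch, assuming the kernel rbd client):

# on the client: check and clear read-ahead for the mapped device
cat /sys/block/rbd1/queue/read_ahead_kb
echo 0 > /sys/block/rbd1/queue/read_ahead_kb
# equivalently: blockdev --setra 0 /dev/rbd1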

 

Thanks,

Bruce

