> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Ilja Slepnev
> Sent: 05 December 2015 19:45
> To: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: CephFS and single threaded RBD read performance
>
> Hi Blairo,
>
> fio shows 70 MB/s seq read with 4M blocks, libaio, 1 thread, direct.
> fio seq write 200 MB/s
>
> > The fio numbers are from fio running on a CephFS mount I take it?
>
> Exactly.
>
> > > # rados bench -t 1 -p test 60 write --no-cleanup
> >
> > I don't see an rbd test anywhere here...?
> >
> > I suggest comparing fio on CephFS with fio on rbd (as in using fio's
> > rbd ioengine), then at least the application side of your tests is
> > constant.
>
> I tested four access types:
> 1. rbd kernel module, xfs, fio-libaio
> 2. rados bench seq
> 3. fio-librbd
> 4. ceph kernel module, fio-libaio
>
> Tests done, no big difference: all methods give 50 to 80 MB/s of
> single-threaded bandwidth. In my setup the rados bench results are very
> similar to the kernel module rbd results, while fio-librbd shows 1.5 times
> lower bandwidth (maybe because it runs in userspace?).
> What can be tuned to improve sequential read?
>
> readahead settings are:
> "client_readahead_min": "131072",
> "client_readahead_max_bytes": "2097152",
> "client_readahead_max_periods": "4",
> "rbd_readahead_trigger_requests": "10",
> "rbd_readahead_max_bytes": "524288",
> "rbd_readahead_disable_after_bytes": "52428800",

Crank those readahead values right up and disable the "disable_after_bytes"
threshold; you need to turn that long sequential chain of requests into lots
of parallel requests. Make sure the rbd cache is big enough to contain the
data, though.

> --
> WBR,
> Ilja.
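
To keep the application side constant, as Blair suggests, a single fio job
file covering both access paths is enough. The sketch below is illustrative
only: the mount point /mnt/cephfs, pool "test", cephx user "admin" and image
name "fio-test" are placeholders for whatever the real setup uses, and the
image has to exist (and be pre-filled with data) before the read run, or the
reads just hit unallocated extents.

    ; seq-read.fio -- single-threaded 4M sequential reads, CephFS vs librbd
    [global]
    rw=read
    bs=4M
    iodepth=1
    numjobs=1
    runtime=60
    time_based

    [cephfs-seq-read]
    ; assumes the CephFS kernel or fuse mount lives at /mnt/cephfs
    ioengine=libaio
    direct=1
    directory=/mnt/cephfs
    size=8G

    [rbd-seq-read]
    ; assumes an existing, pre-written image "fio-test" in pool "test"
    ioengine=rbd
    clientname=admin
    pool=test
    rbdname=fio-test

Run each half on its own with "fio --section=cephfs-seq-read seq-read.fio"
and "fio --section=rbd-seq-read seq-read.fio"; that way the block size,
queue depth and thread count are identical and only the client stack changes.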
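
On the tuning side, the knobs live in different layers depending on the
access method, so a rough sketch of where I'd start is below. The byte
values are only examples, not recommendations, and note that the
client_readahead_* / rbd_readahead_* options affect the userspace clients
(ceph-fuse/libcephfs and librbd); the kernel clients are tuned through
read_ahead_kb and the rasize mount option instead.

    [client]
    ; librbd readahead: 0 here means readahead is never switched off
    rbd readahead disable after bytes = 0
    ; start readahead after fewer sequential requests, with a bigger window
    rbd readahead trigger requests = 5
    rbd readahead max bytes = 4194304
    ; readahead data lands in the rbd cache, so it must be big enough to hold it
    rbd cache = true
    rbd cache size = 134217728
    ; userspace CephFS client (ceph-fuse / libcephfs) readahead window
    client readahead max bytes = 8388608
    client readahead max periods = 8

For the kernel clients used in tests 1 and 4, something along these lines
(device name and sizes are again placeholders):

    # kernel rbd ignores the librbd options above; bump the block-layer readahead
    echo 16384 > /sys/block/rbd0/queue/read_ahead_kb

    # kernel CephFS client: readahead window is set at mount time, in bytes
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,rasize=67108864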