Re: CephFS and single threaded RBD read performance

Hi Blairo,

> > fio shows 70 MB/s seq read with 4M blocks, libaio, 1 thread, direct.
> > fio seq write 200 MB/s

> The fio numbers are from fio running on a CephFS mount, I take it?

Exactly.
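
For reference, the CephFS run used fio parameters like these (the
mount point, file name, and size below are placeholders, not my
exact job):

# fio --name=seqread --ioengine=libaio --rw=read --bs=4M \
      --iodepth=1 --numjobs=1 --direct=1 --size=4G \
      --filename=/mnt/cephfs/fio-test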
 
> > # rados bench -t 1 -p test 60 write --no-cleanup

> I don't see an rbd test anywhere here...?

> I suggest comparing fio on CephFS with fio on rbd (as in using fio's
> rbd ioengine); that way at least the application side of your tests
> stays constant.
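
That is essentially what I did for the librbd case; the invocation
was roughly as follows (pool and image names are placeholders):

# fio --name=rbdread --ioengine=rbd --clientname=admin --pool=test \
      --rbdname=fio-test --rw=read --bs=4M --iodepth=1 --numjobs=1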

I tested four access types:
 1. rbd kernel module, xfs, fio-libaio
 2. rados bench seq (read pass shown below the list)
 3. fio-librbd
 4. ceph kernel module, fio-libaio
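
For test 2, the read pass reuses the objects left behind by the
write bench above:

# rados bench -t 1 -p test 60 seq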

I've done the tests; there is no big difference: all methods give
50 to 80 MB/s of single-threaded bandwidth. In my setup the rados
bench results are very similar to the kernel module rbd results,
while fio-librbd shows about 1.5 times lower bandwidth (maybe
because it runs in userspace?).

What can be tuned to improve sequential read performance?

My current readahead settings are:
    "client_readahead_min": "131072",
    "client_readahead_max_bytes": "2097152",
    "client_readahead_max_periods": "4",
    "rbd_readahead_trigger_requests": "10",
    "rbd_readahead_max_bytes": "524288",
    "rbd_readahead_disable_after_bytes": "52428800",

--
WBR,
Ilja.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
