Re: ceph random read performance is better than sequential read?

Thanks. What I am measuring is IOPS, which in this fio test also reflects bandwidth. I did not set the readahead parameter; I will try it. One more question: for a purely 4k random read or write pattern, compared with a single larger RBD image, will performance (IOPS) improve if I distribute the I/Os across a virtual volume composed of multiple RBD images in a striped layout?
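
For concreteness, here is a minimal fio job sketch of the comparison I have in mind (pool and image names are made up), with one job per RBD image so the same 4k random workload is spread across two images:

    # fio job file: 4k random reads spread across two RBD images.
    # Aggregate IOPS would be compared against a single job running
    # against one larger image with the same total iodepth.
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rw=randread
    bs=4k
    iodepth=32
    time_based=1
    runtime=60

    [img1]
    rbdname=test-img-1

    [img2]
    rbdname=test-img-2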

Thanks.


2016-02-02 22:41 GMT+08:00 Mark Nelson <mnelson@xxxxxxxxxx>:
If testing with fio and librbd, you may also find that increasing the thresholds for RBD readahead will help significantly.  Specifically, set "rbd readahead disable after bytes" to 0 so rbd readahead stays enabled.  In most cases with buffered reads on a real client volume, rbd readahead isn't necessary, but with fio and the librbd engine this can make a big difference, especially with newstore.
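
For reference, this is roughly what that looks like in ceph.conf on the client side. The first option is the one discussed above; the other two are the related knobs, with illustrative values:

    [client]
    # keep rbd readahead enabled no matter how many bytes have been
    # read (0 = never disable)
    rbd readahead disable after bytes = 0
    # readahead window size; 4 MiB here, purely illustrative
    rbd readahead max bytes = 4194304
    # number of sequential requests before readahead kicks in
    # (default is 10)
    rbd readahead trigger requests = 10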

Mark

On 02/02/2016 07:29 AM, Wade Holler wrote:
Could you share the fio command and your read_ahead_kb setting for the
OSD devices?  "Performance is better" is a little too general.  I
understand that we usually mean higher IOPS or higher aggregate
throughput when we say performance is better.  However, application
random read performance "generally" implies an interest in lower
latency, which of course is much more involved from a testing
perspective.
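
(For reference, the kernel readahead on an OSD data device can be read
and changed through sysfs; the device name below is just an example:)

    # show the current readahead, in KiB, for the device backing an OSD
    cat /sys/block/sdb/queue/read_ahead_kb
    # raise it to 4 MiB for a quick experiment
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb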

Cheers
Wade


On Tue, Feb 2, 2016 at 7:28 AM min fang <louisfang2013@xxxxxxxxx> wrote:

    Hi, I ran a fio test on my Ceph cluster and found that random read
    performance is better than sequential read. Is that consistent with
    what you have seen?

    Thanks.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
