Re: ceph random read performance is better than sequential read?


 



Without readahead set, sequential I/O performance will be lower than random. This is because the Ceph I/O path is serialized per PG, and in the sequential case there is a much greater chance that consecutive I/Os hit the same PG, so they queue behind one another.
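A small sketch of why this happens, assuming the default 4 MB RBD object size (the PG is derived from the object name, so I/Os to the same object always land on the same PG):

```python
import random

# Assumes the default 4 MB rbd object size; numbers are illustrative.
OBJECT_SIZE = 4 * 1024 * 1024
IO_SIZE = 4 * 1024

def object_index(offset):
    """Map a byte offset in the image to its backing RADOS object."""
    return offset // OBJECT_SIZE

# 256 consecutive sequential 4K reads (1 MB total)...
seq_objects = {object_index(i * IO_SIZE) for i in range(256)}
# ...all fall inside a single 4 MB object, i.e. one PG, serialized:
print(len(seq_objects))  # -> 1

# The same 256 reads at random offsets across a 1 GB image
# scatter over many objects, and therefore many PGs, in parallel:
random.seed(0)
rand_objects = {object_index(random.randrange(0, 1 << 30)) for _ in range(256)}
print(len(rand_objects))
```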

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of min fang
Sent: Tuesday, February 02, 2016 11:34 PM
To: Mark Nelson
Cc: ceph-users
Subject: Re: [ceph-users] ceph random read performance is better than sequential read?

 

Thanks. What I am measuring is IOPS, which also reflects bandwidth in the fio test. I did not set a readahead parameter; I will try that. One more question: for a purely random 4K read or write pattern, compared with a single larger rbd image, would IOPS improve if I distributed the I/Os across a virtual volume striped over multiple rbd images?
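(A related option worth knowing about: rather than striping across several images by hand, RBD supports "fancy striping" on a single image via --stripe-unit and --stripe-count, which spreads I/O across more objects. A command sketch against a hypothetical pool/image, requiring a live cluster:)

```
# pool and image names here are placeholders
rbd create --size 102400 --stripe-unit 65536 --stripe-count 16 rbd/stripetest
rbd info rbd/stripetest    # shows the stripe unit and count
```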

Thanks.

 

 

2016-02-02 22:41 GMT+08:00 Mark Nelson <mnelson@xxxxxxxxxx>:

If testing with fio and librbd, you may also find that increasing the thresholds for RBD readahead will help significantly.  Specifically, set "rbd readahead disable after bytes" to 0 so rbd readahead stays enabled.  In most cases with buffered reads on a real client volume, rbd readahead isn't necessary, but with fio and the librbd engine this can make a big difference, especially with newstore.
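A hedged ceph.conf fragment for the settings described above (the max-bytes value is just an illustrative tuning starting point, not a recommendation):

```
[client]
# keep librbd readahead enabled no matter how many bytes have been read
rbd readahead disable after bytes = 0
# optionally widen the readahead window (value is an assumption to tune)
rbd readahead max bytes = 4194304
```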

Mark

On 02/02/2016 07:29 AM, Wade Holler wrote:

Could you share the fio command and your read_ahead_kb setting for the
OSD devices ?  "performance is better" is a little too general.  I
understand that we usually mean higher IOPS or higher aggregate
throughput when we say performance is better.  However, application
random read performance "generally" implies an interest in lower latency
- which of course is much more involved from a testing perspective.
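For comparing the two cases, a minimal fio job file using the librbd engine might look like this (pool, image, and client names are placeholders; adjust to your cluster):

```
; sequential vs random 4K reads against one rbd image
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
bs=4k
iodepth=32
runtime=60
direct=1

[seqread]
rw=read

[randread]
stonewall
rw=randread
```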

Cheers
Wade


On Tue, Feb 2, 2016 at 7:28 AM min fang <louisfang2013@xxxxxxxxx
<mailto:louisfang2013@xxxxxxxxx>> wrote:

    Hi, I ran fio testing on my Ceph cluster and found that random
    read performance is better than sequential read. Does that match
    your experience?

    Thanks.
    _______________________________________________
    ceph-users mailing list
    ceph-users@xxxxxxxxxxxxxx <mailto:ceph-users@xxxxxxxxxxxxxx>
    http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 

