Re: rbd cache did not help improve performance

On 03/01/2016 10:03 PM, min fang wrote:
thanks, with your help I set the read ahead parameter. What are the
cache parameters for the kernel rbd module?
Such as:
1) What is the cache size?
2) Does it support write back?
3) Will read ahead be disabled once a maximum number of bytes has been
read into the cache? (similar to the concept of
"rbd_readahead_disable_after_bytes")

The kernel rbd module does not implement any caching itself. If you're doing I/O to a file on a filesystem on top of a kernel rbd device,
it will go through the usual kernel page cache (unless you use O_DIRECT
of course).
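
A quick way to see this, sketched with example paths only (assuming the
rbd device carries a filesystem mounted at /mnt/rbd):

  # buffered read: goes through the kernel page cache
  dd if=/mnt/rbd/testfile of=/dev/null bs=4k
  # direct read: bypasses the page cache
  dd if=/mnt/rbd/testfile of=/dev/null bs=4k iflag=direct
  # drop the page cache between runs for a cold-cache comparison
  echo 3 > /proc/sys/vm/drop_caches

The buffered run will be served from RAM on a repeat read, while the
O_DIRECT run always goes to the rbd device.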

Josh


2016-03-01 21:31 GMT+08:00 Adrien Gillard <gillard.adrien@xxxxxxxxx>:

    As Tom stated, RBD cache only works if your client is using librbd
    (KVM clients, for instance).
    With the kernel RBD client, one of the parameters you can tune to
    optimize sequential reads is increasing
    /sys/class/block/rbd4/queue/read_ahead_kb
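
    For example (the 4096 value below is only a placeholder; pick a
    window that matches your sequential workload):

      # current read-ahead window, in KB
      cat /sys/class/block/rbd4/queue/read_ahead_kb
      # raise it to 4 MB for this mapped device
      echo 4096 > /sys/class/block/rbd4/queue/read_ahead_kb

    Note the setting is per mapped device.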

    Adrien



    On Tue, Mar 1, 2016 at 12:48 PM, min fang <louisfang2013@xxxxxxxxx> wrote:

        I can use the following command to change a parameter, for example
        as below, but I am not sure whether it will actually take effect.

          ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok
        config set rbd_readahead_disable_after_bytes 0
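
        One way to confirm the change (assuming the same admin socket path
        as above) is to dump the running configuration and look for the
        option:

          ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok config show | grep rbd_readahead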

        2016-03-01 15:07 GMT+08:00 Tom Christensen <pavera@xxxxxxxxx>:

            If you are mapping the RBD with the kernel driver then
            you're not using librbd so these settings will have no
            effect I believe.  The kernel driver does its own caching
            but I don't believe there are any settings to change its
            default behavior.


            On Mon, Feb 29, 2016 at 9:36 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:

                You may want to set "ioengine=rbd", I guess.
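
                As a rough sketch (the pool and image names below are just
                placeholders), a 4k read test that goes through librbd, so
                the rbd cache settings actually apply, might look like:

                  fio --ioengine=rbd --clientname=admin --pool=rbd \
                      --rbdname=testimg --rw=read --bs=4k --iodepth=64 \
                      --numjobs=1 --runtime=300 --time_based \
                      --group_reporting --name=rbd_seq_read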

                Cheers,

                ----- Original Message -----
                From: "min fang" <louisfang2013@xxxxxxxxx>
                To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
                Sent: Tuesday, March 1, 2016 1:28:54 PM
                Subject: rbd cache did not help improve performance

                Hi, I set the following parameters in ceph.conf

                [client]
                rbd cache=true
                rbd cache size= 25769803776
                rbd readahead disable after bytes=0


                I map an rbd image to an rbd device and then run fio with a
                4k read workload using the following command:
                ./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread
                -rw=read -ioengine=aio -bs=4K -size=500G -numjobs=32
                -runtime=300 -group_reporting -name=mytest2

                Comparing the results between rbd cache=false and the
                cache-enabled configuration, I did not see any performance
                improvement from the librbd cache.

                Are my settings wrong, or is it true that the ceph librbd
                cache gives no benefit for 4k sequential reads?

                thanks.









_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


