Re: I/O Speed Comparisons

Thanks for all of this feedback, guys! It gives us some good data to try to replicate on our end. Hopefully I'll have some time next week to take a look.

Thanks!
Mark

On 03/09/2013 08:14 AM, Erdem Agaoglu wrote:
Mark,

If it's any help, we've done a small, totally unreliable benchmark on our
end. For a KVM instance, we measured:
260 MB/s write, 200 MB/s read on local SAS disks, attached as LVM LVs;
250 MB/s write, 90 MB/s read on RBD, 32 OSDs, all SATA.

All tests were sequential, over a 10G network. This is more than enough for
now, but we'd like to improve RBD read performance.
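For anyone wanting to reproduce a sequential throughput test like this, a hypothetical fio job file might look as follows (the device path, block size, and runtime are assumptions, not the actual parameters used above):

```ini
; Hypothetical fio job approximating a sequential read test against an
; RBD-backed block device inside the guest. Adjust filename/bs/runtime
; for your own setup; /dev/vdb is only a placeholder.
[seq-read]
filename=/dev/vdb
rw=read
bs=4M
direct=1
ioengine=libaio
iodepth=16
runtime=60
time_based=1
```

Changing `rw=read` to `rw=write` gives the matching sequential write figure.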

Cheers,


On Sat, Mar 9, 2013 at 7:27 AM, Andrew Thrift <andrew@xxxxxxxxxxxxxxxxx> wrote:

    Mark,


    I would just like to add that we too are seeing the same behaviour with
    QEMU/KVM/RBD.  Maybe it is a common symptom of high I/O with this setup.



    Regards,

    Andrew


    On 3/8/2013 12:46 AM, Mark Nelson wrote:

        On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote:



            On 03/06/2013 02:31 PM, Mark Nelson wrote:

                If you are doing sequential reads, you may benefit from
                increasing the read_ahead_kb value for each device in
                /sys/block/<device>/queue on the OSD hosts.
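The read_ahead_kb suggestion above can be applied to every device on a host in one loop; a minimal sketch, assuming a POSIX shell and root privileges (the `set_readahead` helper and the 512 KB value are illustrative, not from the thread):

```shell
set_readahead() {
    # $1 = sysfs block-device root (normally /sys/block)
    # $2 = new read_ahead_kb value
    for q in "$1"/*/queue/read_ahead_kb; do
        [ -w "$q" ] || continue   # skip unwritable entries (or an empty glob)
        echo "$2" > "$q"
    done
}

# On a real OSD host this would be run as root, e.g.:
# set_readahead /sys/block 512   # 512 KB is an assumed starting point, tune per workload
```

The value is not persistent across reboots; a udev rule or rc script would be needed to make it stick.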


            Thanks, that didn't really help. The VM seems to have to
            handle too much I/O; even the mouse cursor is jerky when
            connecting via VNC. I guess this is the wrong list, but it
            somehow has to do with librbd in connection with KVM, since
            the same machine on LVM works fine.


        Thanks for the heads up, Wolfgang.  I'm going to be looking into
        QEMU/KVM RBD performance in the coming weeks, so I'll try to
        watch out for this behaviour.


            Wolfgang
            _________________________________________________
            ceph-users mailing list
            ceph-users@xxxxxxxxxxxxxx
            http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com







--
erdem agaoglu





--
Mark Nelson
Performance Engineer
Inktank

