Re: RBD vs RADOS benchmark performance


 



I believe this is fixed in the most recent versions of libvirt; sheepdog and rbd were erroneously marked as unsafe.
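(For anyone still on an older libvirt that flags rbd-cached disks as unsafe, the usual workaround is to override the check explicitly; this assumes a libvirt new enough to have the --unsafe flag, and the guest and destination host names below are placeholders:)

    # Older libvirt refuses live migration for disks using writeback caching;
    # --unsafe skips that check. qemu flushes the rbd cache during migration,
    # which is why the blanket check was considered erroneous for rbd.
    virsh migrate --live --unsafe myguest qemu+ssh://desthost/system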


On May 11, 2013, at 8:36 AM, Mike Kelly <pioto@xxxxxxxxx> wrote:

(Sorry for sending this twice... Forgot to reply to the list)

Is rbd caching safe to enable when you may need to do a live migration of the guest later on? It was my understanding that it wasn't, and that libvirt prevented you from doing the migration if it knew about the caching setting.

If it isn't, is there anything else that could help performance? Like, some tuning of block size parameters for the rbd image or the qemu settings?
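(For reference on the image-side knob: rbd stripes an image over objects of 2^order bytes, and the order can only be set at image creation time. A minimal sketch, with a placeholder pool/image name:)

    # default order is 22 (4 MB objects); e.g. 23 = 8 MB objects, 20 = 1 MB
    rbd create --size 10240 --order 22 rbd/testimage
    rbd info rbd/testimage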

On May 10, 2013 8:57 PM, "Mark Nelson" <mark.nelson@xxxxxxxxxxx> wrote:
On 05/10/2013 07:21 PM, Yun Mao wrote:
Hi Mark,

Given the same hardware and an optimal configuration (I have no idea what that
means exactly, but feel free to specify), which is supposed to perform
better: kernel rbd or qemu/kvm? Thanks,

Yun

Hi Yun,

I'm in the process of actually running some tests right now.

In previous testing, it looked like kernel rbd and qemu/kvm performed about the same with cache off.  With cache on (in cuttlefish), small sequential write performance improved pretty dramatically vs without cache.  Large write performance seemed to take more concurrency to reach peak performance, but ultimately aggregate throughput was about the same.
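(A rough way to reproduce that kind of comparison from inside a guest, assuming a scratch virtio data disk at /dev/vdb, is something like this:)

    # small sequential writes, where rbd cache made the biggest difference
    dd if=/dev/zero of=/dev/vdb bs=4k count=65536 oflag=direct

    # large sequential writes; a single dd is one stream, so run several in
    # parallel to see the effect of concurrency on aggregate throughput
    dd if=/dev/zero of=/dev/vdb bs=4M count=256 oflag=direct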

Hopefully I'll have some new results published in the near future.

Mark



On Fri, May 10, 2013 at 6:56 PM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:

    On 05/10/2013 12:16 PM, Greg wrote:

        Hello folks,

        I'm in the process of testing Ceph and RBD. I have set up a small
        cluster of hosts, each running a MON and an OSD, with both journal
        and data on the same SSD (OK, this is stupid, but it is simple to
        verify that the disks are not the bottleneck for a single client).
        All nodes are connected on a 1Gb network (no dedicated network for
        the OSDs, shame on me :).

        Summary: RBD performance is poor compared to the rados benchmark.

        A 5-second sequential read benchmark shows something like this:

              sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
                0       0         0         0         0         0         -         0
                1      16        39        23   91.9586        92  0.966117  0.431249
                2      16        64        48   95.9602       100  0.513435   0.53849
                3      16        90        74   98.6317       104   0.25631   0.55494
                4      11        95        84   83.9735        40   1.80038   0.58712
              Total time run:        4.165747
            Total reads made:     95
            Read size:            4194304
            Bandwidth (MB/sec):    91.220

            Average Latency:       0.678901
            Max latency:           1.80038
            Min latency:           0.104719


        91 MB/s read performance, quite good!
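        (For reference, output like the above typically comes from something
        along these lines, with "rbd" standing in for whichever pool was
        benchmarked:)

            rados bench -p rbd 5 write --no-cleanup   # write some objects first
            rados bench -p rbd 5 seq                  # then the sequential read pass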

        Now the RBD performance:

            root@client:~# dd if=/dev/rbd1 of=/dev/null bs=4M count=100
            100+0 records in
            100+0 records out
            419430400 bytes (419 MB) copied, 13.0568 s, 32.1 MB/s


        There is a 3x performance gap (similarly for writes: ~60 MB/s in the
        benchmark, ~20 MB/s with dd on the block device).
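        (For completeness, the write-side comparison would be the equivalent
        write tests, roughly:)

            rados bench -p rbd 5 write
            dd if=/dev/zero of=/dev/rbd1 bs=4M count=100 oflag=direct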

        The network is OK, and CPU usage is also OK on all OSDs.
        Ceph is Bobtail 0.56.4; Linux is 3.8.1 on ARM (vanilla release plus
        some patches for the SoC being used).

        Can you show me a starting point for digging into this?


    Hi Greg, first things first: are you doing kernel RBD or qemu/kvm?
    If you are doing qemu/kvm, make sure you are using virtio disks;
    this can have a pretty big performance impact.  Next, are you
    using RBD cache?  With 0.56.4 there are some performance issues with
    large sequential writes if cache is on, but it does provide a benefit
    for small sequential writes.  In general, RBD cache behaviour has
    improved with Cuttlefish.
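    (As a sketch of what that looks like on the qemu side; the pool/image
    name is a placeholder and this assumes a qemu built with rbd support:)

        # virtio disk backed by rbd, with the rbd cache enabled for this drive
        qemu-system-x86_64 ... \
            -drive format=rbd,file=rbd:rbd/testimage:rbd_cache=true,if=virtio,cache=writeback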

    Beyond that, are the pools being targeted by RBD and rados bench
    set up the same way?  Same number of PGs?  Same replication?
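    (Quick ways to check both, with the pool name as a placeholder:)

        ceph osd dump | grep pool          # shows rep size and pg_num per pool
        ceph osd pool get rbd pg_num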



        Thanks!





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
