Re: Rbd image performance

Thanks for the info, everyone.

On Dec 16, 2013 1:23 AM, "Kyle Bader" <kyle.bader@xxxxxxxxx> wrote:

>> Has anyone tried scaling a VM's I/O by adding additional disks and
>> striping them in the guest OS? I am curious what effect this would
>> have on I/O performance.

> Why would it? You can also change the stripe size of the RBD image. Depending on the workload, you might change it from 4MB to something like 1MB or 32MB. That would give you more or fewer RADOS objects, which in turn gives you a different I/O pattern.
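
For reference, here is a minimal sketch of setting the object size at image-creation time through the python-rbd bindings. The pool name 'rbd', image name 'test-image', and 10GB size are just placeholders; the order value maps to an object size of 2**order bytes (20 -> 1MB, 22 -> 4MB default, 25 -> 32MB):

    import rados
    import rbd

    # Connect to the cluster (conffile path and pool name are placeholders).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Object size is 2**order bytes: order=20 gives 1MB objects instead
    # of the default order=22 (4MB).
    rbd.RBD().create(ioctx, 'test-image', 10 * 1024**3, order=20)

    ioctx.close()
    cluster.shutdown()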

The question comes up because it's common for people operating on EC2 to stripe EBS volumes together for higher IOPS. I've tried striping kernel RBD volumes before, but I hit some sort of thread limitation where throughput stayed flat regardless of the volume count; I've since learned the thread limit is configurable. I don't think there is a thread limit that needs to be tweaked for RBD via KVM/QEMU, but I haven't tested this empirically. As Wido mentioned, if you operate your own cluster, configuring the stripe size may achieve similar results. Google used to use a 64MB chunk size with GFS but switched to 1MB after they started supporting more and more seek-heavy workloads.
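
To make the guest-side striping idea concrete, here's a rough sketch (not anything anyone in the thread actually ran) of the RAID 0 address mapping a guest-level stripe set applies across several volumes, expressed against the python-rbd bindings. The image names 'vol0'..'vol3', the 1MB stripe unit, and the striped_write helper are all hypothetical:

    import rados
    import rbd

    STRIPE = 1 << 20  # hypothetical 1MB guest-side stripe unit

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # 'vol0'..'vol3' are hypothetical pre-created images standing in
    # for the separate virtual disks attached to the guest.
    images = [rbd.Image(ioctx, 'vol%d' % i) for i in range(4)]

    def striped_write(data, offset):
        # Spread one logical write across the images the way a guest-side
        # RAID 0 (e.g. md) would: logical stripe s lands on image s % N,
        # at stripe row s // N within that image.
        while data:
            s = offset // STRIPE
            img = images[s % len(images)]
            img_off = (s // len(images)) * STRIPE + offset % STRIPE
            n = min(STRIPE - offset % STRIPE, len(data))
            img.write(data[:n], img_off)
            data = data[n:]
            offset += n

In practice you'd let md or LVM do this mapping in the guest rather than hand-rolling it; the point is only that each 1MB slice of the logical address space becomes an independent request to a different RBD image, which is what fans the I/O out.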


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

