Re: Qemu iotune values for RBD

On Thu, Mar 6, 2014 at 10:54 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> On 03/06/2014 08:38 PM, Dan van der Ster wrote:
>>
>> Hi all,
>>
>> We're about to go live with some qemu rate limiting to RBD, and I
>> wanted to crosscheck our values with this list, in case someone can
>> chime in with their experience or known best practices.
>>
>> The only reasonable, non-test-suite values I found on the web are:
>>
>> iops_wr 200
>> iops_rd 400
>> bps_wr 40000000   (40 MB/s)
>> bps_rd 80000000   (80 MB/s)
>>
>> and those seem (to me) to offer a "pretty good" service level, with more
>> IOPS than a typical disk but lower throughput (which is good, considering
>> our single gigabit NICs on the hypervisors).
>>
>> Our main goal for the rate limiting is to protect the cluster from
>> abusive users running fio, etc., while not overly restricting our varied
>> legitimate applications.
>>
>> Any opinions here?
>>
>
> I normally only limit the writes since those are the most expensive in a
> Ceph cluster due to replication. With reads you can't really kill the disks
> since at some point all the objects will probably be in the page cache of
> the OSDs.
>
> I don't see any good reason to limit reads, but if you do, I'd set it to
> something like 2.5k read IOPS and 200 MB/s or so, just to give the VM room
> to burst with reads when needed.
>
> You'll probably see that your cluster does a lot of writes and not so many
> reads.

Thanks, that sounds like good advice. Our per-VM throughput is limited
to ~1 GbE anyway, so the 80 MB/s read limit I proposed is already close
to the wire speed.
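
For the archive's sake, here is roughly how those limits translate into
a libvirt disk definition -- a minimal sketch only, with placeholder
pool, image and monitor names, and cephx auth omitted:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbd/vm-disk-1'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <iotune>
      <read_iops_sec>400</read_iops_sec>
      <write_iops_sec>200</write_iops_sec>
      <read_bytes_sec>80000000</read_bytes_sec>    <!-- 80 MB/s -->
      <write_bytes_sec>40000000</write_bytes_sec>  <!-- 40 MB/s -->
    </iotune>
  </disk>

As far as I know libvirt passes these straight through to qemu as the
iops_rd/iops_wr/bps_rd/bps_wr drive options, so this should be
equivalent to setting them on the command line. Following your
write-only advice, we'd simply drop (or raise) the two read_* elements.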

I do indeed see more writes than reads already -- I guess that makes
sense since the VMs are caching most reads themselves.

Do you think it's really safe to allow 2.5k read IOPS, though? A
psychotic user could run 10 VMs with 1TB fio jobs doing 4k random reads
-- and that would be disruptive.
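
(Back of the envelope, assuming a worst case where nothing is cached:
10 VMs x 2,500 read IOPS = 25,000 random 4k reads/sec. At roughly
100-150 random IOPS per 7200rpm spindle, that's on the order of what
150-250 OSD disks can serve, so on a modest cluster one user like that
could eat most of the read capacity.)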

Cheers, dan

>
> Wido
>
>
>> Cheers, Dan
>>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com