Re: Qemu+RBD recommended cache mode and AIO settings

Hi Wido,

On 22/03/2016 at 13:52, Wido den Hollander wrote:
Hi,

I've been looking on the internet regarding two settings which might influence
performance with librbd.

When attaching a disk with Qemu you can set a few things:
- cache
- aio

The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
'none'. Is that still the recommended value when combined with the librbd (write)cache?


We've been using "writeback" since the end of last year, as we wanted an explicit writeback policy that takes advantage of the librbd cache. We had no problems with "none" before that, though.

In libvirt you can set 'io' to:
- native
- threads

This translates to the 'aio' flags to Qemu. What is recommended here? I found:
- io=native for block device based VMs
- io=threads for file-based VMs

This seems to suggest that 'native' should be used for librbd. Is that still
correct?


I interpret 'native' as kernel-managed I/O. Since an RBD image accessed through librbd is not exposed as a block device on the hypervisor, I configured io='threads' for all our guest VMs.
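
For reference, a minimal libvirt disk definition combining the settings described above (writeback cache, threads I/O) might look like the sketch below. The pool/image name and monitor hostname are placeholders, and the auth block is omitted for brevity:

```xml
<disk type='network' device='disk'>
  <!-- cache and io policy are both set on the driver element -->
  <driver name='qemu' type='raw' cache='writeback' io='threads'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```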

librbd has a setting called 'rbd_op_threads' which seems to be related to AIO.
When does this kick in?
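
For comparison, the librbd cache and op-thread settings live on the client side in ceph.conf. A sketch with the defaults as I understand them (treat the values as assumptions, not recommendations):

```ini
[client]
# enable the librbd in-memory cache (pairs with cache=writeback in QEMU)
rbd cache = true
# stay in writethrough mode until the guest sends its first flush
rbd cache writethrough until flush = true
# number of librbd AIO worker threads (default 1)
rbd op threads = 1
```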

Yes, a lot of questions, for which the internet gives a lot of conflicting answers.

Some feedback would be nice!

Thanks,

Wido
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



