Re: Qemu+RBD recommended cache mode and AIO settings

On Tue, Mar 22, 2016 at 4:48 PM, Jason Dillaman <dillaman@xxxxxxxxxx> wrote:
>> Hi Jason,
>>
>> Le 22/03/2016 14:12, Jason Dillaman a écrit :
>> >
>> > We actually recommend that OpenStack be configured to use writeback cache
>> > [1].  If the guest OS is properly issuing flush requests, the cache will
>> > still provide crash-consistency.  By default, the cache will automatically
>> > start up in writethrough mode (when configured for writeback) until the
>> > first OS flush is received.
>> >
>>
>> Phew, that was the good reasoning then, thank you for your confirmation. :)
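>>
>> For reference, the behaviour Jason describes maps to the standard
>> client-side cache options in ceph.conf (values shown are the
>> defaults, so this is only needed if they were changed elsewhere):
>>
>> [client]
>>     # enable the librbd writeback cache
>>     rbd cache = true
>>     # stay in writethrough mode until the guest OS issues its
>>     # first flush, so pre-flush crashes remain consistent
>>     rbd cache writethrough until flush = true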
>>
>> >> I interpret native as kernel-managed I/O, and as the RBD through librbd
>> >> isn't exposed as a block device on the hypervisor, I configured threads
>> >> I/O for all our guest VMs.
>> >
>> > While I have nothing to empirically back up the following statement, I
>> > would actually recommend "native".  When set to "threads", QEMU will use a
>> > dispatch thread to invoke librbd IO operations instead of passing the IO
>> > request "directly" from the guest OS.  librbd itself already has its own
>> > IO dispatch thread which is enabled by default (via the
>> > rbd_non_blocking_aio config option), so adding an extra IO dispatching
>> > layer will just add additional latency / thread context switching.
>> >
>>
>> Well, if only that would be possible...
>> Here's the error message from libvirt when starting a VM with
>> native+writeback:
>>
>> """
>> native I/O needs either no disk cache or directsync cache mode, QEMU
>> will fallback to aio=threads
>> """
>>
>
> Learn something new every day: looking at QEMU's internals, that flag actually only makes a difference for local IO backends (files and devices).  Therefore, there is no need to set it for librbd volumes.
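
For anyone following along, a librbd disk in the libvirt domain XML
with the recommended writeback cache would look roughly like this
(pool name, image name, monitor host and auth details here are all
made up for illustration):

  <disk type='network' device='disk'>
    <!-- cache='writeback' enables the librbd cache as discussed above;
         io/aio is irrelevant for network backends, so it is omitted -->
    <driver name='qemu' type='raw' cache='writeback'/>
    <source protocol='rbd' name='vms/guest-disk-1'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <auth username='cinder'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>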

And libvirt's error message is misleading.  What it is actually
checking for is an O_DIRECT cache mode, i.e. cache=none or
cache=directsync.  cache=none != "no disk cache"...
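
To spell that out, the named cache modes are shorthand for three
underlying flags (this is how QEMU documents them; "writeback cache"
here means the host-side cache, which for rbd is the librbd cache):

  cache mode    cache.writeback  cache.direct (O_DIRECT)  cache.no-flush
  writeback     on               off                      off
  none          on               on                       off
  writethrough  off              off                      off
  directsync    off              on                       off
  unsafe        on               off                      on

So cache=none still has writeback semantics; it only bypasses the
host page cache, which is why it satisfies libvirt's aio=native check.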

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



