Re: Qemu+RBD recommended cache mode and AIO settings

> Hi Jason,
> 
> On 22/03/2016 14:12, Jason Dillaman wrote:
> >
> > We actually recommend that OpenStack be configured to use writeback cache
> > [1].  If the guest OS is properly issuing flush requests, the cache will
> > still provide crash-consistency.  By default, the cache will automatically
> > start up in writethrough mode (when configured for writeback) until the
> > first OS flush is received.
> >
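For context, a minimal sketch of the pieces involved. The ceph.conf
values below are the upstream defaults in recent releases, and the
nova.conf lines follow the integration guide in [1]; adjust to your
deployment:

"""
# ceph.conf on the hypervisor, [client] section
rbd cache = true
rbd cache writethrough until flush = true

# nova.conf, [libvirt] section
images_type = rbd
disk_cachemodes = "network=writeback"
"""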
> 
> Phew, that was the right reasoning then, thank you for the confirmation. :)
> 
> >> I interpret "native" as kernel-managed I/O, and since an RBD volume
> >> accessed through librbd isn't exposed as a block device on the
> >> hypervisor, I configured aio=threads for all our guest VMs.
> >
> > While I have nothing to empirically back up the following statement, I
> > would actually recommend "native".  When set to "threads", QEMU will use a
> > dispatch thread to invoke librbd IO operations instead of passing the IO
> > request "directly" from the guest OS.  librbd itself already has its own
> > IO dispatch thread which is enabled by default (via the
> > rbd_non_blocking_aio config option), so adding an extra IO dispatching
> > layer will just add additional latency / thread context switching.
> >
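Side note: rbd_non_blocking_aio lives in the [client] section of
ceph.conf. A sketch showing the default, so nothing needs changing
unless it was overridden somewhere:

"""
# ceph.conf, [client] section; this is the default
rbd non blocking aio = true
"""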
> 
> Well, if only that were possible...
> Here's the error message from libvirt when starting a VM with
> native+writeback:
> 
> """
> native I/O needs either no disk cache or directsync cache mode, QEMU
> will fallback to aio=threads
> """
> 

Learn something new every day: looking at QEMU's internals, that flag
actually only makes a difference for local IO backends (files and
devices), so there is no need to set it for librbd volumes.
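
In practice, then, a disk definition along the following lines should be
all that's needed. The pool, image, and monitor names below are
placeholders, and a cephx deployment would also carry the usual <auth>
element:

"""
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd-pool/vm-disk'>
    <host name='mon-host' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
"""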

> >>> librbd has a setting called 'rbd_op_threads' which seems to be related to
> >>> AIO.
> >>> When does this kick in?
> >
> > This is related to the librbd IO dispatch thread pool.  Keep it at the
> > default value of "1" as higher settings will prevent IO flushes from
> > operating correctly.
> >
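In other words: leave it alone. For completeness, a sketch of how it
would appear in ceph.conf at its default:

"""
# ceph.conf, [client] section; keep at the default
rbd op threads = 1
"""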
> >>>
> >>> Yes, a lot of questions, to which the internet gives a lot of answers.
> >>>
> >>> Some feedback would be nice!
> >>>
> >>> Thanks,
> >>>
> >>> Wido
> >
> > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova
> >
> >


-- 

Jason Dillaman 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



