Re: Qemu+RBD recommended cache mode and AIO settings

> > I've been looking on the internet regarding two settings which might
> > influence
> > performance with librbd.
> >
> > When attaching a disk with Qemu you can set a few things:
> > - cache
> > - aio
> >
> > The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
> > 'none'. Is that still the recommended value combined with librbd
> > (write)cache?
> >
> 
> We've been using "writeback" since the end of last year, wanting an
> explicit writeback policy that takes advantage of the librbd cache,
> but we hadn't had any problems with "none" before that.
> 
> 

We actually recommend that OpenStack be configured to use writeback cache [1].  If the guest OS properly issues flush requests, the cache will still provide crash consistency.  By default, when configured for writeback, the cache starts out in writethrough mode until the first flush from the guest OS is received.
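
For reference, the settings [1] describes look roughly like the sketch below.  The pool, user, and host names are placeholders, and section placement can vary by OpenStack release:

    # /etc/nova/nova.conf on the hypervisor
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms                      # placeholder pool name
    rbd_user = cinder                          # placeholder cephx user
    disk_cachemodes = "network=writeback"

    # /etc/ceph/ceph.conf on the hypervisor
    [client]
    rbd cache = true
    # start in writethrough until the first guest flush (the default)
    rbd cache writethrough until flush = true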

> > In libvirt you can set 'io' to:
> > - native
> > - threads
> >
> > This translates to the 'aio' flag passed to Qemu. What is recommended here? I
> > found:
> > - io=native for block device based VMs
> > - io=threads for file-based VMs
> >
> > This seems to suggest that 'native' should be used for librbd. Is that
> > still
> > correct?
> >
> 
> I interpret 'native' as kernel-managed I/O, and since an RBD image
> accessed through librbd isn't exposed as a block device on the
> hypervisor, I configured 'threads' I/O for all our guest VMs.

While I have nothing to empirically back up the following statement, I would actually recommend "native".  When set to "threads", QEMU will use a dispatch thread to invoke librbd IO operations instead of passing the IO request "directly" from the guest OS.  librbd itself already has its own IO dispatch thread, which is enabled by default (via the rbd_non_blocking_aio config option), so adding an extra IO dispatching layer just adds latency and thread context switching.
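
In libvirt domain XML, that combination would look something like the sketch below.  The cephx user, secret UUID, pool/image name, and monitor address are all placeholders:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' io='native'/>
      <auth username='libvirt'>
        <!-- placeholder: UUID of the libvirt secret holding the cephx key -->
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <!-- placeholder pool/image and monitor host -->
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>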

> > librbd has a setting called 'rbd_op_threads' which seems to be related to
> > AIO.
> > When does this kick in?

This is related to the librbd IO dispatch thread pool.  Keep it at the default value of "1", as higher settings will prevent IO flushes from operating correctly.
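
In ceph.conf terms that means leaving the defaults alone; shown here purely for illustration:

    [client]
    # librbd IO dispatch thread pool; keep at the default of 1
    rbd op threads = 1
    # dispatch IO via the librbd thread instead of the caller (the default)
    rbd non blocking aio = true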

> >
> > Yes, a lot of questions, for which the internet gives a lot of answers.
> >
> > Some feedback would be nice!
> >
> > Thanks,
> >
> > Wido

[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova


-- 

Jason Dillaman 


