Re: [libvirt] PATCH: Disable QEMU drive caching

On Wed, Oct 08, 2008 at 01:15:46PM +0200, Chris Lalancette wrote:
> Daniel P. Berrange wrote:
> > QEMU defaults to allowing the host OS to cache all disk I/O. This has a
> > couple of problems:
> > 
> >  - It is a waste of memory, because the guest already caches I/O ops
> >  - It is unsafe on host OS crash - all unflushed guest I/O will be
> >    lost, and there are no ordering guarantees, so metadata updates could
> >    be flushed to disk while the journal updates were not. Say goodbye
> >    to your filesystem.
> >  - It makes benchmarking more or less impossible / worthless, because
> >    what the benchmark thinks are disk writes just sit around in memory,
> >    so guest disk performance appears to exceed host disk performance.
> > 
> > This patch disables caching on all QEMU guests. NB, Xen has long done this
> > for both PV & HVM guests - QEMU only gained this ability when -drive was
> > introduced, and sadly kept the default at the unsafe cache=on setting.
> 
> I'm for this in general, but I'm a little worried about the "performance
> regression" aspect of this.  People are going to upgrade to 0.4.7 (or whatever)
> and suddenly find that their KVM guests perform much more slowly.  This is
> better for their data in the end, but we might hear loud complaints about it.

Yes & no. They will find their guests perform more consistently. With the
current system their guests perform very erratically, depending on
memory & I/O pressure on the host. If the host I/O cache is empty and
there is no I/O load, current guests will be "fast", but if the host I/O
cache is full and they do something which requires more host memory (eg
start up another guest), then all existing guests get their I/O
performance trashed as the I/O cache has to be flushed out, and future
I/O can no longer be cached.
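
(For concreteness: disabling caching means libvirt passing cache=off on
the -drive argument, which makes QEMU open the disk image O_DIRECT and
bypass the host page cache entirely. An illustrative invocation - the
image path and interface here are just placeholders:

    qemu -drive file=/var/lib/libvirt/images/guest.img,if=ide,cache=off

Guest writes then go straight to the host device instead of sitting in
host memory.)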

Xen went through this same change and there were no serious
complaints, particularly once it was explained that the previous system
had zero data integrity guarantees. The current system merely provides
an illusion of performance - any attempt to show that performance has
decreased is impossible, because running benchmarks against the
existing caching just produces meaningless garbage.

https://bugzilla.redhat.com/show_bug.cgi?id=444047

The idea that a guest can have 5x the performance of the underlying
host device is just ridiculous.

> Might it be a better idea to make the default "cache=off", but provide a toggle
> in the domain XML to turn it back to "cache=on" for the people who really want
> it and know what they are doing?

Perhaps, but that's a separate issue for discussion. The immediate need
is data integrity & consistent performance, so we can actually measure
performance going forward.
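
(If we did later add such a toggle, one plausible shape would be a cache
attribute on the disk driver element - a sketch only, the attribute and
values below are hypothetical, not a committed schema:

    <disk type='file' device='disk'>
      <driver name='qemu' cache='off'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='hda' bus='ide'/>
    </disk>

with cache=off remaining the default, so only people who know what they
are doing would turn host caching back on.)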

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
