RE: [Qemu-devel] [QEMU 3/7] Add the hmp and qmp interface for dropping cache

> On Mon, Jun 13, 2016 at 11:50:08AM +0100, Daniel P. Berrange wrote:
> > On Mon, Jun 13, 2016 at 06:16:45PM +0800, Liang Li wrote:
> > > Add the hmp and qmp interface to drop vm's page cache, users can
> > > control the type of cache they want vm to drop.
> > >
> > > Signed-off-by: Liang Li <liang.z.li@xxxxxxxxx>
> > > ---
> > >  balloon.c        | 19 +++++++++++++++++++
> > >  hmp-commands.hx  | 15 +++++++++++++++
> > >  hmp.c            | 22 ++++++++++++++++++++++
> > >  hmp.h            |  3 +++
> > >  monitor.c        | 18 ++++++++++++++++++
> > >  qapi-schema.json | 35 +++++++++++++++++++++++++++++++++++
> > >  qmp-commands.hx  | 23 +++++++++++++++++++++++
> > >  7 files changed, 135 insertions(+)
> >
> > > diff --git a/qapi-schema.json b/qapi-schema.json
> > > index 8483bdf..117f70a 100644
> > > --- a/qapi-schema.json
> > > +++ b/qapi-schema.json
> > > @@ -1655,6 +1655,41 @@
> > >  { 'command': 'balloon', 'data': {'value': 'int'} }
> > >
> > >  ##
> > > +# @DropCacheType
> > > +#
> > > +# Cache types enumeration
> > > +#
> > > +# @clean: Drop the clean page cache.
> > > +#
> > > +# @slab: Drop the slab cache.
> > > +#
> > > +# @all: Drop both the clean and the slab cache.
> > > +#
> > > +# Since: 2.7
> > > +##
> > > +{ 'enum': 'DropCacheType', 'data': ['clean', 'slab', 'all'] }
> >
> > Presumably these constants correspond to the 3 options for the
> > vm.drop_caches sysctl knob
> >
> > [quote]
> > To free pagecache, use:
> >
> >   echo 1 > /proc/sys/vm/drop_caches
> >
> > To free dentries and inodes, use:
> >
> >   echo 2 > /proc/sys/vm/drop_caches
> >
> > To free pagecache, dentries and inodes, use:
> >
> >   echo 3 > /proc/sys/vm/drop_caches
> >
> > Because writing to this file is a nondestructive operation and dirty
> > objects are not freeable, the user should run sync(1) first.
> > [/quote]
> >
> > IOW, by 'slab' you mean dentries and inodes?
> >
> > > +
> > > +##
> > > +# @balloon_drop_cache:
> > > +#
> > > +# Request the vm to drop its cache.
> > > +#
> > > +# @value: the type of cache want vm to drop
> > > +#
> > > +# Returns: Nothing on success
> > > +#          If the balloon driver is enabled but not functional because the KVM
> > > +#            kernel module cannot support it, KvmMissingCap
> > > +#          If no balloon device is present, DeviceNotActive
> > > +#
> > > +# Notes: This command just issues a request to the guest.  When it returns,
> > > +#        the drop cache operation may not have completed.  A guest can drop its
> > > +#        cache independent of this command.
> > > +#
> > > +# Since: 2.7.0
> > > +##
> > > +{ 'command': 'balloon_drop_cache', 'data': {'value': 'DropCacheType'} }
> >
> > Also, as noted in the man page quote above, it is recommended to call
> > sync() to minimise dirty pages. Should we have a way to request a sync
> > as part of this monitor command?
> >
> > More generally, it feels like this is taking us down a path towards
> > actively managing the guest kernel VM from the host. Is this really a
> > path we want to be going down, given that it's going to take us into
> > increasingly non-portable concepts which are potentially different for
> > each guest OS kernel?  Is this drop caches feature at all applicable
> > to Windows, OS X, *BSD guest OS impls of the balloon driver? If it is
> > applicable, are the 3 fixed constants you've defined at all useful to
> > those other OSes?
> >
> > I'm wary of us taking a design path which is so Linux specific it
> > isn't useful elsewhere. IOW, just because we can do this, doesn't mean
> > we should do this...
> 
> Also, I'm wondering about the overall performance benefit of dropping guest
> cache(s). Increasing the number of free memory pages may have a benefit in
> terms of reducing data that needs to be migrated, but it comes with the
> penalty that, if the guest OS needs that data, it will have to repopulate the
> caches.
> 
> If the guest is merely reading those cached pages, it isn't going to cause any
> problem with the chances of convergence of migration, as clean pages will be
> copied only once during migration. IOW, dropping clean pages will reduce the
> total memory that needs to be copied, but won't have a notable effect on
> convergence of live migration. Cache pages that are dirty will potentially
> affect live migration convergence, if the guest OS re-dirties the pages before
> they're flushed to storage. Dropping caches won't help in this respect though,
> since you can't drop dirty pages. At the same time it will have a potentially
> significant negative penalty on guest OS performance by forcing the guest to
> re-populate the cache from slow underlying storage.  I don't think there's
> enough info exposed by KVM about the guest OS to be able to figure out
> what kind of situation we're in wrt the guest OS cache usage.
> 
> Based on this I think it is hard to see how a host mgmt app can make a
> well-informed decision about whether telling the guest OS to drop caches is a
> positive thing overall. In fact I think it is most likely that a mgmt app
> would take a pessimistic view and not use this functionality, because there's
> no clearly positive impact on migration convergence and a high likelihood of
> negatively impacting guest performance.
> 
> Regards,
> Daniel

Thanks for your detailed analysis.
I did some tests and found that dropping the clean page cache can speed up live
migration, while dropping the dirty page cache makes it slower.
The reason I added more options than just the clean cache was for the sake of
completeness, but I agree it is too Linux specific.

How about just dropping the clean page cache? Is that still too Linux specific?
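
For that case, the guest-side action would essentially be the "sync; echo 1 >
/proc/sys/vm/drop_caches" sequence from the documentation you quoted. Just to
illustrate what I mean (a rough sketch only, not code from this series, and the
virtio-balloon plumbing that would trigger it in the guest is not shown):

    /* Hypothetical guest-side helper: flush dirty data first, then ask the
     * kernel to drop only the clean page cache, i.e. the equivalent of
     * "sync; echo 1 > /proc/sys/vm/drop_caches" (needs root). */
    #include <stdio.h>
    #include <unistd.h>

    static int drop_clean_page_cache(void)
    {
        FILE *f;

        sync();  /* flush dirty pages so more of the page cache is freeable */

        f = fopen("/proc/sys/vm/drop_caches", "w");
        if (!f) {
            return -1;
        }
        /* "1" frees the page cache only; "2" would free dentries/inodes,
         * "3" both, per Documentation/sysctl/vm.txt. */
        if (fputs("1\n", f) == EOF) {
            fclose(f);
            return -1;
        }
        return fclose(f) == 0 ? 0 : -1;
    }

    int main(void)
    {
        return drop_clean_page_cache() == 0 ? 0 : 1;
    }

Only clean pages get freed this way, which matches the case that helped in my
tests.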

Liang

> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
