Re: [kvm] Questions about duplicate memory work

On Tue, Sep 27, 2011 at 12:49:29PM +0300, Avi Kivity wrote:
> On 09/27/2011 12:00 PM, Robin Lee Powell wrote:
> >On Tue, Sep 27, 2011 at 01:48:43AM -0700, Robin Lee Powell wrote:
> >>  On Tue, Sep 27, 2011 at 04:41:33PM +0800, Emmanuel Noobadmin wrote:
> >>  >  On 9/27/11, Robin Lee Powell <rlpowell@xxxxxxxxxxxxxxxxxx> wrote:
> >>  >  >  On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin
> >>  >  >  wrote:
> >>  >  >>  It's unrelated to what you're actually using as the disks,
> >>  >  >>  whether file or block devices like LVs. I think it just makes
> >>  >  >>  KVM tell the host not to cache I/O done on the storage device.
> >>  >  >
> >>  >  >  Wait, hold on, I think I had it backwards.
> >>  >  >
> >>  >  >  It tells the *host* to not cache the device in question, or the
> >>  >  >  *VMs* to not cache the device in question?
> >>  >
> >>  >  I'm fairly certain it tells qemu not to cache the device in
> >>  >  question. If you don't want the guests to cache their I/O,
> >>  >  that has to be configured in the guest OS, assuming it allows
> >>  >  it. Although I'm not sure it's even possible to disable disk
> >>  >  buffering/caching system-wide in Linux.
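(For anyone finding this in the archives: the knob being discussed is
the cache= option on qemu's -drive, or the cache attribute on the disk
<driver> element if you go through libvirt.  A sketch of the qemu-kvm
form, with the disk path just a placeholder:

    qemu-kvm ... -drive file=/dev/vg0/guest1,if=virtio,cache=none ...

cache=none bypasses the host page cache for that device; the guest's
own page cache is unaffected.)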
> >>
> >>  OK, great, thanks.
> >>
> >>  Now if I could just figure out how to stop the host from swapping
> >>  out much of the VMs' qemu-kvm procs when it has almost a GiB of RAM
> >>  left.  -_-  swappiness 0 doesn't seem to help there.
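(For the record, "swappiness 0" above means the usual sysctl; a
minimal sketch of what I'd set:

    # tell the kernel to strongly prefer reclaiming cache over swapping
    sysctl -w vm.swappiness=0
    # or, equivalently:
    echo 0 > /proc/sys/vm/swappiness

Note this is only a hint to the VM subsystem, not a hard guarantee
against swapping.)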
> >
> >Grrr.
> >
> >I turned swap off to clear it.  A few minutes ago, this host was at
> >zero swap:
> >
> >top - 01:59:10 up 10 days, 15:17,  3 users,  load average: 6.39, 4.26, 3.24
> >Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
> >Cpu(s):  6.6%us,  1.0%sy,  0.0%ni, 85.9%id,  6.3%wa,  0.0%hi,  0.2%si,  0.0%st
> >Mem:   8128772k total,  6511116k used,  1617656k free,    14800k buffers
> >Swap:  8388604k total,   672828k used,  7715776k free,    97536k cached
> >
> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> >  2504 qemu      20   0 2425m 1.8g  448 S 10.0 23.4   3547:59 qemu-kvm
> >  2258 qemu      20   0 2425m 1.7g  444 S  2.7 21.7   1288:15 qemu-kvm
> >18061 qemu      20   0 2433m 1.8g  428 S  2.3 23.4 401:01.99 qemu-kvm
> >10335 qemu      20   0 1864m 861m  456 S  1.0 10.9   2:04.26 qemu-kvm
> >[snip]
> >
> >Why is it doing this?!?  ;'(
> >
> 
> Please post the contents of /proc/meminfo and /proc/zoneinfo when
> this is happening.
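(For anyone else chasing the same thing, a one-liner to snapshot both
files while the swapping is actually happening; the output path is
just an example:

    cat /proc/meminfo /proc/zoneinfo > /tmp/meminfo.$(date +%s).txt

/proc/zoneinfo includes the per-zone free/min/low/high watermarks,
which is what makes the reclaim behavior explainable.)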

I just noticed that the amount of RAM the VMs had in VIRT added up
to considerably more than the host's actual RAM; hard_limit is now
on.  So I may not be able to replicate this.  :)
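(The arithmetic, for the archives: the four VIRT figures above sum to
roughly 2425m + 2425m + 2433m + 1864m ~= 8.9 GiB, against 8128772k
~= 7.8 GiB of physical RAM, so the kernel had every right to swap.
"hard_limit" is libvirt's memtune knob; a sketch of setting it, with
the guest name and size (in KiB by default) as examples only:

    virsh memtune guest1 --hard-limit 2621440

As I understand it, that caps the memory the guest's qemu process can
consume via its cgroup.)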

-Robin