Re: LXC + passthrough mount and host filesystem-cache

Daniel, please see my response inline.

On Thu, Mar 27, 2014 at 10:48:17AM +0000, Daniel P. Berrange wrote:
> On Wed, Mar 26, 2014 at 02:52:58PM -0500, James R. Leu wrote:
> > Hello,
> > 
> > I'm using libvirt to build/run LXC instances.  My LXC instances use
> > passthrough filesystem mounts.  When I try to do large filesystem
> > operations (e.g. tar or rsync), the filesystem cache on the host
> > spikes and causes the OOM killer to run and kill processes
> > in the LXC.
> 
> So it is specifically targeting processes inside the container
> and not the host?
> 
> If you run the same tar/rsync operations outside the container
> on the same filesystem, presumably you don't see the OOM killer
> behaviour ?

Correct.  For example, the LXC has

  <memory unit='KiB'>1048576</memory>

The processes inside the LXC are not using anywhere near the limit,
but they are growing (i.e. doing sbrk to allocate new memory).  The rsync
or tar is running inside the LXC.  If it causes the page cache to grow
by more than the LXC memory limit, then the next time a process inside the
LXC tries to allocate even a small amount of memory, the OOM killer kills
that process.

It seems like the page cache growth incurred by the rsync or tar in the
LXC is being attributed to the LXC's memory usage, but the page cache is
not being freed in favor of memory allocations by processes in the LXC.
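
A quick way to check that theory would be to compare the container's
cgroup accounting against its limit while the rsync runs.  A rough
sketch (the cgroup path below is a guess; /proc/<container-init-pid>/cgroup
on the host shows the real path):

  # cgroup v1 memory controller; adjust CG for your libvirt/LXC layout
  CG=/sys/fs/cgroup/memory/machine/mylxc.libvirt-lxc
  grep -E '^(cache|rss)' $CG/memory.stat    # page cache vs. anonymous memory
  cat $CG/memory.limit_in_bytes             # should show the 1 GiB limit above
  cat $CG/memory.failcnt                    # how often the limit was hit

If 'cache' climbs toward the limit while 'rss' stays small, that would
match what I'm describing.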

Is there anything I can do to provide/gather more information?

> 
> > Has anyone else seen this?  Is there a way around this?
> > At this point I'm resorting to running a cron job that dumps
> > the filesystem cache every 5 minutes.  The result is that the
> > filesystem cache on the host never grows too large and the OOM
> > killer never runs against LXC processes.  The obvious downfall is
> > that I'm killing my filesystem performance by dumping the cache.
> 
> This is the first I've heard of this problem, and it certainly
> seems odd/bad.  I wonder if there are some cgroup tunables that
> are being set badly, or that need to be set?  Or some global
> proc/sysfs settings?
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
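
One more data point: the cron workaround I mentioned above is nothing
more than the global drop_caches knob, roughly:

  # /etc/cron.d/drop-caches -- crude workaround: flush clean page cache
  */5 * * * * root sync; echo 1 > /proc/sys/vm/drop_caches

As for tunables, the candidates I plan to try first (guesses on my
part, not confirmed fixes) are a memory soft limit on the container,
e.g.

  # soft limit below the 1 GiB hard limit, so the kernel reclaims the
  # container's page cache under pressure (domain name is an example)
  virsh -c lxc:/// memtune mylxc --soft-limit 786432

and the global vm.vfs_cache_pressure / vm.dirty_ratio sysctls.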


-- 
James R. Leu | Director of Technology | INOC | Madison, WI, USA
O: +1-608-204-0203 | F: +1-608-663-4558 | jleu@xxxxxxxx | www.inoc.com
Service. Not Software.®

