PROBLEM: Processes writing large files in memory-limited LXC container are killed by OOM

This is better explained here:
http://serverfault.com/questions/516074/why-are-applications-in-a-memory-limited-lxc-container-writing-large-files-to-di
(The highest-voted answer there suggests this is a kernel bug.)

Summary: I have set up a system that uses LXC to run multiple virtualized
containers with limited resources. Unfortunately, I am running into a
troublesome scenario where the OOM killer hard-kills processes in my LXC
container when I write a file whose size exceeds the memory limit (set to
300MB). There appears to be some issue with how the file buffering (page
cache) interacts with the container's memory limit.
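While the write is running, the page cache being charged to the container can
be watched from the host. The cgroup path below is an assumption (the usual
Ubuntu cgroup v1 layout, with a container named testcon):

    cat /sys/fs/cgroup/memory/lxc/testcon/memory.limit_in_bytes
    grep -E '^(cache|active_file|inactive_file) ' /sys/fs/cgroup/memory/lxc/testcon/memory.stat

The cache/file counters climb toward the 300MB limit even though dd itself
uses almost no anonymous memory.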


Reproducing:

(done on a c1.xlarge instance running on Amazon EC2)

Create 6 empty lxc containers (in my case I did lxc-create -n testcon -t
ubuntu -- -r precise for each one, varying the container name)

Modify the configuration of each container to set
lxc.cgroup.memory.limit_in_bytes = 300M
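For concreteness, the relevant part of each container's config (the usual
location is /var/lib/lxc/<name>/config; the commented-out memsw line is an
optional extra, not something I set) looks roughly like:

    # cap the container's memory; page cache generated inside the
    # container is charged against this limit
    lxc.cgroup.memory.limit_in_bytes = 300M
    # optionally also cap memory+swap so the excess cannot spill into swap:
    # lxc.cgroup.memory.memsw.limit_in_bytes = 300M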

Within each container run (all containers in parallel):
dd if=/dev/zero of=test2 bs=100k count=5010

This will, with high probability, trigger the OOM killer (as seen in dmesg);
often the dd processes themselves are killed.
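For reference, a minimal driver script along these lines reproduces it in one
shot (the container names testcon1..testcon6 are placeholders; adjust them to
match whatever was used with lxc-create):

    #!/bin/sh
    # Kick off the same large write in every container at once.
    for i in 1 2 3 4 5 6; do
        lxc-attach -n testcon$i -- \
            dd if=/dev/zero of=/root/test2 bs=100k count=5010 &
    done
    wait
    # Then check the host kernel log for OOM-killer activity:
    dmesg | grep -i 'killed process'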

This has been verified to be a problem on:
Linux 3.8.0-25-generic #37-Ubuntu SMP
Linux ip-10-8-139-98 3.2.0-29-virtual #46-Ubuntu SMP Fri Jul 27 17:23:50
UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Please let me know your thoughts.

Regards,
Aaron Staley
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/containers



