Re: PROBLEM: Processes writing large files in memory-limited LXC container are killed by OOM

Quoting Aaron Staley (aaron@xxxxxxxxxxx):
> This is better explained here:
> http://serverfault.com/questions/516074/why-are-applications-in-a-memory-limited-lxc-container-writing-large-files-to-di
> (The
> highest-voted answer believes this to be a kernel bug.)

Yeah, sorry I haven't had time to look more into it, but I'm pretty
sure that's the case.  When you sent the previous email I looked quickly at
the dd source.  I had always assumed that dd looked at available memory
and malloced as much as it thought it could - but looking at the source,
it does not in fact do that.  So yes, I think the kernel is simply
leaving it all in page cache and accounting that to the process which
then gets OOMed.

Instead, the kernel should be throttling the task while it waits for
the page cache to be written to disk (since blkio might also be
slowed down).
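
Until the kernel does that throttling, one userspace workaround is for the
writer to keep its dirty page cache bounded itself, by periodically syncing
and then telling the kernel it no longer needs the cached pages. A minimal
Python sketch of that idea (the function name, chunk size, and flush
interval are illustrative, not from any existing tool):

```python
import os

def write_large_file(path, total_bytes, chunk=1 << 20, flush_every=16 << 20):
    """Write total_bytes of zeros to path, fsync-ing and dropping the
    page cache every flush_every bytes so a memory cgroup is not charged
    for an ever-growing cache."""
    buf = b"\0" * chunk
    written = since_flush = 0
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        while written < total_bytes:
            n = os.write(fd, buf[: min(chunk, total_bytes - written)])
            written += n
            since_flush += n
            if since_flush >= flush_every:
                os.fsync(fd)  # force the dirty pages out to disk first
                if hasattr(os, "posix_fadvise"):  # Linux-only hint
                    # ...then let the kernel reclaim the clean pages
                    os.posix_fadvise(fd, 0, written, os.POSIX_FADV_DONTNEED)
                since_flush = 0
        os.fsync(fd)
    finally:
        os.close(fd)
```

The fsync before POSIX_FADV_DONTNEED matters: fadvise only drops clean
pages, so without the sync the dirty cache would still accumulate against
the cgroup limit.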

-serge
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/containers



