Re: [PATCH] Repeated fork() causes SLAB to grow without bound

On Wed, Nov 19, 2014 at 1:15 AM, Konstantin Khlebnikov <koct9i@xxxxxxxxx> wrote:
> On Tue, Nov 18, 2014 at 11:19 PM, Andrew Morton
> <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>> On Mon, 17 Nov 2014 21:41:57 -0500 Rik van Riel <riel@xxxxxxxxxx> wrote:
>>
>>> > Because of the serial forking there does indeed end up being an
>>> > infinite number of vmas.  The initial vma can never be deleted
>>> > (even though the initial parent process has long since terminated)
>>> > because the initial vma is referenced by the children.
>>>
>>> There is a finite number of VMAs, but an infinite number of
>>> anon_vmas.
>>>
>>> Subtle, yet deadly...
>>
>> Well, we clearly have the data structures screwed up.  I've forgotten
>> enough about this code that I can't work out what the fixed-up data
>> structures would look like :( But surely there is some proper
>> solution here.  Help?
>
> Not sure if it's right, but perhaps on fork we could reuse an old anon_vma
> from the chain if it has already lost all vmas which point to it.
> For an endlessly forking exploit this should work much like the proposed
> patch which stops branching after some depth, but without the magic constant.

Something like this. I'll write a proper comment tomorrow.
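
Roughly, the idea: in anon_vma_fork(), before allocating a fresh anon_vma
for the child, walk the chain we have just cloned and reuse an anon_vma
which no longer has any vma mapped into it.  A minimal sketch of such a
helper (this is not the attached patch; the num_active_vmas counter is an
assumption here, the real patch may track "lost all vmas" differently):

	/*
	 * Sketch: find an anon_vma in the freshly cloned chain which has
	 * lost all of its vmas, so the child can take it over instead of
	 * allocating a new anon_vma on every fork.
	 */
	static struct anon_vma *find_reusable_anon_vma(struct vm_area_struct *vma)
	{
		struct anon_vma_chain *avc;

		list_for_each_entry(avc, &vma->anon_vma_chain, same_vma) {
			struct anon_vma *anon_vma = avc->anon_vma;

			/* assumed counter: vmas still mapping this anon_vma */
			if (anon_vma->num_active_vmas == 0)
				return anon_vma;
		}
		return NULL;
	}

anon_vma_fork() would then point dst->anon_vma at the reused one and take a
reference, falling back to anon_vma_alloc() only when nothing in the chain
can be reused.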

>
>>
>>> > I can't say, but it only affects users who fork more than five
>>> > levels deep without doing an exec.  On the other hand, there are at
>>> > least three users (Tim Hartrick, Michal Hocko, and myself) who have
>>> > real-world applications where, without a patch, the consequence is a
>>> > crashed system.
>>> >
>>> > I would suggest reading the thread starting with my initial bug
>>> > report for what others have had to say about this.
>>>
>>> I suspect what Andrew is hinting at is that the
>>> changelog for the patch should contain a detailed
>>> description of exactly what the bug is, how it is
>>> triggered, what the symptoms are, and how the
>>> patch avoids it.
>>>
>>> That way people can understand what the code does
>>> simply by looking at the changelog - no need to go
>>> find old linux-kernel mailing list threads.
>>
>> Yes please, there's a ton of stuff here which we should attempt to
>> capture.
>>
>> https://lkml.org/lkml/2012/8/15/765 is useful.
>>
>> I'm assuming that with the "foo < 5" hack, an application which forked
>> 5 times then did a lot of work would still trigger the "catastrophic
>> issue at page reclaim time" which Rik identified at
>> https://lkml.org/lkml/2012/8/20/265?
>>
>> There are real-world workloads which are triggering this slab growth
>> problem, yes?  (Detail them in the changelog, please).
>>
>> This bug snuck under my radar last time - we're permitting unprivileged
>> userspace to exhaust memory and that's bad.  I'm OK with the foo<5
>> thing for -stable kernels, as it is simple.  But I'm reluctant to merge
>> (or at least to retain) it in mainline because then everyone will run
>> away and think about other stuff and this bug will never get fixed
>> properly.
>>

Attachment: mm-reuse-old-anon_vma-if-it-s-lost-all-vmas
Description: Binary data

