Re: Why do we let munmap fail?

On 05/21/2018 04:16 PM, Daniel Colascione wrote:
> On Mon, May 21, 2018 at 4:02 PM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
> 
>> On 05/21/2018 03:54 PM, Daniel Colascione wrote:
>>>> There are also certainly denial-of-service concerns if you allow
>>>> arbitrary numbers of VMAs.  The rbtree, for instance, is O(log(n)), but
>>>> I'd be willing to bet there are plenty of things that fall over if you
>>>> let the ~65k limit get 10x or 100x larger.
>>> Sure. I'm receptive to the idea of having *some* VMA limit. I just think
>>> it's unacceptable to let deallocation routines fail.
>> If you have a resource limit and deallocation consumes resources, you
>> *eventually* have to fail a deallocation.  Right?
> That's why robust software sets aside at allocation time whatever resources
> are needed to make forward progress at deallocation time.

I think there's still a potential dead-end here.  "Deallocation" does
not always free resources: unmapping the middle of a mapping splits one
VMA into two, so the munmap() actually *consumes* a vm_area_struct.
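
A minimal userspace sketch of that case (my example, not from the
thread): punching a hole in a mapping forces the kernel to allocate a
second vm_area_struct on the way out, which is why munmap() can return
ENOMEM today:

/* One mapping becomes two VMAs when its middle is unmapped. */
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);

        /* One mmap() call, one VMA. */
        char *p = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        /* Punch a hole in the middle: the single VMA must become two,
         * which requires allocating a new vm_area_struct. */
        return munmap(p + page, page) ? 1 : 0;
}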

> That's what I'm trying to propose here, essentially: if we specify
> the VMA limit in terms of pages and not the number of VMAs, we've
> effectively "budgeted" for the worst case of VMA splitting, since in
> the worst case, you end up with one page per VMA.

Not a bad idea, but it's not really how we allocate VMAs today.  You
would somehow need per-process (mm?) slabs.  Such a scheme would
probably, on average, waste half of a page per mm.
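
For concreteness, a hypothetical sketch of the page-denominated
accounting Daniel describes (the struct and names are invented here,
this is not a proposed patch).  The invariant it leans on: every VMA
spans at least one page, so nr_vmas <= mapped_pages <= limit_pages
holds no matter how munmap() splits things:

#include <stdbool.h>
#include <stddef.h>

struct vma_budget {
        size_t mapped_pages;    /* pages currently mapped in this mm */
        size_t limit_pages;     /* the page-denominated VMA limit */
};

/* mmap() path: the only place the budget can be exceeded, hence the
 * only place that fails. */
static bool budget_charge(struct vma_budget *b, size_t pages)
{
        if (b->mapped_pages + pages > b->limit_pages)
                return false;
        b->mapped_pages += pages;
        return true;
}

/* munmap() path: splitting a VMA adds a VMA but no pages, so it needs
 * no new budget and cannot fail; unmapped pages are simply returned. */
static void budget_release(struct vma_budget *b, size_t pages)
{
        b->mapped_pages -= pages;
}

The catch is that the budget only bounds the *count*; the
vm_area_structs themselves still have to come from somewhere at split
time, which is where the per-mm slab (and its reserved memory) would
come in.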

> Done this way, we still prevent runaway VMA tree growth, but we can also
> make sure that anyone who's successfully called mmap can successfully call
> munmap.

I'd be curious how this works out, but I bet you end up reserving a lot
more resources than people want.
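
To put a rough number on it (my arithmetic, and assuming the budget is
backed by preallocated vm_area_structs as in the slab scheme above): at
roughly 200 bytes per vm_area_struct and 4 KiB pages, prepaying one VMA
per mapped page reserves about 5% of the mapping's size, so a single
1 GiB mapping sets aside on the order of 50 MB for splits that almost
never happen.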
