Re: Why do we let munmap fail?

On Mon, May 21, 2018 at 4:02 PM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:

> On 05/21/2018 03:54 PM, Daniel Colascione wrote:
> >> There are also certainly denial-of-service concerns if you allow
> >> arbitrary numbers of VMAs.  The rbtree, for instance, is O(log(n)), but
> >> I'd be willing to bet there are plenty of things that fall over if you
> >> let the ~65k limit get 10x or 100x larger.
> > Sure. I'm receptive to the idea of having *some* VMA limit. I just think
> > it's unacceptable to let deallocation routines fail.

> If you have a resource limit and deallocation consumes resources, you
> *eventually* have to fail a deallocation.  Right?

That's why robust software sets aside, at allocation time, whatever
resources it needs to make forward progress at deallocation time.
That's essentially what I'm proposing here: if we specify the VMA
limit in terms of pages rather than the number of VMAs, we've
effectively "budgeted" for the worst case of VMA splitting, since
that worst case is one page per VMA.

Done this way, we still prevent runaway VMA tree growth, but we can also
make sure that anyone who's successfully called mmap can successfully call
munmap.
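
To make that concrete, here's a rough sketch of the accounting I have
in mind. The struct and field names are made up for illustration, not
existing kernel code; the point is just that the limit is charged in
pages at map time, so unmap never has to fail:

#include <errno.h>

/*
 * Hypothetical page-based budget.  Because every VMA covers at least
 * one page, a budget of N pages also bounds the VMA count at N, even
 * in the worst case where splitting leaves one page per VMA.
 */
struct mm_budget {
	unsigned long mapped_pages;	/* pages currently charged */
	unsigned long max_map_pages;	/* the page-based limit */
};

/* Called at mmap() time: the only place the limit can be hit. */
static int charge_pages(struct mm_budget *mm, unsigned long npages)
{
	if (mm->mapped_pages + npages > mm->max_map_pages)
		return -ENOMEM;
	mm->mapped_pages += npages;
	return 0;
}

/* Called at munmap() time: only returns budget, can't fail. */
static void uncharge_pages(struct mm_budget *mm, unsigned long npages)
{
	mm->mapped_pages -= npages;
}

Even if a later munmap splits an existing VMA in two, each resulting
VMA is at least a page, so it's already covered by the charge taken
when the region was mapped.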



