Re: hardened memory allocator port to linux-fedora system for security

On Mon, Aug 15, 2022 at 07:39:46PM -0700, John Reiser wrote:
> On 8/13/22, Demi Marie Obenour wrote:
> > On 8/13/22, Kevin Kofler via devel wrote:
> > > martin luther wrote:
> > > > Should we implement https://github.com/GrapheneOS/hardened_malloc/?
> > > > It is a hardened memory allocator that would increase the security
> > > > of Fedora. According to the GrapheneOS team it can be ported to
> > > > Linux as well; we need to look into it.
> > 
> > CCing Daniel Micay who wrote hardened_malloc.
> > 
> > > There are several questions that come up:  [[snip]]
> 
> It seems to me that hardened_malloc could increase working set and RAM
> desired by something like 10% compared to glibc for some important workloads,
> such as Fedora re-builds.  From page 22 of [1] (attached here; 203KB), the graph
> of number of requests versus requested size shows that blocks of size <= 128
> were requested tens to thousands of times more often than all the rest.

In the lightweight configuration, hardened_malloc uses substantially less
memory for small allocations than glibc malloc.

None of the GrapheneOS or hardened_malloc developers or project members
has proposed that Fedora switch to hardened_malloc, but it would reduce
rather than increase memory usage if you used it without the slab
quarantine features. Slab canaries use extra memory too, but their
overhead is lower than glibc's metadata overhead. The sample lightweight
configuration still uses slab canaries.
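For illustration, here is a rough C sketch of the slab canary idea
(conceptual only, not hardened_malloc's actual code; the 8-byte canary
matching the 64-bit word size is an assumption for the example):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* A random value is written after each slot's usable bytes and
       verified on free, detecting linear overflows into the adjacent
       slot. This one trailing word per slot is the extra memory the
       canaries cost. */
    static uint64_t slab_canary_value; /* randomized at slab init */

    static void write_canary(void *slot, size_t usable_size) {
        memcpy((char *)slot + usable_size, &slab_canary_value,
               sizeof slab_canary_value);
    }

    static void check_canary(const void *slot, size_t usable_size) {
        uint64_t found;
        memcpy(&found, (const char *)slot + usable_size, sizeof found);
        if (found != slab_canary_value)
            abort(); /* heap overflow corrupted the canary */
    }

    int main(void) {
        slab_canary_value = 0x5a5a5a5a5a5a5a5aULL; /* stand-in for random */
        char slot[16 + sizeof slab_canary_value];  /* 16-byte slot + canary */
        write_canary(slot, 16);
        check_canary(slot, 16); /* passes; writing past byte 16 would abort */
        return 0;
    }

Compare that single trailing word per slot with glibc's per-chunk header
and it's clear why the canary overhead is the smaller of the two.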

If you bolted on a jemalloc-style array-based thread cache, or a
problematic TCMalloc-style one like the one copied into glibc, then you
would be able to get comparable performance and better scalability than
glibc malloc, but that is outside the scope of what hardened_malloc is
intended to provide. We aren't trying to serve that niche with
hardened_malloc.

That doesn't mean glibc malloc is well suited to being the chosen
allocator; that really can't be justified on technical grounds. If you
replaced glibc malloc with jemalloc, the only people who would be
unhappy are those who care about the loss of ASLR bits from chunk
alignment, and if you make the chunks small enough and configure ASLR
properly, that really doesn't matter on 64-bit. I can't think of a case
where glibc malloc would be better than jemalloc with small chunk sizes
when using either 4k pages with a 48-bit address space or larger pages.
glibc malloc's overall design is simply not competitive anymore, and it
wastes tons of memory through both metadata overhead and fragmentation.
I can't see what justification there would be for not replacing it
outright with a more modern design and adding the necessary additional
APIs, as we did ourselves for our own security-focused allocator.
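To put a number on the ASLR point (a back-of-the-envelope calculation;
the 2 MiB chunk alignment and 4 KiB pages are assumed example figures,
not a statement about any particular jemalloc configuration):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double page  = 4096.0;            /* assumed 4 KiB pages */
        double chunk = 2.0 * 1024 * 1024; /* assumed 2 MiB chunk alignment */
        /* Chunk-aligned mappings cost log2(chunk / page) bits of
           randomization relative to page-aligned mappings. */
        printf("ASLR bits lost to chunk alignment: %.0f\n",
               log2(chunk / page)); /* prints 9 */
        return 0;
    }

Nine bits, against the roughly 28 or more bits of mmap randomization
typically configurable on 64-bit, and shrinking the chunks shrinks the
loss further.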

> For sizes from 0 through 128, the "Size classes" section of README.md of [2]
> documents worst-case internal fragmentation (in "slabs") of 93.75% to 11.72%.
> That seems too high.  Where are actual measurements for workloads such as
> Fedora re-builds?

The minimum alignment is 16 bytes. In reality, glibc malloc has far more
metadata overhead, and more internal and external fragmentation, than
hardened_malloc. It puts headers on allocations, rounds to much less
fine-grained bucket sizes, and fragments all of memory with the
traditional dlmalloc-style approach. There was a time when that approach
was a massive improvement over what came before, but that time was the
90s, not 2022.
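The worst-case percentages quoted above follow directly from the 16-byte
class spacing, and a quick C loop reproduces them:

    #include <stdio.h>

    int main(void) {
        /* Size classes spaced 16 bytes apart, as in the smallest slab
           classes (16..128). The worst case for a class is the
           smallest request that still rounds up to it. */
        for (int sc = 16; sc <= 128; sc += 16) {
            int worst = sc - 16 + 1;
            printf("class %3d: request %3d wastes %5.2f%%\n",
                   sc, worst, 100.0 * (sc - worst) / sc);
        }
        return 0;
    }

That prints 93.75% for a 1 byte request in the 16 byte class down to
11.72% for a 113 byte request in the 128 byte class, the same endpoints
as the README table, and a header-based allocator pays its
per-allocation header on top of the same rounding.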

> (Also note that the important special case of malloc(0), which is analogous
> to (gensym) of Lisp and is implemented internally as malloc(1), consumes
> 16 bytes and has a fragmentation of 93.75% for both glibc and hardened_malloc.
> The worst fragmentation happens for *every* call to malloc(0), which occurred
> about 800,000 times in the sample.  Yikes!)

glibc malloc's headers give it more than 100% pure overhead for a 16
byte allocation. It cannot do finer-grained rounding than we do for 16
through 128 bytes, and sticking headers on allocations makes it far
worse. It gets worse still for aligned allocations, such as the common
64 byte aligned ones: with slab allocation, every allocation up to the
page size already has its natural alignment (64 byte alignment for the
64 byte class, 128 for 128, 256 for 256, and so on).

Zero-byte allocations don't really make sense to compare, because in
hardened_malloc malloc(0) returns a pointer to non-allocated pages with
PROT_NONE memory protection.
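A minimal demonstration (the commented-out store is undefined behavior
with any allocator, but only a PROT_NONE mapping like hardened_malloc's
guarantees it faults immediately):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char *p = malloc(0); /* unique non-NULL pointer on glibc and
                                hardened_malloc alike */
        printf("malloc(0) -> %p\n", (void *)p);
        /* *p = 'x';  under hardened_malloc this would SIGSEGV, since
           p points into PROT_NONE memory */
        free(p); /* valid to free either way */
        return 0;
    }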