Re: init_on_alloc digression: [LSF/MM/BPF TOPIC] Dropping page cache of individual fs

On 2/16/24 15:38, John Hubbard wrote:
> On 2/15/24 17:14, Adrian Vovk wrote:
> ...
>>> Typical distro configuration is:
>>>
>>> $ sudo dmesg |grep auto-init
>>> [    0.018882] mem auto-init: stack:all(zero), heap alloc:on, heap free:off
>>> $
>>>
>>> So this kernel zeroes all stack memory, page and heap memory on
>>> allocation, and does nothing on free...

>> I see. Thank you for all the information.
>>
>> So ~5% performance penalty isn't trivial, especially to protect against

> And it's more like 600% or more, on some systems. For example, imagine if
> someone had a memory-coherent system that included both CPUs and GPUs,
> each with their own NUMA memory nodes. The GPU has fast DMA engines that
> can zero a lot of that memory very, very quickly, order(s) of magnitude
> faster than the CPU can clear it.
>
> So, the GPU driver is going to clear that memory before handing it
> out to user space, and all is well so far.
>
> But init_on_alloc forces the CPU to clear the memory first, because of
> the belief here that this is somehow required in order to get defense
> in depth. (True, if you can convince yourself that some parts of the
> kernel are in a different trust boundary than others. I lack faith
> here and am not a believer in such make-believe boundaries.)

As far as I can tell, init_on_alloc isn't about drawing a trust boundary
between parts of the kernel, but about hardening the kernel against
developer mistakes, i.e. forgetting to initialize some memory. If the
memory isn't zeroed and the developer forgets to initialize it, then
memory that is potentially under user control (from the page cache or
similar) can end up steering the kernel's flow of execution. Zeroing the
memory on allocation therefore provides a second layer of defense for
exactly the situations where the first layer (not using uninitialized
memory) has failed. Hence, defense in depth.
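
To make the bug class concrete, here's a minimal sketch of the kind of
mistake I mean. It's entirely hypothetical (every name in it is made up
for illustration; it's not from any real driver):

#include <linux/slab.h>

/* Hypothetical example; all names invented for illustration. */
struct foo_ctx {
	int privileged;				/* 0 = no, 1 = yes */
	void (*on_close)(struct foo_ctx *ctx);
};

static struct foo_ctx *foo_ctx_create(void)
{
	/*
	 * Bug: kmalloc() returns uninitialized memory, and the fields
	 * are never set. Whatever happened to be in this heap chunk
	 * before (possibly data an attacker got to choose) now decides
	 * whether the context counts as "privileged" and which
	 * function pointer gets called later.
	 */
	struct foo_ctx *ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return NULL;
	/* ctx->privileged = 0; ctx->on_close = NULL;  <-- forgotten */
	return ctx;
}

With init_on_alloc=1 the allocator hands out a zeroed chunk, so
->privileged is 0 and ->on_close is NULL instead of leftover data. The
real fix is of course kzalloc() or explicit initialization, which is
exactly the "first layer" above.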

Is this just an NVIDIA embedded thing (AFAIK your desktop/laptop cards
don't share memory with the CPU), or would it affect something like
Intel/AMD APUs as well?

If the GPU is so much faster at zeroing out blocks of memory on these
systems, maybe the kernel should use the GPU's DMA engine whenever it
needs to zero out some memory. (I'm joking, mostly; I can imagine it's
not quite so simple.)

> Anyway, this situation has wasted much time, and at this point, I
> wish I could delete the whole init_on_alloc feature.
>
> Just in case you wanted an alt perspective. :)

This is all good to know, thanks.

I'm not particularly interested in init_on_alloc since it doesn't help
against cold-boot scenarios. Does init_on_free have similar performance
issues on such systems? (I.e., are you often freeing memory and then
immediately allocating the same memory in the GPU driver?)

Either way, I'd much prefer to have both turned off, and to only zero
out freed memory periodically / on user request, not on every single
allocation and free.
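
(For anyone following along: as far as I can tell, both behaviors boil
down to a pair of static branches that the allocator consults on every
allocation and free. This is a simplified, from-memory paraphrase of
include/linux/mm.h, so double-check it against an actual tree:)

/* Paraphrased/simplified from include/linux/mm.h; not verbatim.
 * The init_on_alloc/init_on_free static branches are set from the
 * CONFIG_INIT_ON_{ALLOC,FREE}_DEFAULT_ON build options or from the
 * init_on_alloc=/init_on_free= boot parameters.
 */
static inline bool want_init_on_alloc(gfp_t flags)
{
	if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
				&init_on_alloc))
		return true;
	return flags & __GFP_ZERO;
}

static inline bool want_init_on_free(void)
{
	return static_branch_maybe(CONFIG_INIT_ON_FREE_DEFAULT_ON,
				   &init_on_free);
}

So with both branches off, the only zeroing left is the __GFP_ZERO /
kzalloc() kind that callers explicitly ask for.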

> thanks,

Best,
Adrian




