On Mon, Apr 17, 2023 at 03:08:34PM -0700, Luis Chamberlain wrote:
> On Mon, Apr 17, 2023 at 05:33:49PM +0000, Edgecombe, Rick P wrote:
> > On Sat, 2023-04-15 at 23:41 -0700, Luis Chamberlain wrote:
> > > On Sat, Apr 15, 2023 at 11:04:12PM -0700, Christoph Hellwig wrote:
> > > > On Thu, Apr 13, 2023 at 10:28:40PM -0700, Luis Chamberlain wrote:
> > > > > With this we run into 0 wasted virtual memory bytes.
> > > >
> > > > Avoid what duplicates?
> > >
> > > David Hildenbrand had reported that with over 400 CPUs vmap space
> > > runs out, and it seemed related to module loading. I took a look
> > > and confirmed it. Module loading requires in the worst case 3
> > > vmalloc allocations, so typically at least twice the module size,
> > > and in the worst case add the decompressed module size on top:
> > >
> > > a) the initial kernel_read*() call
> > > b) optional module decompression
> > > c) the actual module data copy we will keep
> > >
> > > Duplicate module requests that come from userspace end up being
> > > thrown in the trash bin, as only one module will be allocated.
> > > Although there are checks for a module prior to requesting it,
> > > udev still doesn't do a great job of avoiding that, and so we end
> > > up with tons of duplicate module requests. We're talking about
> > > gigabytes of vmalloc bytes just lost because of this on large
> > > systems and megabytes on average systems. For example, with just
> > > 255 CPUs we can lose about 13.58 GiB, and with 8 CPUs about
> > > 226.53 MiB.
> > >
> > > I have patches to curtail 1/2 of that space by checking in the
> > > kernel, before we do the allocation in c), whether the module is
> > > already present. For a) it is harder because userspace just passes
> > > a file descriptor. But since we can get the file path without the
> > > vmalloc, this RFC suggests we could add a new kernel_read*() for
> > > module loading where it makes sense to have only one read happen
> > > at a time.
> >
> > I'm wondering how difficult it would be to just try to remove the
> > vmallocs in (a) and (b) and operate on a list of pages.
>
> Yes, I think it's worth doing that long term, if possible with seq reads.

OK, here's what I suggest we do then:

I'll resubmit the first patch, which lets us prove / disprove whether
module auto-loading is the culprit. With that in place folks can debug
their setup and verify how udev is to blame.

I'll drop the second kernel_read*() patch / effort and punt this as a
userspace problem, as it is also not extremely pressing.

Long term we should evaluate how we can avoid the vmalloc for the kread
and for module decompression.

If this really becomes a pressing issue we can revisit whether we want
an in-kernel solution, but at this point that would likely be systems
with over 400-500 CPUs with KASAN enabled. Without KASAN the issue
should eventually trigger if you're enabling modules, but it's hard to
say at what point you'd hit it.

  Luis
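
P.S. For illustration only, below is a minimal, hypothetical sketch of the
kind of early in-kernel check described for c) above: bail out of a
duplicate request before paying for the final module copy. This is not the
actual patch; it only catches modules that have already finished loading,
and the helper name early_mod_check() is made up. It only relies on the
existing find_module() helper and module_mutex from <linux/module.h>.

#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/errno.h>

/*
 * Hypothetical helper: return -EEXIST if a live module with this name
 * already exists, so the caller can skip the final vmalloc copy in c).
 * find_module() must be called with module_mutex held.
 */
static int early_mod_check(const char *name)
{
	struct module *mod;

	mutex_lock(&module_mutex);
	mod = find_module(name);
	mutex_unlock(&module_mutex);

	return mod ? -EEXIST : 0;
}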