Re: [PATCH v4 bpf 0/4] vmalloc: bpf: introduce VM_ALLOW_HUGE_VMAP

On Thu, 2022-04-21 at 19:47 -0700, Linus Torvalds wrote:
> I don't disagree, but I think the real problem is that the whole "one
> page_order per vmalloc() area" itself is a bit broken.

Yea, AFAICT that is the main reason it has to round up to huge page
sizes. I'd really like to see it use memory a little more efficiently
if it is going to be an opt-out thing again.

> 
> For example, AMD already does this "automatic TLB size" thing for when
> you have multiple contiguous PTE entries (shades of the old alpha
> "page size hint" thing, except it's automatic and doesn't have
> explicit hints).
> 
> And I'm hoping Intel will do something similar in the future.
> 
> End result? It would actually be really good to just map contiguous
> pages, but it doesn't have anything to do with the 2MB PMD size.
> 
> And there's no "fixed order" needed either. If you have mapping that
> is 17 pages in size, it would still be good to allocate them as a
> block of 16 pages ("page_order = 4") and as a single page, because
> just laying them out in the page tables that way will already allow
> AMD to use a 64kB TLB entry for that 16-page block.
> 
> But it would also work to just do the allocations as a set of 8, 4, 4
> and 1.

Hmm, that's neat.
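
Just to check that I follow, here is a minimal sketch of that greedy
split (plain userspace C; split_into_runs() and everything else here is
hypothetical, and the real allocator would hand each run to the page
allocator at that order rather than print it):

	/*
	 * Split an arbitrary allocation into power-of-two runs,
	 * largest first, so e.g. 17 pages becomes 16 + 1 and the
	 * 16-page run can be laid out contiguously in the page
	 * tables for the TLB coalescing you describe.
	 */
	#include <stdio.h>

	static void split_into_runs(unsigned int nr_pages)
	{
		while (nr_pages) {
			unsigned int order = 0, tmp = nr_pages;

			/* order of the largest power-of-two run that fits */
			while (tmp >>= 1)
				order++;

			/* the real code would allocate 1 << order pages here */
			printf("run: %u pages (order %u)\n", 1U << order, order);
			nr_pages -= 1U << order;
		}
	}

	int main(void)
	{
		split_into_runs(17);	/* a 16-page run, then a 1-page run */
		return 0;
	}

And since each run is a natural power-of-two block, the 8 + 4 + 4 + 1
layout you mention would just be a different policy in the same loop.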

> 
> But the whole "one page order for one vmalloc" means that doesn't
> work very well.
> 
> Where I disagree (violently) with Nick is his contention that (a) this
> is x86-specific and (b) this is somehow trivial to fix.
> 
> Let's face it - the current code is broken. I think the sub-page issue
> is not entirely trivial, and the current design isn't even very good
> for it.
> 
> But the *easy* cases are the ones that simply don't care - the ones
> that powerpc has actually been testing.
> 
> So for 5.18, I think it's quite likely reasonable to re-enable
> large-page vmalloc for the easy case (ie those big hash tables).
> 
> Re-enabling it *all*, considering how broken it has been, and how
> little testing it has clearly gotten? And potentially not enabling it
> on x86 because x86 is so much better at showing issues? That's not
> what I want to do.
> 
> If the code is so broken that it can't be used on x86, then it's too
> broken to be enabled on powerpc and s390 too. Never mind that those
> architectures might have so limited use that they never realized how
> broken they were..

I think there is another cross-arch issue here that we shouldn't lose
sight of: the code carries too few warnings about the assumptions it
makes about each architecture. The other issues are x86-specific in
terms of who gets affected in rc1, but I dug up this prophetic
assessment:

https://lore.kernel.org/lkml/4488d39f-0698-7bfd-b81c-1e609821818f@xxxxxxxxx/

That is pretty much what happened. Song came along and took the code,
in its current state, as a knob that could just be flipped. It seems
pretty reasonable that the same thing could happen again.

So IMHO, the other general issue is the lack of guard rails or warnings
for the next architecture that comes along. VM_FLUSH_RESET_PERMS should
probably get some warnings as well.

I kind of like the idea in that thread of adding functions or config
options that force architectures to explicitly declare which properties
they support, as sketched below. Does that seem reasonable at this
point? It's probably not necessary as a 5.18 fix, though.
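
Something like this, purely as a sketch: the config symbol and helper
names below are made up, and the real mechanism would presumably be a
Kconfig symbol the arch selects, along the lines of the existing
HAVE_ARCH_HUGE_VMALLOC:

	#include <linux/bug.h>
	#include <linux/types.h>
	#include <linux/vmalloc.h>

	/* Hypothetical: only set when an arch explicitly opts in */
	#ifdef CONFIG_ARCH_DECLARES_HUGE_VMALLOC
	#define arch_supports_huge_vmalloc()	true
	#else
	#define arch_supports_huge_vmalloc()	false
	#endif

	static bool vmalloc_may_use_huge(unsigned long vm_flags)
	{
		/*
		 * Permissions on these areas get changed and later
		 * reset on free, which is exactly where sub-PAGE_SIZE
		 * protection changes on a huge mapping go wrong.
		 */
		if (vm_flags & VM_FLUSH_RESET_PERMS)
			return false;

		/* Loud guard rail for the next arch to flip the knob */
		if (!arch_supports_huge_vmalloc()) {
			WARN_ONCE(1, "huge vmalloc used without arch opt-in");
			return false;
		}

		return true;
	}

Even just the WARN_ONCE() would have turned the silent breakage into a
report on the first boot of the next architecture that enables it.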




