Re: [PATCH v4 bpf 0/4] vmalloc: bpf: introduce VM_ALLOW_HUGE_VMAP

On Thu, Apr 21, 2022 at 7:29 PM Edgecombe, Rick P
<rick.p.edgecombe@xxxxxxxxx> wrote:
>
> FWIW, I like this direction. I think it needs to free them differently
> though?

Very much so.

> Besides fixing the bisected issue (hopefully), it also more cleanly
> separates the mapping from the backing allocation logic. And then since
> all the pages are 4k (from the page allocator perspective), it would be
> easier to support non-huge page aligned sizes. i.e. not use up a whole
> additional 2MB page if you only need 4k more of allocation size.

I don't disagree, but I think the real problem is that the whole "one
page_order per vmalloc() area" model is itself a bit broken.

For example, AMD already does this "automatic TLB size" thing for when
you have multiple contiguous PTE entries (shades of the old alpha
"page size hint" thing, except it's automatic and doesn't have
explicit hints).

And I'm hoping Intel will do something similar in the future.

End result? It would actually be really good to just map contiguous
pages, but it doesn't have anything to do with the 2MB PMD size.

And there's no "fixed order" needed either. If you have a mapping that
is 17 pages in size, it would still be good to allocate it as a
block of 16 pages ("page_order = 4") plus a single page, because
just laying them out in the page tables that way will already allow
AMD to use a 64kB TLB entry for that 16-page block.

But it would also work to just do the allocations as a set of 8, 4, 4 and 1.

But the whole "one page order for one vmalloc" means that doesn't work
very well.
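
To make that concrete, here's a toy userspace sketch of that kind of
per-block order selection - not the actual vmalloc code, and MAX_ORDER
and the 17-page request are just placeholder numbers - that picks the
largest power-of-two block that still fits, so 17 pages come out as
16 + 1:

#include <stdio.h>

#define MAX_ORDER 9	/* assumed cap: roughly a PMD's worth of 4k pages */

int main(void)
{
	unsigned long remaining = 17;	/* pages still to place */

	while (remaining) {
		unsigned int order = MAX_ORDER;

		/* pick the largest power-of-two block that still fits */
		while ((1UL << order) > remaining)
			order--;

		printf("allocate and map a block of %lu pages (order %u)\n",
		       1UL << order, order);
		remaining -= 1UL << order;
	}
	return 0;
}

A real implementation would grab each block from the page allocator at
that order (falling back to smaller orders under fragmentation, which
is where a layout like 8+4+4+1 would come from) and map it
contiguously so the hardware coalescing can kick in.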

Where I disagree (violently) with Nick is his contention that (a) this
is x86-specific and (b) this is somehow trivial to fix.

Let's face it - the current code is broken. I think the sub-page issue
is not entirely trivial, and the current design isn't even very good
for it.

But the *easy* cases are the ones that simply don't care - the ones
that powerpc has actually been testing.

So for 5.18, I think it's quite likely reasonable to re-enable
large-page vmalloc for the easy case (ie those big hash tables).

Re-enabling it *all*, considering how broken it has been, and how
little testing it has clearly gotten? And potentially not enabling it
on x86 because x86 is so much better at showing issues? That's not
what I want to do.

If the code is so broken that it can't be used on x86, then it's too
broken to be enabled on powerpc and s390 too. Never mind that those
architectures might see such limited use that they never realized how
broken they were..

                 Linus



