Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize

On Wed, Jul 10, 2024 at 10:51 AM Catalin Marinas
<catalin.marinas@xxxxxxx> wrote:
>
> On Fri, Jul 05, 2024 at 11:41:34AM -0600, Yu Zhao wrote:
> > On Fri, Jul 5, 2024 at 9:49 AM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > > If I did the maths right, for a 2MB hugetlb page, we have about 8
> > > vmemmap pages (32K). Once we split a 2MB vmemmap range,
> >
> > Correct.
> >
> > > whatever else
> > > needs to be touched in this range won't require a stop_machine().
> >
> > There might be some misunderstandings here.
> >
> > To do HVO:
> > 1. we split a PMD into 512 PTEs;
> > 2. for every 8 PTEs:
> >   2a. we allocate an order-0 page for PTE #0;
> >   2b. we remap PTE #0 *RW* to this page;
> >   2c. we remap PTEs #1-7 *RO* to this page;
> >   2d. we free the original order-3 page.
>
> Thanks. I now remember why we reverted such support in 060a2c92d1b6
> ("arm64: mm: hugetlb: Disable HUGETLB_PAGE_OPTIMIZE_VMEMMAP"). The main
> problem is that point 2c also changes the output address of the PTE
> (and the content of the page slightly). The architecture requires a
> break-before-make in such a scenario, though it would have been nice if it
> was more specific on what could go wrong.
>
> We can do point 1 safely if we have FEAT_BBM level 2. For point 2, I
> assume these 8 vmemmap pages may be accessed and that's why we can't do
> a break-before-make safely.

Correct.
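
To make the concern concrete, here is a throwaway userspace model of what
one 8-PTE group goes through in step 2 above (not kernel code; the struct,
helper and pfn values are made up). The point is that 2b changes the output
address of a live RW entry, and 2c changes both the output address and the
permission of live entries:

/* Throwaway userspace model of HVO (step 2) for one group of 8 vmemmap PTEs.
 * Not kernel code: the struct, pfns and helper are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

struct model_pte {
	unsigned long pfn;	/* output address (page frame number) */
	bool writable;		/* RW vs RO */
};

static void dump(const char *when, const struct model_pte pte[8])
{
	printf("%s:\n", when);
	for (int i = 0; i < 8; i++)
		printf("  entry %d -> pfn 0x%lx %s\n",
		       i, pte[i].pfn, pte[i].writable ? "RW" : "RO");
}

int main(void)
{
	struct model_pte pte[8];
	const unsigned long order3_base = 0x1000;  /* original order-3 vmemmap page */
	const unsigned long order0_pfn  = 0x2000;  /* page allocated in step 2a */

	/* After step 1 (PMD split), each entry maps its own 4K vmemmap page RW. */
	for (int i = 0; i < 8; i++)
		pte[i] = (struct model_pte){ .pfn = order3_base + i, .writable = true };
	dump("before", pte);

	/* 2b: entry 0 is remapped RW to the new page: the output address
	 *     changes while the entry is live. */
	pte[0].pfn = order0_pfn;

	/* 2c: entries 1-7 are remapped RO to the same page: output address
	 *     and permission change while the entries are live. */
	for (int i = 1; i < 8; i++)
		pte[i] = (struct model_pte){ .pfn = order0_pfn, .writable = false };

	/* 2d: the original order-3 page would now be freed (not modelled). */
	dump("after", pte);
	return 0;
}

Those live changes are exactly what the break-before-make rule is about.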

> I was wondering whether we could make the
> PTEs RO first and then change the output address but we have another
> rule that the content of the page should be the same. I don't think
> entries 1-7 are identical to entry 0 (though we could ask the architects
> for clarification here). Also, can we guarantee that nothing writes to
> entry 0 while we would do such remapping?

Yes, it's already guaranteed.

> We know entries 1-7 won't be
> written as we mapped them as RO but entry 0 contains the head page.
> Maybe it's ok to map it RO temporarily until the newly allocated hugetlb
> page is returned.

We can do that, but I don't see how it would elide BBM. After the above,
we would still need to (spelled out below):
3. remap entry 0 from RO to RW, pointing it at the `struct page` page
that will be shared with entries 1-7;
4. remap entries 1-7 from their respective `struct page` pages to that
of entry 0, while they remain RO.
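
Spelled out per entry, with made-up pfns (0x1000-0x1007 for the original
vmemmap pages, 0x2000 for the page that ends up shared), the transitions in
steps 3/4 look like this -- again just an illustration, not kernel code:

/* Throwaway illustration of steps 3/4 for one 8-PTE group.  Made-up pfns:
 * 0x1000-0x1007 = per-entry vmemmap pages, 0x2000 = the shared page. */
#include <stdio.h>

struct transition {
	int entry;
	unsigned long old_pfn, new_pfn;
	const char *old_perm, *new_perm;
};

int main(void)
{
	const struct transition steps[] = {
		{ 0, 0x1000, 0x2000, "RO", "RW" },	/* step 3 */
		{ 1, 0x1001, 0x2000, "RO", "RO" },	/* step 4 ... */
		{ 7, 0x1007, 0x2000, "RO", "RO" },	/* ... entries 2-6 alike */
	};

	for (size_t i = 0; i < sizeof(steps) / sizeof(steps[0]); i++)
		printf("entry %d: 0x%lx %s -> 0x%lx %s%s\n",
		       steps[i].entry, steps[i].old_pfn, steps[i].old_perm,
		       steps[i].new_pfn, steps[i].new_perm,
		       steps[i].old_pfn != steps[i].new_pfn ?
		       "  <- output address changes on a live entry" : "");
	return 0;
}

Every one of these changes the output address of an entry that may be in
use, which is why I'm asking about BBM below.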

> If we could get the above work, it would be a lot simpler than thinking
> of stop_machine() or other locks to wait for such remapping.

Would steps 3/4 somehow not require BBM?

> > To do de-HVO:
> > 1. for every 8 PTEs:
> >   1a. we allocate 7 order-0 pages.
> >   1b. we remap PTEs #1-7 *RW* to those pages, respectively.
>
> Similar problem in 1.b, changing the output address. Here we could force
> the content to be the same

I don't follow the "force the content to be the same" part. After HVO, we have:

Entry 0 -> `struct page` page A, RW
Entry 1 -> `struct page` page A, RO
...
Entry 7 -> `struct page` page A, RO

To de-HVO, we need to make them:

Entry 0 -> `struct page` page A, RW
Entry 1 -> `struct page` page B, RW
...
Entry 7 -> `struct page` page H, RW

I assume "the same content" means PTE_0 == PTE_1/.../PTE_7?

> and remap PTEs 1-7 RO first to the new page,
> turn them RW afterwards and it's all compliant with the architecture
> (even without FEAT_BBM).

It'd be great if we could do that, though I don't fully understand the
proposed ordering at the moment.
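
To check my reading, is the suggested order roughly the following? (A
throwaway userspace model again; the names and sizes are made up, and the
memcpy stands in for whatever would make the old and new output addresses
hold identical data before any entry is redirected.)

/* Rough model of the de-HVO order I think is being suggested, for one
 * 8-PTE group.  Userspace illustration only; names and sizes are made up. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static char page_a[PAGE_SIZE];		/* the shared vmemmap page after HVO */
static char new_pages[7][PAGE_SIZE];	/* pages allocated in step 1a */

struct model_pte {
	char *page;		/* output address */
	int writable;
};

int main(void)
{
	struct model_pte pte[8];
	int i;

	/* State after HVO: entry 0 RW to page A, entries 1-7 RO to page A. */
	pte[0] = (struct model_pte){ page_a, 1 };
	for (i = 1; i < 8; i++)
		pte[i] = (struct model_pte){ page_a, 0 };

	/* Make the contents identical first, so redirecting an entry does not
	 * change what a read through it returns. */
	for (i = 0; i < 7; i++)
		memcpy(new_pages[i], page_a, PAGE_SIZE);

	/* Remap entries 1-7 to their new pages while they stay RO:
	 * output address changes, content and permission do not. */
	for (i = 1; i < 8; i++)
		pte[i].page = new_pages[i - 1];

	/* Only then relax entries 1-7 from RO to RW (permission-only change). */
	for (i = 1; i < 8; i++)
		pte[i].writable = 1;

	for (i = 0; i < 8; i++)
		printf("entry %d -> %p %s\n", i, (void *)pte[i].page,
		       pte[i].writable ? "RW" : "RO");
	return 0;
}

If that is the intent, then what "the same content" refers to (page
contents vs. PTE values) is the part I'd like to confirm, as asked above.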




