Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize

On Fri, Jul 05, 2024 at 11:41:34AM -0600, Yu Zhao wrote:
> On Fri, Jul 5, 2024 at 9:49 AM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > If I did the maths right, for a 2MB hugetlb page, we have about 8
> > vmemmap pages (32K). Once we split a 2MB vmemmap range,
> 
> Correct.
> 
> > whatever else
> > needs to be touched in this range won't require a stop_machine().
> 
> There might be some misunderstandings here.
> 
> To do HVO:
> 1. we split a PMD into 512 PTEs;
> 2. for every 8 PTEs:
>   2a. we allocate an order-0 page for PTE #0;
>   2b. we remap PTE #0 *RW* to this page;
>   2c. we remap PTEs #1-7 *RO* to this page;
>   2d. we free the original order-3 page.

Thanks. I now remember why we reverted such support in 060a2c92d1b6
("arm64: mm: hugetlb: Disable HUGETLB_PAGE_OPTIMIZE_VMEMMAP"). The main
problem is that point 2c also changes the output address of the PTE
(and the content of the page slightly). The architecture requires a
break-before-make in such a scenario, though it would have been nice if
it were more specific about what could go wrong.
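
For reference, the break-before-make sequence the architecture asks for
looks roughly like the untested sketch below, written with the generic
page table helpers (set_pte_at(), pte_clear(), flush_tlb_kernel_range())
rather than the actual arm64 code; the helper name is made up and it is
only meant to show the ordering and why live vmemmap pages are a
problem:

#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Break-before-make on a vmemmap (kernel) PTE: invalidate the old
 * entry and the TLB before installing the new one. Any access to this
 * address in the window in between faults, which is exactly the
 * problem if the struct pages behind it are live.
 */
static void vmemmap_bbm_remap(unsigned long addr, pte_t *ptep, pte_t newpte)
{
	pte_clear(&init_mm, addr, ptep);			/* break */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
	set_pte_at(&init_mm, addr, ptep, newpte);		/* make */
}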

We can do point 1 safely if we have FEAT_BBM level 2. For point 2, I
assume these 8 vmemmap pages may be accessed and that's why we can't do
a break-before-make safely. I was wondering whether we could make the
PTEs RO first and then change the output address, but we have another
rule that the content of the page should be the same. I don't think
entries 1-7 are identical to entry 0 (though we could ask the architects
for clarification here). Also, can we guarantee that nothing writes to
entry 0 while we do such a remapping? We know entries 1-7 won't be
written since we mapped them RO, but entry 0 contains the head page.
Maybe it's ok to map it RO temporarily until the newly allocated hugetlb
page is returned.
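
In code, the idea would be roughly the untested sketch below (same
headers as the previous sketch; the function name is made up and this is
not the mm/hugetlb_vmemmap.c implementation). It assumes the architects
confirm that changing only the output address of an RO mapping with
identical content is acceptable without a full break-before-make:

/*
 * Speculative two-step remap of vmemmap PTE #i (i = 1..7) onto the
 * page already mapped by PTE #0: permission change first, output
 * address change second.
 */
static void hvo_remap_tail_pte(unsigned long addr, pte_t *ptep,
			       struct page *head)
{
	pte_t old = ptep_get(ptep);

	/* 1. Drop write permission on the existing mapping. */
	set_pte_at(&init_mm, addr, ptep,
		   pfn_pte(pte_pfn(old), PAGE_KERNEL_RO));
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/*
	 * 2. Retarget the now-RO entry at the shared head page. This is
	 * the step that needs the "same content" clarification above.
	 */
	set_pte_at(&init_mm, addr, ptep,
		   pfn_pte(page_to_pfn(head), PAGE_KERNEL_RO));
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
}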

If we could get the above to work, it would be a lot simpler than
resorting to stop_machine() or other locks to wait for such a remapping.

> To do de-HVO:
> 1. for every 8 PTEs:
>   1a. we allocate 7 order-0 pages.
>   1b. we remap PTEs #1-7 *RW* to those pages, respectively.

Similar problem in 1b: changing the output address. Here, though, we
could force the content to be the same, remap PTEs 1-7 RO to the new
pages first and turn them RW afterwards, and that would be compliant
with the architecture (even without FEAT_BBM).
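
Something along these lines for each of PTEs 1-7, again as an untested
sketch with made-up names and the generic helpers, ignoring error
handling beyond the allocation and any serialisation against concurrent
users of the vmemmap:

/*
 * De-HVO direction: give the tail entry its own page whose content
 * matches the shared head page, retarget the RO entry at it (the
 * content is identical, so no break-before-make needed), then relax
 * the permissions to RW.
 */
static int dehvo_remap_tail_pte(unsigned long addr, pte_t *ptep)
{
	struct page *head = pte_page(ptep_get(ptep));
	struct page *newpage = alloc_page(GFP_KERNEL);

	if (!newpage)
		return -ENOMEM;

	copy_page(page_address(newpage), page_address(head));

	/* Output address changes; permissions and content do not. */
	set_pte_at(&init_mm, addr, ptep,
		   pfn_pte(page_to_pfn(newpage), PAGE_KERNEL_RO));
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* Now make it writable again. */
	set_pte_at(&init_mm, addr, ptep,
		   pfn_pte(page_to_pfn(newpage), PAGE_KERNEL));
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	return 0;
}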

> > What I meant is that we can leave the vmemmap alias in place and just
> > reuse those pages via the linear map etc. The kernel shouldn't touch
> > those struct pages and corrupt the data. The only problem would be if
> > we physically unplugged those pages, but I don't think that's the case
> > here.
> 
> Setting the repercussions of memory corruption aside, we still can't do
> this because PTEs #1-7 need to map meaningful data, hence step 2c
> above.

Yeah, I missed this one.

-- 
Catalin



