Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize

On Fri, Jul 5, 2024 at 9:49 AM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
>
> On Thu, Jun 27, 2024 at 03:19:55PM -0600, Yu Zhao wrote:
> > On Wed, Feb 7, 2024 at 5:44 AM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > > On Sat, Jan 27, 2024 at 01:04:15PM +0800, Nanyong Sun wrote:
> > > > On 2024/1/26 2:06, Catalin Marinas wrote:
> > > > > On Sat, Jan 13, 2024 at 05:44:33PM +0800, Nanyong Sun wrote:
> > > > > > HVO was previously disabled on arm64 [1] due to the lack of the
> > > > > > necessary BBM (break-before-make) logic when changing page tables.
> > > > > > This set of patches fixes that by adding the necessary BBM sequence
> > > > > > when changing page tables, and by supporting vmemmap page fault
> > > > > > handling to fix up kernel address translation faults if the vmemmap
> > > > > > is concurrently accessed.
> > > [...]
> > > > > How often is this code path called? I wonder whether a stop_machine()
> > > > > approach would be simpler.
> > > >
> > > > It is called whenever hugetlb pages are allocated or released. We
> > > > cannot limit users to allocating or releasing hugetlb pages only at
> > > > boot, or only when no workload is running on the other CPUs, so with
> > > > stop_machine() it would be triggered 8 times per 2MB page and 4096
> > > > times per 1GB page, which is probably too expensive.
> > >
> > > I'm hoping this can be batched somehow and not do a stop_machine() (or
> > > 8) for every 2MB huge page.
> >
> > Theoretically, all hugeTLB vmemmap operations from a single user
> > request can be done in one batch. This would require the preallocation
> > of the new copy of vmemmap so that the old copy can be replaced with
> > one BBM.
>
> Do we ever re-create pmd block entries for the vmemmap range that
> was split, or do they remain pmd table + pte entries? If the latter, I
> guess we could do a stop_machine() only per pmd split; it should be
> self-limiting after a while.

It's the latter for now, but that could change in the future: ideally
we would restore the original mapping at the PMD level. For now we do
it at the PTE level because the high-order pages backing PMD entries
are not as easy to allocate as the order-0 pages backing PTEs.

> I don't want user-space to DoS the system by
> triggering stop_machine() when mapping/unmapping hugetlbfs pages.

The operations are privileged, and each HVO or de-HVO request would
require at least one stop_machine(). So in theory a privileged user
could still cause a DoS.

> If I did the maths right, for a 2MB hugetlb page, we have about 8
> vmemmap pages (32K). Once we split a 2MB vmemmap range,

Correct.
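
(Spelling out the arithmetic, assuming 4KB base pages and a 64-byte
struct page: a 2MB hugetlb page spans 512 base pages, so its struct
pages take 512 * 64 = 32KB of vmemmap, i.e. 8 order-0 pages; a 1GB
page spans 262144 base pages, i.e. 16MB of vmemmap or 4096 pages,
which is where the 8 and 4096 stop_machine() counts above come from.)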

> whatever else
> needs to be touched in this range won't require a stop_machine().

There might be some misunderstandings here.

To do HVO:
1. we split a PMD into 512 PTEs;
2. for every 8 PTEs:
  2a. we allocate an order-0 page for PTE #0;
  2b. we remap PTE #0 *RW* to this page;
  2c. we remap PTEs #1-7 *RO* to this page;
  2d. we free the original order-3 page.
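
Roughly, in code (just a sketch of the loop above, not the actual
mm/hugetlb_vmemmap.c implementation; it assumes the covering PMD has
already been split into a PTE table, that the BBM/TLB maintenance this
thread is about is done around the PTE updates, and it omits most
error handling; hvo_remap_group() is a made-up name):

static int hvo_remap_group(pte_t *ptep, unsigned long addr,
			   struct page *old)
{
	/* "old" is the original order-3 page backing these 8 vmemmap PTEs. */
	struct page *new_page;
	int i;

	/* 2a: allocate one order-0 page to serve the whole group. */
	new_page = alloc_pages(GFP_KERNEL, 0);
	if (!new_page)
		return -ENOMEM;

	/* Preserve the struct page data currently mapped at PTE #0. */
	copy_page(page_to_virt(new_page), page_to_virt(old));

	/* 2b: PTE #0 maps the new page read-write. */
	set_pte_at(&init_mm, addr, ptep,
		   pfn_pte(page_to_pfn(new_page), PAGE_KERNEL));

	/* 2c: PTEs #1-7 alias the same page read-only. */
	for (i = 1; i < 8; i++)
		set_pte_at(&init_mm, addr + i * PAGE_SIZE, ptep + i,
			   pfn_pte(page_to_pfn(new_page), PAGE_KERNEL_RO));

	/* 2d: the original order-3 vmemmap page is no longer used. */
	__free_pages(old, 3);
	return 0;
}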

To do de-HVO:
1. for every 8 PTEs:
  1a. we allocate 7 order-0 pages.
  1b. we remap PTEs #1-7 *RW* to those pages, respectively.
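
And the de-HVO direction, under the same caveats (a sketch only, with
the name de_hvo_remap_group() made up for illustration):

static int de_hvo_remap_group(pte_t *ptep, unsigned long addr)
{
	struct page *pages[7];
	int i;

	/* 1a: allocate 7 order-0 pages to back PTEs #1-7. */
	for (i = 0; i < 7; i++) {
		pages[i] = alloc_pages(GFP_KERNEL, 0);
		if (!pages[i])
			goto out_free;
	}

	for (i = 1; i < 8; i++) {
		unsigned long va = addr + i * PAGE_SIZE;
		struct page *new_page = pages[i - 1];

		/* Seed the new page from the shared read-only alias. */
		copy_page(page_to_virt(new_page), (void *)va);

		/* 1b: remap PTE #i read-write to its own page. */
		set_pte_at(&init_mm, va, ptep + i,
			   pfn_pte(page_to_pfn(new_page), PAGE_KERNEL));
	}
	return 0;

out_free:
	while (i--)
		__free_pages(pages[i], 0);
	return -ENOMEM;
}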

We can in theory restore the original PTE or even PMD mappings at an
acceptable success rate by making changes on the MM side, e.g., by
only allowing movable allocations in the area backing the original
PMD. Again, we don't do this for now because high-order pages are not
as easy to allocate.

> > > Just to make sure I understand - is the goal to be able to free struct
> > > pages corresponding to hugetlbfs pages?
> >
> > Correct, if you are referring to the pages holding struct page[].
> >
> > > Can we not leave the vmemmap in
> > > place and just release that memory to the page allocator?
> >
> > We cannot, since the goal is to reuse those pages for something else,
> > i.e., reduce the metadata overhead for hugeTLB.
>
> What I meant is that we can leave the vmemmap alias in place and just
> reuse those pages via the linear map etc. The kernel shouldn't be
> touching those struct pages, so it would not corrupt the data. The
> only problem would be if we physically unplug those pages, but I don't
> think that's the case here.

Setting the repercussions of memory corruption aside, we still can't do
this because PTEs #1-7 need to map meaningful data, hence step 2c
above.




