Re: [PATCH] x86/mm/ident_map: Use full gbpages in identity maps except on UV platform.

On Thu, Mar 28, 2024 at 12:05:02AM -0500, Eric W. Biederman wrote:
> Steve Wahl <steve.wahl@xxxxxxx> writes:
> 
> > On Wed, Mar 27, 2024 at 07:57:52AM -0500, Eric W. Biederman wrote:
> >> Steve Wahl <steve.wahl@xxxxxxx> writes:
> >> 
> >> > On Mon, Mar 25, 2024 at 10:04:41AM -0500, Eric W. Biederman wrote:
> >> >> Russ Anderson <rja@xxxxxxx> writes:
> >> >> > Steve can certainly merge his two patches and resubmit, to replace the
> >> >> > reverted original patch.  He should be on in the morning to speak for
> >> >> > himself.
> >> >> 
> >> >> I am going to push back and suggest that this is perhaps a bug in the
> >> >> HPE UV systems' firmware not setting up the cpus' memory type range
> >> >> registers correctly.
> >> >> 
> >> >> Unless those systems are using newfangled cpus that don't have 16bit
> >> >> and 32bit support, and don't implement memory type range registers,
> >> >> I don't see how something that only affects HPE UV systems could be
> >> >> anything except an HPE UV specific bug.
> >> >
> >> > Eric,
> >> >
> >> > I took the time to communicate with others in the company who know
> >> > this stuff better than I do before replying on this.
> >> >
> >> > One of the problems with using the MTRRs for this is that there are
> >> > simply not enough of them.  The MTRRs' size/alignment requirements mean
> >> > that more than one entry would be required per reserved region, and we
> >> > need one reserved region per socket on systems that currently can go
> >> > up to 32 sockets.  (In case you would think to ask, the reserved
> >> > regions also cannot be made contiguous.)
> >> >
> >> > So MTRRs will not work to keep speculation out of our reserved memory
> >> > regions.
> >> >
> >> > Let me know if you need more information from us on this.
> >> 
> >> Thanks for this.
> >> 
> >> Do you know if there are enough MTRRs for the first 4GB?
> >
> > I don't personally know all the details of how BIOS chooses to place
> > things, but I suspect that might be true.  The restricted spaces
> > usually end up at the end of the address range for a particular node,
> > and 4GB would be in the early part of node 0.  If the conversation
> > develops further along these lines, I can find out more definitively.
> >
> >> I am curious if kexec should even consider going into 32bit mode without
> >> page tables or even into 16bit mode on such a system.  Or if such a
> >> system will always require using page tables.
> >
> > Unless I'm mistaken, wouldn't that put a pretty heavy restriction on
> > where the kdump kernel could be located?
> 
> If you are coming from 64bit EFI it adds restrictions.

We are.  :-)

> Most of my experience involves systems using a real mode BIOS and
> folks thought I was strange for wanting to be able to load the kernel
> above 4GB.
> 
> Having that experience, I am stuck wondering how all of the weird
> backwards compatibility cases are going to work. Hmm.
> 
> There is one concrete case where it matters that I think still exists.
> 
> x86_64 processors start up in 16bit real mode, then have to transition
> through 32bit protected mode, before transitioning to 64bit protected
> mode.  Only in 64bit protected mode are page tables enabled.
> 
> All this happens during early kernel startup when the bootstrap
> processor sends STARTUP IPIs to all of the secondary processors.
> 
> The startup IPI lets you pick where in the first 1MiB the secondary
> processors will start.
> 
> Assuming there isn't a new processor startup sequence on your cpus,
> speculation before the processor loads its first page table is a
> legitimate concern.

I believe the reserved memory that is problematic is at the end of
each socket's (NUMA node's) address space.  You have to get to 64-bit
execution before you can reach addresses outside of the first 4GB of
the address space, I think.  External hardware uses this RAM; the
processors are not to access it at all.  MTRRs don't exactly have an
entry type to match this, at least from the document skimming I've
done.  (I have a limited understanding, but I think this reserved
space is used by our hardware to keep track of cache line ownership
for the rest of the RAM, so letting any other entity take even a read
claim on these addresses is a problem, in a catch-22 or circular
reference sort of way.)
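
To make that concrete, here is a rough illustration (not kernel code,
and the addresses are made up) of how a 1GB identity-map page widens a
small mapping request out to the surrounding 1GB boundaries, which is
how a mapping can spill into one of these reserved regions:

/* Rough sketch only -- hypothetical addresses, not kernel code. */
#include <stdio.h>

#define PUD_SIZE (1ULL << 30)          /* 1 GiB page size */
#define PUD_MASK (~(PUD_SIZE - 1))

int main(void)
{
	unsigned long long req_start = 0x3fe0000000ULL; /* hypothetical */
	unsigned long long req_end   = 0x3fe0200000ULL; /* 2 MiB request */

	/* With 1GB pages the mapped range snaps to 1GB boundaries. */
	unsigned long long map_start = req_start & PUD_MASK;
	unsigned long long map_end   = (req_end + PUD_SIZE - 1) & PUD_MASK;

	printf("requested:  %#llx - %#llx\n", req_start, req_end);
	printf("1GB-mapped: %#llx - %#llx\n", map_start, map_end);
	return 0;
}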

> > Or the target region for KASLR?
> 
> As I recall the kernel is limited to the last 2GB of the virtual
> address space, as parts of the instruction 

From what I recall, KASLR varies both the virtual and physical addresses,
and it's the physical that's of concern here.

arch/x86/boot/compressed/kaslr.c: "In theory, KASLR can put the kernel
anywhere in the range of [16M, MAXMEM) on 64-bit..."

I had to make a change in that area a few years ago for similar
reasons:

1869dbe87cb94d  x86/boot/64: Round memory hole size up to next PMD page
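
For reference, the gist of that change was just this kind of rounding
(a sketch of the idea, not the actual code from that commit):

/* Sketch only: round a size up to the next PMD (2 MiB) boundary. */
#define PMD_SIZE (1UL << 21)           /* 2 MiB */
#define PMD_MASK (~(PMD_SIZE - 1))

static unsigned long round_up_to_pmd(unsigned long size)
{
	return (size + PMD_SIZE - 1) & PMD_MASK;
}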

> >> If you don't have enough MTRRs on a big NUMA system I think it is
> >> perfectly understandable, to need to use the page tables.
> >> 
> >> Please include the fact that splitting GB pages is necessary because
> >> of a lack of MTRRs in the change description.
> >
> > OK.
> >
> >> Given that it is the lack of MTRRs on a large NUMA system that makes the
> >> change necessary, this goes from a pure bug fix to a change to
> >> accommodate systems without enough MTRRs.
> >> 
> >> That information makes it more understandable why older systems (at
> >> least in the case of kexec) might not be OK with the change.  As for
> >> older systems, their MTRRs are sufficient and thus they can use fewer
> >> page table entries, allowing for use of larger TLB entries.
> >
> > That last paragraph doesn't match what I think is happening.
> >
> > At least from my point of view, that some systems aren't OK with the
> > change has nothing to do with MTRRs or TLB page size.  They simply
> > require the extra "slop" of GB pages, implicitly adding a full GB of
> > space around any smaller space requested by map_acpi_tables().
> >
> > The systems that failed with my original change also failed on earlier
> > kernels when nogbpages was added to the kernel command line.  That
> > creates the identity map using 2M pages for everything, with no GB
> > page "slop".   I'm pretty sure these systems will continue to fail with
> > "nogbpages" enabled.
> >
> > For one debug-kernel cycle on Pavin's system I added in hard-coded
> > requests to explicitly add back in the areas that not being sloppy had
> > excluded, and that brought kexec back to functioning, which further
> > proves my point.  
> >
> > I wanted to be sure you understood this in case it has any effect on
> > what you think should be done.
> 
> Sort of.
> 
> What kexec wants of an identity mapped page table really is to simulate
> disabling paging altogether.  There isn't enough memory in most systems
> to identity map the entire 48bit or 52bit physical address space so some
> compromises have to be made.  I seem to recall only mapping up to
> maxpfn, and using 1GB pages when I originally wrote the code.  It was
> later refactored to share the identity map page table building code with
> the rest of the kernel.
> 
> When you changed the page tables not to map everything, strictly
> speaking you created an ABI break of the kexec ABI.
> 
> Which is a long way of saying it isn't being sloppy, it is deliberate,
> and that the problem from my perspective is that things have become too
> fine grained, too optimized.
> 
> Pavin's testing definitely proves the issue was not mapping enough pages;
> it is nice that we have that confirmation.
> 
> From my perspective the entire reason for wanting to be fine grained and
> precise in the kernel memory map is because the UV systems don't have
> enough MTRRs.  So you have to depend upon the cache-ability attributes
> for specific addresses of memory coming from the page tables instead of
> from the MTRRs.

It would be more accurate to say we depend upon the addresses not
being listed in the page tables at all.  We'd be OK with mapped but
not accessed, if it weren't for processor speculation.  There's no "no
access" setting within the existing MTRR definitions, though there may
be a setting that would rein in processor speculation enough to make
do.
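
For reference, these are the architectural MTRR memory types (values
per the Intel SDM; names roughly as in the kernel's asm/mtrr.h, as I
read them) -- note there is no "no access" encoding:

/* Architectural MTRR memory types.  UC is the closest to "don't
 * touch", in that it suppresses caching and, as I understand it,
 * speculative reads, but the memory remains fully addressable.
 */
enum mtrr_mem_type {
	MTRR_TYPE_UNCACHABLE = 0x00,    /* UC */
	MTRR_TYPE_WRCOMB     = 0x01,    /* WC */
	MTRR_TYPE_WRTHROUGH  = 0x04,    /* WT */
	MTRR_TYPE_WRPROT     = 0x05,    /* WP */
	MTRR_TYPE_WRBACK     = 0x06,    /* WB */
};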

> If you had enough MTRRs, defining the page tables to be precisely
> what is necessary would simply be an exercise in reducing kernel
> performance, because it is more efficient, in both page table size and
> TLB usage, to use 1GB pages instead of whatever smaller pages you have
> to use for oddball regions.
> 
> For systems without enough MTRRs the small performance hit in paging
> performance is the necessary trade off.
> 
> At least that is my perspective.  Does that make sense?

I think I'm beginning to get your perspective.  From your point of
view, is kexec failing with "nogbpages" set a bug?  My point of view
is that it likely is.  I think your view would say it isn't?

--> Steve

-- 
Steve Wahl, Hewlett Packard Enterprise



