On Wed, 2022-11-23 at 14:57 -0800, Dave Hansen wrote:
> On 11/20/22 16:26, Kai Huang wrote:
> > The TDX module uses additional metadata to record things like which
> > guest "owns" a given page of memory.  This metadata, referred to as
> > the Physical Address Metadata Table (PAMT), essentially serves as the
> > 'struct page' for the TDX module.  PAMTs are not reserved by hardware
> > up front.  They must be allocated by the kernel and then given to the
> > TDX module.
>
> 	... during module initialization.

Thanks.

> > TDX supports 3 page sizes: 4K, 2M, and 1G.  Each "TD Memory Region"
> > (TDMR) has 3 PAMTs to track the 3 supported page sizes.  Each PAMT must
> > be a physically contiguous area from a Convertible Memory Region (CMR).
> > However, the PAMTs which track pages in one TDMR do not need to reside
> > within that TDMR but can be anywhere in CMRs.  If one PAMT overlaps with
> > any TDMR, the overlapping part must be reported as a reserved area in
> > that particular TDMR.
> >
> > Use alloc_contig_pages() since a PAMT must be a physically contiguous
> > area and it may be potentially large (~1/256th of the size of the given
> > TDMR).  The downside is alloc_contig_pages() may fail at runtime.  One
> > (bad) mitigation is to launch a TD guest early during system boot to get
> > those PAMTs allocated early, but the only real fix is to add a boot
> > option to allocate or reserve PAMTs during kernel boot.
>
> FWIW, we all agree that this is a bad permanent way to leave things.
> You can call me out here as proposing that this wart be left in place
> while this series is merged and is a detail we can work on afterward
> with new module params, boot options, Kconfig or whatever.

Sorry, do you mean to call this out in the cover letter, or in this changelog?

> > TDX only supports a limited number of reserved areas per TDMR to cover
> > both PAMTs and memory holes within the given TDMR.  If many PAMTs are
> > allocated within a single TDMR, the reserved areas may not be sufficient
> > to cover all of them.
> >
> > Adopt the following policies when allocating PAMTs for a given TDMR:
> >
> >   - Allocate the three PAMTs of the TDMR in one contiguous chunk to
> >     minimize the total number of reserved areas consumed for PAMTs.
> >   - Try to first allocate the PAMT from the local node of the TDMR for
> >     better NUMA locality.
> >
> > Also dump out how many pages are allocated for PAMTs when the TDX module
> > is initialized successfully.
>
> ... this helps answer the eternal "where did all my memory go?" questions.

Will add this to the comment.

[...]

> > +/*
> > + * Pick a NUMA node on which to allocate this TDMR's metadata.
> > + *
> > + * This is imprecise since TDMRs are 1G aligned and NUMA nodes might
> > + * not be.  If the TDMR covers more than one node, just use the _first_
> > + * one.  This can lead to small areas of off-node metadata for some
> > + * memory.
> > + */
> > +static int tdmr_get_nid(struct tdmr_info *tdmr)
> > +{
> > +	struct tdx_memblock *tmb;
> > +
> > +	/* Find the first memory region covered by the TDMR */
> > +	list_for_each_entry(tmb, &tdx_memlist, list) {
> > +		if (tmb->end_pfn > (tdmr_start(tdmr) >> PAGE_SHIFT))
> > +			return tmb->nid;
> > +	}
>
> Aha, the first use of tmb->nid!  I wondered why that was there.

As you suggested, I'll introduce the 'nid' member of 'tdx_memblock' in this patch.

> > +
> > +	/*
> > +	 * Fall back to allocating the TDMR's metadata from node 0 when
> > +	 * no TDX memory block can be found.  This should never happen
> > +	 * since TDMRs originate from TDX memory blocks.
> > +	 */
> > +	WARN_ON_ONCE(1);
>
> That's probably better as a pr_warn() or something.  A backtrace and all
> that jazz seems a bit overly dramatic for this.

How about below?

	pr_warn("TDMR [0x%llx, 0x%llx): unable to find local NUMA node for PAMT allocation, falling back to node 0.\n",
		tdmr_start(tdmr), tdmr_end(tdmr));