Gregory Price wrote:
> When physical address capacity is not aligned to the size of a memory
> block, the misaligned portion is not mapped - creating an effective
> loss of capacity.
>
> This appears to be a calculated decision based on the fact that most
> regions would generally be aligned, and the loss of capacity would be
> relatively limited. With CXL devices, this is no longer the case.
>
> CXL exposes its memory for management through the ACPI CEDT (CXL Early
> Discovery Table) in a structure called the CXL Fixed Memory Window
> Structure (CFMWS). Per the CXL specification, this memory must be
> aligned to at least 256MB.
>
> On x86, the memory block size increases with the overall capacity of
> the machine, eventually reaching a maximum of 2GB per memory block.
> When a CFMW is aligned only to 256MB, this can cause a loss of up to
> 2GB of capacity per region, and more when multiple regions are
> misaligned.
>
> It is also possible for multiple CFMWs to be exposed for a single
> device. This can happen if a reserved region intersects with the
> target memory location of the memory device. This happens on AMD x86
> platforms.

I'm not clear why you mention reserved regions here. IIUC, CFMWs can
overlap to describe different attributes, which may be utilized based
on the devices that are mapped within them. For this reason, all CFMWs
must be scanned to find the lowest common denominator, even if the HPA
range has already been evaluated. Is that what you are trying to say?

> This patch set detects the alignments of all CFMWs in the ACPI CEDT
> and changes the memory block size downward to meet the largest common
> denominator of the supported memory regions.
>
> To do this, we needed 3 changes:
> 1) extern memory block management functions for the acpi driver
> 2) modify x86 to update its cached block size value
> 3) add code in acpi/numa/srat.c to do the alignment check
>
> Presently this only affects x86, since this is the only architecture
> that implements set_memory_block_size_order.
> Presently this appears to only affect x86, and we only mitigated there
> since it is the only arch to implement set_memory_block_size_order.

NIT: duplicate statement

Ira