icable.
> >
> > You mean the different memory ranges of a NUMA node may have different
> > performance?  I don't think that we can deal with this.
>
> Example Configuration: On a server that we are using now, four different
> CXL cards are combined to form a single NUMA node and two other cards are
> exposed as two individual NUMA nodes.
> So if we have the ability to combine multiple CXL memory ranges into a
> single NUMA node, the number of NUMA nodes in the system would potentially
> decrease, even if we can't combine the entire range to form a single node.

If it's in control of the kernel: today, for CXL, NUMA nodes are defined by
CXL Fixed Memory Windows rather than by the individual characteristics of the
devices that might be accessed through those windows. That's a useful
simplification to get things going, and it's not yet clear how the QoS
aspects of CFMWS will be used. So will we always have enough windows, with
fine enough granularity coming from the _DSM QTG magic, that they don't end
up containing devices (or topologies) of different performance within each
one? No idea. It's a bunch of trade-offs over where the complexity lies and
how much memory is being provided over CXL vs physical address space
exhaustion.

Long term, my guess is that we'll need to support something more
sophisticated, with dynamic 'creation' of NUMA nodes (or something that looks
like that, anyway) so that we can always have a separate node for each
significantly different set of memory access characteristics. If they are
coming from ACPI, that's already required by the specification. This space is
going to continue getting more complex.

The upshot is that I wouldn't focus too much on the possibility of a NUMA
node having devices with very different memory access characteristics in it.
That's a quirk of today's world that we can and should look to fix. If your
BIOS is setting this up for you and presenting them in SRAT / HMAT etc., then
it's not complying with the ACPI spec.

Jonathan