"Huang, Ying" <ying.huang@xxxxxxxxx> writes: > "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> writes: > >> The current kernel has the basic memory tiering support: Inactive pages on a >> higher tier NUMA node can be migrated (demoted) to a lower tier NUMA node to >> make room for new allocations on the higher tier NUMA node. Frequently accessed >> pages on a lower tier NUMA node can be migrated (promoted) to a higher tier NUMA >> node to improve the performance. >> >> In the current kernel, memory tiers are defined implicitly via a demotion path >> relationship between NUMA nodes, which is created during the kernel >> initialization and updated when a NUMA node is hot-added or hot-removed. The >> current implementation puts all nodes with CPU into the top tier, and builds the >> tier hierarchy tier-by-tier by establishing the per-node demotion targets based >> on the distances between nodes. >> >> This current memory tier kernel interface needs to be improved for several >> important use cases: >> >> * The current tier initialization code always initializes each memory-only NUMA >> node into a lower tier. But a memory-only NUMA node may have a high >> performance memory device (e.g. a DRAM device attached via CXL.mem or a >> DRAM-backed memory-only node on a virtual machine) and should be put into a >> higher tier. >> >> * The current tier hierarchy always puts CPU nodes into the top tier. But on a >> system with HBM (e.g. GPU memory) devices, these memory-only HBM NUMA nodes >> should be in the top tier, and DRAM nodes with CPUs are better to be placed >> into the next lower tier. >> >> * Also because the current tier hierarchy always puts CPU nodes into the top >> tier, when a CPU is hot-added (or hot-removed) and triggers a memory node from >> CPU-less into a CPU node (or vice versa), the memory tier hierarchy gets >> changed, even though no memory node is added or removed. This can make the >> tier hierarchy unstable and make it difficult to support tier-based memory >> accounting. >> >> * A higher tier node can only be demoted to selected nodes on the next lower >> tier as defined by the demotion path, not any other node from any lower tier. >> This strict, hard-coded demotion order does not work in all use cases (e.g. >> some use cases may want to allow cross-socket demotion to another node in the >> same demotion tier as a fallback when the preferred demotion node is out of >> space), and has resulted in the feature request for an interface to override >> the system-wide, per-node demotion order from the userspace. This demotion >> order is also inconsistent with the page allocation fallback order when all >> the nodes in a higher tier are out of space: The page allocation can fall back >> to any node from any lower tier, whereas the demotion order doesn't allow >> that. >> >> This patch series make the creation of memory tiers explicit under >> the control of device driver. >> >> Memory Tier Initialization >> ========================== >> >> Linux kernel presents memory devices as NUMA nodes and each memory device is of >> a specific type. The memory type of a device is represented by its abstract >> distance. A memory tier corresponds to a range of abstract distance. This allows >> for classifying memory devices with a specific performance range into a memory >> tier. >> >> By default, all memory nodes are assigned to the default tier with >> abstract distance 512. >> >> A device driver can move its memory nodes from the default tier. 
>> For example, PMEM can move its memory nodes below the default tier,
>> whereas GPU can move its memory nodes above the default tier.
>>
>> The kernel initialization code decides which exact tier a memory node
>> should be assigned to, based on the requests from the device drivers as
>> well as the memory device hardware information provided by the firmware.
>>
>> Hot-adding/removing CPUs doesn't affect the memory tier hierarchy.
>
> Some of the patch description of [0/8] is the same as that of [1/8]
> originally. It appears that you revised [1/8] but forgot to revise [0/8]
> too. Please do that.

I just sent v12, making sure that a smaller abstract distance value implies
a faster (higher) memory tier. I missed that in v11.

-aneesh
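
To make the driver-side flow described in the cover letter concrete, below
is a minimal sketch of how a driver (e.g. dax/kmem for PMEM) might place
its memory-only NUMA nodes below the default tier. This is a sketch only:
the helper names (alloc_memory_type(), init_node_memory_type(),
clear_node_memory_type()) and the abstract distance value are assumptions
modeled on the interface this series describes, not a statement of the
final API.

/*
 * Sketch: register a memory type with a larger abstract distance so
 * that the driver's memory-only NUMA nodes land in a tier below the
 * default (DRAM) tier.  Names and values are illustrative.
 */
#include <linux/err.h>
#include <linux/memory-tiers.h>		/* assumed header from this series */

/*
 * The default tier sits at abstract distance 512; a larger value means
 * a slower device class and therefore a lower tier.
 */
#define EXAMPLE_PMEM_ADISTANCE	(512 * 5)	/* illustrative value */

static struct memory_dev_type *example_pmem_type;

static int example_pmem_node_add(int nid)
{
	/* One memory type describes the whole class of slow devices. */
	example_pmem_type = alloc_memory_type(EXAMPLE_PMEM_ADISTANCE);
	if (IS_ERR(example_pmem_type))
		return PTR_ERR(example_pmem_type);

	/*
	 * Associate the NUMA node with that type before its memory is
	 * onlined, so tier assignment honors the driver's request rather
	 * than falling back to the default tier.
	 */
	init_node_memory_type(nid, example_pmem_type);
	return 0;
}

static void example_pmem_node_remove(int nid)
{
	/* Drop the node<->type association on hot-remove. */
	clear_node_memory_type(nid, example_pmem_type);
}

A GPU driver would do the opposite: pick an abstract distance smaller than
512 so that its HBM nodes are placed in a tier above DRAM, independent of
whether any node happens to have CPUs.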