On Wed, 19 Mar 2025 19:30:15 +0000
Raghavendra K T <raghavendra.kt@xxxxxxx> wrote:

> Introduction:
> =============
> In the current hot page promotion, all the activities, including the
> process address space scanning, NUMA hint fault handling and page
> migration, are performed in the process context, i.e., the scanning
> overhead is borne by applications.
>
> This is the RFC V1 patch series to do (slow tier) CXL page promotion.
> The approach in this patchset addresses the issue by adding PTE
> Accessed bit scanning.
>
> Scanning is done by a global kernel thread which routinely scans all
> the processes' address spaces and checks for accesses by reading the
> PTE A bit.
>
> A separate migration thread migrates/promotes the pages to the toptier
> node based on a simple heuristic that uses toptier scan/access
> information of the mm.
>
> Additionally, based on the feedback for RFC V0 [4], a prctl knob with
> a scalar value is provided to control per-task scanning.
>
> Initial results show promising numbers on a microbenchmark. Numbers
> with real benchmarks and findings (tunings) will follow soon.
>
> Experiment:
> ============
> Abench microbenchmark:
> - Allocates 8GB/16GB/32GB/64GB of memory on the CXL node
> - 64 threads created, and each thread randomly accesses pages at 4K
>   granularity.

So if I'm reading this right, this is a flat distribution and any estimate
of what is hot is noise?  That will put a positive spin on the cost of
migration, as we will be moving something that isn't really all that hot
and so is moderately unlikely to be accessed whilst migration is going on.

Or is the point that the rest of the memory is also mapped but not being
accessed?

I'm not entirely sure I follow what this is bound by.  Is it bandwidth
bound?

> - 512 iterations with a delay of 1 us between two successive
>   iterations.
>
> SUT: 512 CPU, 2 node, 256GB, AMD EPYC.
>
> 3 runs, command: abench -m 2 -d 1 -i 512 -s <size>
>
> Measures how much time is taken to complete the task; lower is better.
> The expectation is that CXL node memory is migrated as fast as
> possible.
>
> Base case:    6.14-rc6 w/ numab mode = 2 (hot page promotion is enabled).
> Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled);
> we expect the daemon to do page promotion.
>
> Result:
> ========
>              base NUMAB2              patched NUMAB1
>         time in sec (%stdev)     time in sec (%stdev)     %gain
>  8GB        134.33 ( 0.19 )          120.52 ( 0.21 )      10.28
> 16GB        292.24 ( 0.60 )          275.97 ( 0.18 )       5.56
> 32GB        585.06 ( 0.24 )          546.49 ( 0.35 )       6.59
> 64GB       1278.98 ( 0.27 )         1205.20 ( 2.29 )       5.76
>
> Base case:    6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
> Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>
>              base NUMAB1              patched NUMAB1
>         time in sec (%stdev)     time in sec (%stdev)     %gain
>  8GB        186.71 ( 0.99 )          120.52 ( 0.21 )      35.45
> 16GB        376.09 ( 0.46 )          275.97 ( 0.18 )      26.62
> 32GB        744.37 ( 0.71 )          546.49 ( 0.35 )      26.58
> 64GB       1534.49 ( 0.09 )         1205.20 ( 2.29 )      21.45

Nice numbers, but maybe some more details on what they are showing?
At what point in the workload has all the memory migrated to the fast
node, or does that never happen?  I'm confused :(

Jonathan
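
[Editor's note: purely for illustration, and not code from the posted
series, the sketch below shows one way a kernel thread could sample the
PTE Accessed bit over a task's address space with the generic page-walk
API, which is the mechanism the cover letter describes. All names
(abit_scan_mm, abit_scan_stats, etc.) are made up, and TLB flushing,
THP handling and promotion heuristics are deliberately omitted.]

#include <linux/mm.h>
#include <linux/pagewalk.h>
#include <linux/pgtable.h>

struct abit_scan_stats {
	unsigned long sampled;	/* present PTEs examined */
	unsigned long young;	/* PTEs found with the A bit set */
};

static int abit_pte_entry(pte_t *ptep, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	struct abit_scan_stats *stats = walk->private;
	pte_t pte = ptep_get(ptep);

	if (!pte_present(pte))
		return 0;

	stats->sampled++;

	/*
	 * Test and clear the Accessed bit; a page repeatedly seen
	 * young across successive scans would be a promotion
	 * candidate.  TLB flushing is skipped here for brevity.
	 */
	if (ptep_test_and_clear_young(walk->vma, addr, ptep))
		stats->young++;

	return 0;
}

static const struct mm_walk_ops abit_walk_ops = {
	.pte_entry = abit_pte_entry,
};

/* Scan one mm; a scanning kthread would iterate over candidate mms. */
static void abit_scan_mm(struct mm_struct *mm, struct abit_scan_stats *stats)
{
	mmap_read_lock(mm);
	walk_page_range(mm, 0, TASK_SIZE, &abit_walk_ops, stats);
	mmap_read_unlock(mm);
}

A real implementation would also have to rate-limit the walk and feed
the per-mm access counts to the separate migration thread mentioned in
the cover letter; the sketch only covers the A-bit sampling step.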