On Thu, Apr 21, 2022 at 03:49:21PM +0800, ying.huang@xxxxxxxxx wrote:
> On Wed, 2022-04-20 at 16:33 +0800, Aaron Lu wrote:
> > On Thu, Apr 07, 2022 at 10:36:54AM -0700, Yang Shi wrote:
> > > On Thu, Apr 7, 2022 at 1:12 AM Aaron Lu <aaron.lu@xxxxxxxxx> wrote:
> > > >
> > > > On Wed, Apr 06, 2022 at 07:09:53PM -0700, Yang Shi wrote:
> > > > > The swap devices are linked to per-node priority lists; the swap
> > > > > device closer to a node has higher priority on that node's
> > > > > priority list.  This is supposed to improve I/O latency,
> > > > > particularly for some fast devices.  But the current code gets
> > > > > the nid by calling numa_node_id(), which actually returns the nid
> > > > > of the node the reclaimer is running on instead of the nid of the
> > > > > node the page belongs to.
> > > > >
> > > >
> > > > Right.
> > > >
> > > > > Pass the page's nid down to get_swap_pages() in order to pick the
> > > > > right swap device.  But this doesn't work for the swap slots
> > > > > cache, which is per-CPU.  We could skip the swap slots cache if
> > > > > the current node is not the page's node, but that may be
> > > > > overkill, so keep using the current node's swap slots cache.  The
> > > > > issue was found by visual code inspection, so it is not clear how
> > > > > much improvement can be achieved, due to the lack of a suitable
> > > > > testing device.  But anyway, the current code does violate the
> > > > > design.
> > > > >
> > > >
> > > > I intentionally used the reclaimer's nid because I think that when
> > > > swapping out to a device, it is faster when the device is on the
> > > > same node as the CPU.
> > >
> > > OK, the offline discussion with Huang Ying showed that the design was
> > > to use the page's nid in order to achieve better I/O performance
> > > (more noticeable on faster devices), since the reclaimer may be
> > > running on a different node from the reclaimed page.
> > >
> > > > Anyway, I think I can make a test case where the workload allocates
> > > > all its memory on the remote node and its working set is larger
> > > > than the available memory, so that swap is triggered; then we can
> > > > see which way achieves better performance.  Does that sound
> > > > reasonable to you?
> > >
> > > Yeah, definitely, thank you so much.  I don't have a fast enough
> > > device at hand to show the difference right now.  If you could get
> > > some data it would be perfect.
> > >
> >
> > I failed to find a test box that has two NVMe disks attached to
> > different nodes, and since Shanghai is locked down right now we
> > couldn't install another NVMe on the box, so I figured it might be OK
> > to test on a box that has a single NVMe attached to node 0 like this:
> >
> > 1) restrict the test processes to run on node 0 and allocate on node 1;
> > 2) restrict the test processes to run on node 1 and allocate on node 0.
> >
> > In case 1) the reclaimer's node id is the same as the swap device's, so
> > it matches the current behaviour, and in case 2) the page's node id is
> > the same as the swap device's, so it is what your patch proposes.
> >
> > The test I used is vm-scalability/case-swap-w-rand:
> > https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-swap-w-seq
> > It spawns $nr_task processes, each of which mmaps $size and then
> > randomly writes to that area.  I set nr_task=32 and $size=4G, so a
> > total of 128G of memory is needed, and I used memory.limit_in_bytes to
> > restrict the available memory to 64G, to make sure swap is triggered.
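For a concrete picture of what the two cases above exercise, here is a
small standalone model in plain userspace C.  It is only a sketch, not
kernel code; the two-node layout with one swap device attached to each
node is an assumption made for illustration, loosely mirroring the
per-node priority lists described in the changelog quoted above.  It
shows that indexing those lists by the reclaimer's node (what
numa_node_id() gives today) or by the page's node (what the patch
proposes) selects different devices whenever the reclaimer runs on a
different node from the page:

/*
 * Toy model (userspace, not kernel code) of per-node swap device
 * priority lists: two nodes, one device per node, and each node's
 * list prefers its local device.
 */
#include <stdio.h>

#define NR_NODES 2

static const char *device_name[NR_NODES] = {
	"swap device on node 0",
	"swap device on node 1",
};

/* avail[node][0] is the highest-priority device on that node's list. */
static const int avail[NR_NODES][NR_NODES] = {
	{ 0, 1 },	/* node 0 prefers the device attached to node 0 */
	{ 1, 0 },	/* node 1 prefers the device attached to node 1 */
};

/* Pick the highest-priority device from @node's list. */
static int pick_swap_device(int node)
{
	return avail[node][0];
}

int main(void)
{
	int reclaimer_node = 0;	/* CPU doing the reclaim runs on node 0 */
	int page_node = 1;	/* page being reclaimed lives on node 1 */

	printf("current behaviour (reclaimer's node): %s\n",
	       device_name[pick_swap_device(reclaimer_node)]);
	printf("proposed behaviour (page's node):     %s\n",
	       device_name[pick_swap_device(page_node)]);
	return 0;
}

With the reclaimer on node 0 and the page on node 1, the model picks the
node 0 device for the current behaviour and the node 1 device for the
proposed one, which is the difference the test configurations in this
thread are set up to compare.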
> >
> > The reason cgroup is used is to avoid waking up the per-node kswapd,
> > which can trigger swapping with the reclaimer/page/swap device all
> > having the same node id.
> >
> > And I don't see a measurable difference in the result:
> > case1 (using reclaimer's node id) vm-scalability.throughput: 10574 KB/s
> > case2 (using page's node id) vm-scalability.throughput: 10567 KB/s
> >
> > My interpretation of the result is that, when reclaiming a remote
> > page, it doesn't matter much which swap device is used if the swap
> > device is an IO device.
> >
> > Later Ying reminded me we have a test box that has Optane installed
> > on different nodes, so I also tested there: a 2-socket Icelake server
> > with two Optane devices installed, one on each node.  I did the test
> > there like this:
> > 1) restrict the test processes to run on node 0 and allocate on node 1,
> > and only swapon pmem0, the Optane-backed swap device on node 0;
> > 2) restrict the test processes to run on node 0 and allocate on node 1,
> > and only swapon pmem1, the Optane-backed swap device on node 1.
> >
> > So case 1) is the current behaviour and case 2) is what your patch
> > proposes.
> >
> > With the same test and the same nr_task/size, the result is:
> > case1 (using reclaimer's node id) vm-scalability.throughput: 71033 KB/s
> > case2 (using page's node id) vm-scalability.throughput: 58753 KB/s
> >
>
> The per-node swap device support is more about swap-in latency than
> swap-out throughput.  I suspect the test case is more about swap-out
> throughput.  perf profiling can show this.
>

On another thought, swap-out can very well affect swap-in latency: since
swap is involved, available memory is in short supply; a swap-in may
well need to reclaim a page, and that reclaim can itself involve a
swap-out, so swap-out performance can also affect swap-in latency.

> For swap-in latency, we can use pmbench, which can output latency
> information.
>
> Best Regards,
> Huang, Ying
>
> [snip]
>
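Picking up the pmbench suggestion above: pmbench is the tool to use, but
purely as an illustration of what a swap-in latency measurement looks
like, here is a minimal standalone sketch.  It is not pmbench; the 4G
region size and the touch count are arbitrary placeholders, and it
assumes the task runs under a node binding and a memory limit small
enough to force swapping, as in the setup described earlier.  It times
random touches of a large anonymous mapping; once the mapping no longer
fits in the allowed memory, slow touches are dominated by the swap-in
path:

/*
 * Minimal swap-in latency probe (illustrative sketch, not pmbench).
 * mmap an anonymous region, populate it, then time random touches;
 * when run under a memory limit smaller than the region, many touches
 * fault on swapped-out pages and the timings reflect swap-in latency.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

#define REGION_SIZE	(4UL << 30)	/* 4G placeholder; use > allowed memory */
#define PAGE_SZ		4096UL
#define TOUCHES		100000		/* placeholder touch count */

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	char *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	unsigned long i, pages = REGION_SIZE / PAGE_SZ;
	long long t0, dt, worst = 0, total = 0;

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 1, REGION_SIZE);	/* populate; later touches then need swap-in */

	srandom(1);
	for (i = 0; i < TOUCHES; i++) {
		unsigned long pg = (unsigned long)random() % pages;

		t0 = now_ns();
		buf[pg * PAGE_SZ] += 1;	/* may fault and swap the page back in */
		dt = now_ns() - t0;
		total += dt;
		if (dt > worst)
			worst = dt;
	}
	printf("touches: %d avg: %lld ns worst: %lld ns\n",
	       TOUCHES, total / TOUCHES, worst);
	munmap(buf, REGION_SIZE);
	return 0;
}

Run it under the same node binding and memory.limit_in_bytes setup
described earlier so that the random touches actually hit swapped-out
pages; the average and worst-case numbers then include the swap-in path
and, as noted in the reply above, any swap-out that reclaim has to do to
make room for the incoming page.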