On 21.11.22 04:03, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed a -6.5% regression of vm-scalability.throughput due to commit:
>
> commit: 088b8aa537c2c767765f1c19b555f21ffe555786 ("mm: fix PageAnonExclusive clearing racing with concurrent RCU GUP-fast")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> in testcase: vm-scalability
> on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz (Cascade Lake) with 128G memory
> with following parameters:
>
> 	thp_enabled: never
> 	thp_defrag: never
> 	nr_task: 1
> 	nr_pmem: 2
> 	priority: 1
> 	test: swap-w-seq
> 	cpufreq_governor: performance
>
> test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
> test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Yes, page_try_share_anon_rmap() might now be a bit more expensive, which in turn makes try_to_unmap_one() a bit more expensive. However, that patch also changes the unconditional TLB flush into a conditional TLB flush, so results might vary heavily between machines/architectures.
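To illustrate (a rough sketch from memory, not the literal diff; the exact code in mm/rmap.c may differ in detail): before the commit, try_to_unmap_one() excluded exclusive anon pages from the deferred-flush path, so they always took a synchronous flush; the commit drops that special-casing because the smp_mb() in page_try_share_anon_rmap() now synchronizes against GUP-fast:

	/*
	 * Before the commit: "&& !anon_exclusive" forced exclusive anon
	 * pages into the synchronous ptep_clear_flush() path.
	 */
	if (should_defer_flush(mm, flags) && !anon_exclusive) {
		pteval = ptep_get_and_clear(mm, address, pvmw.pte);
		set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
	} else {
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}

	/* After the commit, the "&& !anon_exclusive" part is gone. */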
smp_mb__after_atomic() is a NOP on x86, so the smp_mb() before the page_maybe_dma_pinned() check would have to be responsible.
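For reference, the relevant part of page_try_share_anon_rmap() after that commit looks roughly like this (simplified from memory; the device-private handling and the long ordering comments are omitted):

	static inline int page_try_share_anon_rmap(struct page *page)
	{
		/*
		 * Order the PTE clear done by the caller against the pin
		 * check below; a real full barrier (LOCK-prefixed insn)
		 * on x86.
		 */
		if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
			smp_mb();

		if (unlikely(page_maybe_dma_pinned(page)))
			return -EBUSY;
		ClearPageAnonExclusive(page);

		/*
		 * Pairs with GUP-fast re-checking PageAnonExclusive after
		 * pinning; effectively a NOP (compiler barrier only) on x86.
		 */
		if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
			smp_mb__after_atomic();
		return 0;
	}

So the extra cost per unmapped page on this machine should essentially be that single smp_mb().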
While there might certainly be ways to optimize that further (e.g., if the ptep_get_and_clear() already implies an smp_mb()), the facts that:
(1) It's a swap micro-benchmark
(2) We have 3% stddev

don't make me get active now ;)

--
Thanks,

David / dhildenb