hi, Muchun Song,

On Mon, Jul 15, 2024 at 10:40:43AM +0800, Muchun Song wrote:
> 
> 
> > On Jul 14, 2024, at 20:26, Oliver Sang <oliver.sang@xxxxxxxxx> wrote:
> > 
> > hi, Yu Zhao,
> > 
> > On Wed, Jul 10, 2024 at 12:22:40AM -0600, Yu Zhao wrote:
> >> On Mon, Jul 8, 2024 at 11:11 PM kernel test robot <oliver.sang@xxxxxxxxx> wrote:
> >>> 
> >>> Hello,
> >>> 
> >>> kernel test robot noticed a -34.3% regression of vm-scalability.throughput on:
> >>> 
> >>> 
> >>> commit: 875fa64577da9bc8e9963ee14fef8433f20653e7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
> >> 
> >> This is likely caused by synchronize_rcu() wandering into the
> >> allocation path. I'll patch that up soon.
> >> 
> > 
> > we noticed this commit has already been merged into mainline
> > 
> > [bd225530a4c717714722c3731442b78954c765b3] mm/hugetlb_vmemmap: fix race with speculative PFN walkers
> > branch: linus/master
> 
> Did you test with HVO enabled? (There are two ways to enable HVO: 1) add "hugetlb_free_vmemmap=on" to the
> kernel cmdline, or 2) write 1 to /proc/sys/vm/hugetlb_optimize_vmemmap.) I want to confirm whether the
> regression is related to the HVO routine.

we found a strange thing: after adding 'hugetlb_free_vmemmap=on', the throughput data become
unstable from run to run (we use kexec to go from one job to the next).

below is for 875fa64577 + 'hugetlb_free_vmemmap=on':

  "vm-scalability.throughput": [
    611622,
    645261,
    705923,
    833589,
    840140,
    884010
  ],

as a comparison, without 'hugetlb_free_vmemmap=on', for 875fa64577:

  "vm-scalability.throughput": [
    4597606,
    4357960,
    4385331,
    4631803,
    4554570,
    4462691
  ],

and for 73236245e0 (the parent of 875fa64577):

  "vm-scalability.throughput": [
    6866441,
    6769773,
    6942991,
    6877124,
    6785790,
    6812001
  ],

> 
> Thanks.
> 
> > 
> > and the regression still exists in our tests. do you want us to test your
> > patch? Thanks!
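
For reference, option 2) above can be exercised with a small helper along these lines
(a minimal sketch, not part of the 0-day report; it assumes a kernel built with
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP and needs root):

/* Enable HVO at runtime via the sysctl, i.e. option 2) above.
 * Assumes CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP; run as root. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/hugetlb_optimize_vmemmap", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("1\n", f);	/* "1" enables HVO, "0" disables it */
	return fclose(f) ? 1 : 0;
}

Note that, per the sysctl documentation, flipping it only affects hugetlb pages allocated
afterwards, so booting with 'hugetlb_free_vmemmap=on' remains the simpler way to cover
boot-time reservations in a test like this.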
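
The -34.3% figure quoted at the top can be re-derived from the two stable sample sets
above (a quick standalone check, not part of the report):

/* Recompute the regression from the six-sample means quoted above:
 * mean(875fa64577) / mean(73236245e0) - 1 ~= -34.3%. */
#include <stdio.h>

static double mean(const double *v, int n)
{
	double sum = 0;

	for (int i = 0; i < n; i++)
		sum += v[i];
	return sum / n;
}

int main(void)
{
	/* 875fa64577, without 'hugetlb_free_vmemmap=on' */
	const double patched[] = { 4597606, 4357960, 4385331,
				   4631803, 4554570, 4462691 };
	/* 73236245e0, the parent commit */
	const double parent[]  = { 6866441, 6769773, 6942991,
				   6877124, 6785790, 6812001 };
	double p = mean(patched, 6);
	double q = mean(parent, 6);

	printf("%.1f%%\n", (p / q - 1) * 100);	/* prints -34.3 */
	return 0;
}

The HVO-on samples are left out on purpose: with roughly a 45% spread between the slowest
and fastest run, they show the run-to-run instability rather than a single regression number.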