Hi Peter,

On 21/03/2024 21:50, peterx@xxxxxxxxxx wrote:
> From: Peter Xu <peterx@xxxxxxxxxx>
>
> The script calculates a minimum required size of hugetlb memories, but
> it'll stop working with <1MB huge page sizes, reporting all zeros even if
> huge pages are available.
>
> In reality, the calculation doesn't really need to be as complicated
> either.  Make it simpler and work for KB-level hugepages too.
>
> Cc: Muhammad Usama Anjum <usama.anjum@xxxxxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Nico Pache <npache@xxxxxxxxxx>
> Cc: Muchun Song <muchun.song@xxxxxxxxx>
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
>  tools/testing/selftests/mm/run_vmtests.sh | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
> index c2c542fe7b17..b1b78e45d613 100755
> --- a/tools/testing/selftests/mm/run_vmtests.sh
> +++ b/tools/testing/selftests/mm/run_vmtests.sh
> @@ -152,9 +152,13 @@ done < /proc/meminfo
>  # both of these requirements into account and attempt to increase
>  # number of huge pages available.
>  nr_cpus=$(nproc)
> -hpgsize_MB=$((hpgsize_KB / 1024))
> -half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))

Removing this has broken the uffd-stress "hugetlb" and "hugetlb-private"
tests (further down the file), which rely on $half_ufd_size_MB.
Now that this is not defined, they are called with too few params:

# # ---------------------------------
# # running ./uffd-stress hugetlb 32
# # ---------------------------------
# # ERROR: invalid MiB (errno=0, @uffd-stress.c:454)
# #
# # Usage: ./uffd-stress <test type> <MiB> <bounces>
# #
# # Supported <test type>: anon, hugetlb, hugetlb-private, shmem, shmem-private
# #
# # Examples:
# #
# # # Run anonymous memory test on 100MiB region with 99999 bounces:
# # ./uffd-stress anon 100 99999
# #
# # # Run share memory test on 1GiB region with 99 bounces:
# # ./uffd-stress shmem 1000 99
# #
# # # Run hugetlb memory test on 256MiB region with 50 bounces:
# # ./uffd-stress hugetlb 256 50
# #
# # # Run the same hugetlb test but using private file:
# # ./uffd-stress hugetlb-private 256 50
# #
# # # 10MiB-~6GiB 999 bounces anonymous test, continue forever unless an error triggers
# # while ./uffd-stress anon $[RANDOM % 6000 + 10] 999; do true; done
# #
# # [FAIL]
# not ok 16 uffd-stress hugetlb 32 # exit=1
# # -----------------------------------------
# # running ./uffd-stress hugetlb-private 32
# # -----------------------------------------
# # ERROR: invalid MiB (errno=0, @uffd-stress.c:454)
# #
# # Usage: ./uffd-stress <test type> <MiB> <bounces>
# #
# # Supported <test type>: anon, hugetlb, hugetlb-private, shmem, shmem-private
# #
# # Examples:
# #
# # # Run anonymous memory test on 100MiB region with 99999 bounces:
# # ./uffd-stress anon 100 99999
# #
# # # Run share memory test on 1GiB region with 99 bounces:
# # ./uffd-stress shmem 1000 99
# #
# # # Run hugetlb memory test on 256MiB region with 50 bounces:
# # ./uffd-stress hugetlb 256 50
# #
# # # Run the same hugetlb test but using private file:
# # ./uffd-stress hugetlb-private 256 50
# #
# # # 10MiB-~6GiB 999 bounces anonymous test, continue forever unless an error triggers
# # while ./uffd-stress anon $[RANDOM % 6000 + 10] 999; do true; done
# #
# # [FAIL]
# not ok 17 uffd-stress hugetlb-private 32 # exit=1
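The failure mode above can be sketched in isolation (standalone example, not a line from run_vmtests.sh): when a shell variable is undefined, an unquoted expansion of it disappears from the argument list entirely, so uffd-stress sees "32" where the MiB argument should be.

```shell
#!/bin/sh
# half_ufd_size_MB is intentionally undefined here, mimicking the state
# after the patch removed its definition.
unset half_ufd_size_MB

# Build an argument list the way an unquoted expansion would: the empty
# expansion is dropped, leaving only two arguments instead of three.
set -- hugetlb $half_ufd_size_MB 32
echo "argc=$#: $*"
```

Running this prints `argc=2: hugetlb 32` — the middle argument has vanished rather than becoming an empty string, which is why uffd-stress rejects "32" as an invalid MiB value.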
Thanks,
Ryan

> -needmem_KB=$((half_ufd_size_MB * 2 * 1024))
> +uffd_min_KB=$((hpgsize_KB * nr_cpus * 2))
> +hugetlb_min_KB=$((256 * 1024))
> +if [[ $uffd_min_KB -gt $hugetlb_min_KB ]]; then
> +	needmem_KB=$uffd_min_KB
> +else
> +	needmem_KB=$hugetlb_min_KB
> +fi
>
>  # set proper nr_hugepages
>  if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then
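For illustration only, one possible shape of a fix (a sketch, not a tested patch — the input values below are examples, and the rounding expression is an assumption about how the old 128MB round-up could be redone in KB so that sub-MB page sizes no longer truncate to zero):

```shell
#!/bin/sh
# Example inputs; the real script takes these from nproc and /proc/meminfo.
nr_cpus=4
hpgsize_KB=64           # e.g. a 64KB huge page size, which broke the old math

# Keep half_ufd_size_MB defined for the uffd-stress hugetlb tests, but do
# the arithmetic in KB: round nr_cpus * hpgsize_KB up to the next multiple
# of 128MB and express the result in MB.  The old code divided to MB first,
# so <1MB page sizes collapsed to zero before the round-up.
half_ufd_size_MB=$(( ((nr_cpus * hpgsize_KB + 128 * 1024 - 1) / (128 * 1024)) * 128 ))

# The memory-requirement calculation from the patch, unchanged.
uffd_min_KB=$((hpgsize_KB * nr_cpus * 2))
hugetlb_min_KB=$((256 * 1024))
if [ "$uffd_min_KB" -gt "$hugetlb_min_KB" ]; then
	needmem_KB=$uffd_min_KB
else
	needmem_KB=$hugetlb_min_KB
fi

echo "half_ufd_size_MB=$half_ufd_size_MB needmem_KB=$needmem_KB"
```

With the example values this yields half_ufd_size_MB=128 and needmem_KB=262144, so the hugetlb and hugetlb-private invocations would again receive a non-empty MiB argument.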