On Tue, Mar 30, 2021 at 08:59:42AM +0800, Zorro Lang wrote:
> The ltp/fsstress program always fails in io_uring_queue_init(), which
> returns ENOMEM. This is because io_uring accounts the memory it needs
> against the memlock rlimit, which can be quite low on some setups,
> especially on 64K pagesize machines. root isn't under this restriction,
> but regular users are. So only g/233 and g/270, which use $qa_user to
> run fsstress, fail.
>
> To avoid this failure, set max locked memory to unlimited before
> running fsstress, then restore it after the test is done.
>
> Signed-off-by: Zorro Lang <zlang@xxxxxxxxxx>
> ---
>
> Hi,
>
> V2 removed `ulimit -l $lmem`, since each test case runs in its own
> child process and won't affect other tests.
>
> Thanks,
> Zorro
>
>  tests/generic/233 | 6 ++++++
>  tests/generic/270 | 6 ++++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/tests/generic/233 b/tests/generic/233
> index 7eda5774..cc794c79 100755
> --- a/tests/generic/233
> +++ b/tests/generic/233
> @@ -43,6 +43,12 @@ _fsstress()
>  		-f rename=10 -f fsync=2 -f write=15 -f dwrite=15 \
>  		-n $count -d $out -p 7`
>
> +	# io_uring accounts the memory it needs against the memlock rlimit,
> +	# which can be quite low on some setups (especially 64K pagesize).
> +	# root isn't under this restriction, but regular users are. To keep
> +	# io_uring_queue_init() from failing with ENOMEM, temporarily set
> +	# max locked memory to unlimited.
> +	ulimit -l unlimited
>  	echo "fsstress $args" >> $seqres.full
>  	if ! su $qa_user -c "$FSSTRESS_PROG $args" | tee -a $seqres.full | _filter_num

/me kinda feels like this should be refactored into a common helper,
but somehow when I try to picture that in my head all I can see is a
screeching nightmare of bash goop so feel free to ignore me. :)

--D

>  	then
> diff --git a/tests/generic/270 b/tests/generic/270
> index 3d8656d4..e93940ef 100755
> --- a/tests/generic/270
> +++ b/tests/generic/270
> @@ -37,6 +37,12 @@ _workout()
>  	cp $FSSTRESS_PROG $tmp.fsstress.bin
>  	$SETCAP_PROG cap_chown=epi $tmp.fsstress.bin
>
> +	# io_uring accounts the memory it needs against the memlock rlimit,
> +	# which can be quite low on some setups (especially 64K pagesize).
> +	# root isn't under this restriction, but regular users are. To keep
> +	# io_uring_queue_init() from failing with ENOMEM, temporarily set
> +	# max locked memory to unlimited.
> +	ulimit -l unlimited
>  	(su $qa_user -c "$tmp.fsstress.bin $args" &) > /dev/null 2>&1
>
>  	echo "Run dd writers in parallel"
> --
> 2.30.2
>
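
For illustration only, the common helper Darrick floats above could boil
down to something quite small. The sketch below is just that, a sketch:
_fsstress_raise_memlock is a made-up name, not an existing fstests
helper, and the comments restate the rationale from the patch.

	# Hypothetical common.rc-style helper (name is made up): raise the
	# locked-memory limit so io_uring_queue_init() in fsstress does not
	# fail with ENOMEM once the test drops to $qa_user. root is allowed
	# to raise the limit, and the su'd user inherits it. Each test runs
	# in its own child shell, so there is nothing to restore afterwards.
	_fsstress_raise_memlock()
	{
		ulimit -l unlimited
	}

With something like that in place, g/233 and g/270 would call the helper
right before their `su $qa_user ...` invocations instead of open-coding
the ulimit in each test.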