On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> the machine. The reason is that the "while" loop spawns "create_file"
> subprocesses faster than they are able to complete.
>
> This patch fixes the crash by limiting the number of subprocesses to 128.
>
> Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
>
> ---
>  tests/generic/558 | 1 +
>  1 file changed, 1 insertion(+)
>
> Index: xfstests-dev/tests/generic/558
> ===================================================================
> --- xfstests-dev.orig/tests/generic/558
> +++ xfstests-dev/tests/generic/558

generic/558 was originally shared/006; it was written for specific
filesystems (e.g. xfs) and later shared with other similar local
filesystems. After we changed it into a generic test case, the
`_scratch_mkfs_sized` call keeps it from running on nfs/cifs and any
other filesystem that can't be mkfs'ed to a fixed size.

Originally we assumed that a filesystem with a small, specific size
generally has a limited number of free inodes. That held for a long
time, but now bcachefs looks like an exception :)

I think we must limit the number of processes, and then let each
process create more files if it needs more inodes. That avoids the
fork-bomb problem and lets this case work with bcachefs and any other
filesystem that has lots of free inodes in 1G of space. But we'd
better limit the number of free inodes too; we don't want this case to
run for too long. If a filesystem shows too many free inodes, we can
_notrun "The 1G $FSTYP has too many free inodes!".

Thanks,
Zorro

> @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
>  	while [ $i -lt $loop ]; do
>  		create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
>  		let i=$i+1
> +		if [ $((i % 128)) = 0 ]; then wait; fi
>  	done
>  	wait
>
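
For reference, the batching pattern the patch uses can be sketched as a
standalone script. This is a hypothetical illustration, not xfstests
code: `worker`, `total`, `BATCH`, and the temp file are made-up names,
and `worker` stands in for the test's `create_file`. The idea is the
same: spawn background jobs, but call `wait` every BATCH iterations so
the number of live subprocesses stays bounded instead of growing into a
fork-bomb.

```shell
#!/bin/sh
# Illustrative sketch (assumed names, not from xfstests): bound the
# number of concurrent background jobs by waiting every BATCH spawns.

out="/tmp/batch_demo.$$"
rm -f "$out"

# worker: stand-in for the real create_file helper; records one line
# per completed job so we can verify all jobs ran.
worker() {
	echo "job $1" >> "$out"
}

total=20
BATCH=8
i=0
while [ "$i" -lt "$total" ]; do
	worker "$i" &
	i=$((i + 1))
	# Every BATCH spawns, wait for all outstanding jobs to finish,
	# so at most BATCH subprocesses exist at any time.
	if [ $((i % BATCH)) -eq 0 ]; then wait; fi
done
wait	# collect the final partial batch

count=$(wc -l < "$out")
echo "completed $count jobs"
rm -f "$out"
```

The trade-off, as noted above, is throughput: each `wait` blocks until
the whole batch drains, which is why the reply suggests also capping
the total work (via a free-inode check) so the test doesn't run too
long.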