Re: [PATCH] generic/558: limit the number of spawned subprocesses




On Wed, Jul 12, 2023 at 11:57:49AM +0200, Mikulas Patocka wrote:
> 
> 
> On Tue, 11 Jul 2023, Darrick J. Wong wrote:
> 
> > On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> > > When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> > > the machine. The reason is that the "while" loop spawns "create_file"
> > > subprocesses faster than they are able to complete.
> > > 
> > > This patch fixes the crash by limiting the number of subprocesses to 128.
> > > 
> > > Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
> > > 
> > > ---
> > >  tests/generic/558 |    1 +
> > >  1 file changed, 1 insertion(+)
> > > 
> > > Index: xfstests-dev/tests/generic/558
> > > ===================================================================
> > > --- xfstests-dev.orig/tests/generic/558
> > > +++ xfstests-dev/tests/generic/558
> > > @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
> > >  while [ $i -lt $loop ]; do
> > >  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> > >  	let i=$i+1
> > > +	if [ $((i % 128)) = 0 ]; then wait; fi
> > 
> > Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
> > test completes nearly instantly on XFS; how many free inodes does
> > bcachefs report after _scratch_mount?
> > 
> > XFS reports ~570k inodes, so it's "only" starting 570 processes.
> > 
> > I think it's probably wise to clamp $loop to something sane, but let's
> > get to the bottom of how the math went wrong and we got a forkbomb.
> > 
> > --D
> 
> bcachefs reports 14509106 total inodes (for a 1GB filesystem)
> 
> As the test proceeds, the number of total inodes (as well as the number of 
> free inodes) decreases.

Aha, ok.  So XFS does a similar thing (includes free space in the free
inodes count), but XFS inodes are 256-2048 bytes, whereas bcachefs
inodes can be as small as a few dozen bytes.  That's why the number of
processes is big enough to forkbomb the system.
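[Editor's illustration, not part of the original mail: generic/558 sets loop to roughly free_inodes / 1000, so the number of background create_file processes scales directly with the free-inode count the filesystem reports. Plugging in the numbers from this thread (the 14509106 figure is bcachefs's total inode count, used here as an approximation of its free count):]

```shell
#!/bin/bash
# loop = free_inodes / 1000, one create_file process per loop iteration.
xfs_free=570000         # ~570k free inodes reported by XFS (from this thread)
bcachefs_free=14509106  # inodes bcachefs reports for a 1GB fs (from this thread)
echo "XFS:      $((xfs_free / 1000)) create_file processes"
echo "bcachefs: $((bcachefs_free / 1000)) create_file processes"
```

Roughly 14500 concurrent subshells on bcachefs versus 570 on XFS, which is consistent with the fork-bomb behaviour Mikulas observed.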

How about we restrict the number of subshells to something resembling
the CPU count?

free_inodes=$(_get_free_inode $SCRATCH_MNT)
nr_cpus=$(( $($here/src/feature -o) * LOAD_FACTOR ))

if ((free_inodes <= nr_cpus)); then
	nr_cpus=1
	files_per_dir=$free_inodes
else
	files_per_dir=$(( (free_inodes + nr_cpus - 1) / nr_cpus ))
fi
mkdir -p $SCRATCH_MNT/testdir

echo "Create $((nr_cpus * files_per_dir)) files in $SCRATCH_MNT/testdir" >>$seqres.full
for ((i = 0; i < nr_cpus; i++)); do
	create_file $SCRATCH_MNT/testdir $files_per_dir $i >>$seqres.full 2>&1 &
done
wait
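[Editor's note: an alternative to batching on a bare `wait` is a rolling cap on in-flight jobs. This is a sketch under stated assumptions, not part of the patch above; it assumes bash >= 4.3 for `wait -n`, and `do_work` is a hypothetical stand-in for create_file:]

```shell
#!/bin/bash
# Rolling concurrency cap: start a new job as soon as any running job
# exits, so the pool stays full instead of draining between batches.
max_jobs=128
do_work() { sleep 0.01; }  # hypothetical stand-in for create_file

for ((i = 0; i < 1000; i++)); do
	# Block while the number of running background jobs is at the cap;
	# `wait -n` (bash 4.3+) returns when any one job finishes.
	while (( $(jobs -rp | wc -l) >= max_jobs )); do
		wait -n
	done
	do_work "$i" &
done
wait
echo "done"
```

Compared with `if [ $((i % 128)) = 0 ]; then wait; fi`, this avoids the stall where one slow job in a batch of 128 holds up the next batch, at the cost of requiring a newer bash.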


--D

> Mikulas
> 


