Re: [PATCH] generic/558: limit the number of spawned subprocesses

On Wed, Jul 12, 2023 at 07:59:07PM +0200, Mikulas Patocka wrote:
> 
> 
> On Wed, 12 Jul 2023, Kent Overstreet wrote:
> 
> > On Wed, Jul 12, 2023 at 12:10:05PM +0200, Mikulas Patocka wrote:
> > > If we hit the limit of total open files, we have already killed the
> > > system. At that point the user can't execute any program, because
> > > executing a program requires opening files.
> > > 
> I think it is possible to set up cgroups so that a process inside a
> cgroup can't kill the machine by exhausting resources. But distributions
> don't do that by default, and they don't do it for the root user (the
> test runs as root).
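
FWIW, the by-hand version of that confinement is just a few writes to
the cgroup2 unified hierarchy. Rough sketch in C - pids.max and
memory.max are the standard cgroup v2 control files, but the group
name and the limit values below are made up for illustration:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a short string to a cgroup control file. */
static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t r;

	if (fd < 0)
		return -1;
	r = write(fd, val, strlen(val));
	close(fd);
	return r < 0 ? -1 : 0;
}

int main(void)
{
	char pid[32];

	/* Create a child group ("fstest" is an arbitrary name). */
	mkdir("/sys/fs/cgroup/fstest", 0755);

	/* Cap how many tasks and how much memory the group may use,
	 * so a runaway test can't take the whole machine down. */
	write_str("/sys/fs/cgroup/fstest/pids.max", "256");
	write_str("/sys/fs/cgroup/fstest/memory.max", "1G");

	/* Move ourselves in; everything we spawn inherits the group. */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_str("/sys/fs/cgroup/fstest/cgroup.procs", pid);

	/* Run the workload under the limits ("./some-test" is a
	 * placeholder). */
	execl("/bin/sh", "sh", "-c", "exec ./some-test", (char *)NULL);
	perror("execl");
	return 1;
}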
> > 
> > When I looked at this test before, I missed the fork-bomb aspect - I
> > was just looking at the crazy number of pinned inodes (which is still
> > a significant fraction of system memory, looking at it again...)
> > 
> > If we change bcachefs to not report a maximum number of inodes, might
> > that be more in line with other filesystems? Or is it really just
> > because bcachefs inodes are tiny?
> 
> I think it's OK to report as many free inodes as fit on the
> filesystem. It's not a bug - we should fix the test, not lie to make
> the test pass.
> 
> There is one misbehavior, though. As the test allocates inodes on
> bcachefs, the reported total number of inodes decreases. Other
> filesystems don't behave this way, and I think bcachefs shouldn't
> change the total number of inodes either.
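
For concreteness: the numbers in question are presumably the
f_files/f_ffree pair that statfs(2) reports. A trivial sketch to
watch them - plain statfs, nothing bcachefs-specific, usage made up:

#include <stdio.h>
#include <sys/vfs.h>

int main(int argc, char **argv)
{
	struct statfs st;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}
	if (statfs(argv[1], &st) != 0) {
		perror("statfs");
		return 1;
	}
	/* f_files is the total inode count, f_ffree the free count. */
	printf("total inodes (f_files): %llu\n",
	       (unsigned long long)st.f_files);
	printf("free inodes  (f_ffree): %llu\n",
	       (unsigned long long)st.f_ffree);
	return 0;
}

On most filesystems f_files holds steady while f_ffree drops; the
observation above is that on bcachefs both move.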

I don't think that's avoidable: bcachefs inodes are variable-size (we
use varints for most fields), so the total number of inodes is only a
guess - it'll also decrease just from writing normal data.
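
To illustrate the size variation (a generic LEB128-style varint,
shown purely as a sketch - bcachefs's actual on-disk varint encoding
is its own format, but the space/value tradeoff is the same idea):

#include <stddef.h>
#include <stdint.h>

/* Encode v into out, 7 payload bits per byte, high bit meaning
 * "more bytes follow".  Small values take 1 byte; UINT64_MAX
 * takes 10. */
static size_t varint_encode(uint8_t *out, uint64_t v)
{
	size_t n = 0;

	while (v >= 0x80) {
		out[n++] = (v & 0x7f) | 0x80;
		v >>= 7;
	}
	out[n++] = v;
	return n;
}

An inode whose fields (size, timestamps, etc.) hold small values
encodes to a few bytes; the same inode with large field values encodes
to many more, so "how many inodes fit" genuinely depends on what's in
them.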


