Re: [PATCH] generic/558: limit the number of spawned subprocesses




On Wed, Jul 12, 2023 at 12:10:05PM +0200, Mikulas Patocka wrote:
> If we hit the limit of total open files, we have already killed the
> system. At that point the user can't execute any program, because
> executing a program requires opening files.
> 
> I think it is possible to set up cgroups so that a process inside a
> cgroup can't kill the machine by exhausting resources. But
> distributions don't do that, and they don't do it for the root user
> (the test runs as root).

When I looked at this test before I missed the fork-bomb aspect - I was
just looking at the crazy number of pinned inodes (which, looking
again, is still a significant fraction of system memory...)

If we change bcachefs to not report a maximum number of inodes, might
that be more in line with other filesystems? Or is it really just
because bcachefs inodes are tiny?
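(For context, the "maximum number of inodes" in question is what
statfs()/statvfs() reports in f_files; a filesystem can report 0 there
to indicate it has no fixed inode table, which is what several
filesystems already do. A minimal probe of that field, in Python - the
root mount point here is just an example, not what the test uses:)

```python
import os

# Probe the inode accounting a filesystem reports via statvfs(3).
# f_files is the total inode count; a filesystem with no fixed inode
# table can report 0 here, signalling "no maximum number of inodes".
st = os.statvfs("/")  # assumption: probing the root filesystem

print("total inodes (f_files):", st.f_files)
print("free inodes  (f_ffree):", st.f_ffree)

if st.f_files == 0:
    print("filesystem reports no fixed inode limit")
```

A test that sizes its workload from the free-inode count would see a
zero here and have to fall back to some other bound, which is roughly
the behaviour change being asked about.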


