Re: [PATCH] [RESEND] RFC: List per-process file descriptor consumption when hitting file-max

2009/7/30 Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>:
> If there's some reason why the problem is particularly severe and
> particularly hard to resolve by other means then sure, perhaps explicit
> kernel support is justified.  But is that the case with this specific
> userspace bug?

Well, this can be figured out from userspace by traversing procfs and
counting the entries under fd/ for each process, but doing so is itself
likely to require additional file descriptors, and given that we are at
the point where the limit has already been hit, it may not work. There
is, of course, a good chance that the process that tried to open the
one-too-many descriptor will crash upon failing to do so (and thus free
a bunch of descriptors), but that only creates more confusion: most of
the time, the application that crashes when file-max is reached is not
the one that ate them all.
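For reference, a minimal sketch of that userspace approach (hypothetical helper names, not from any existing tool): walk /proc, count each process's fd/ entries, and print the biggest consumers. It illustrates the caveat above, too: the script itself needs a few free descriptors to read the directories, so it can fail exactly when file-max has been exhausted.

```python
#!/usr/bin/env python3
# Sketch: count open file descriptors per process via procfs.
# Needs root to see other users' fd/ directories, and a few free
# descriptors of its own -- which may be unavailable once file-max
# has been hit.
import os

def fd_counts():
    counts = {}
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            counts[int(pid)] = len(os.listdir('/proc/%s/fd' % pid))
        except OSError:
            # Process exited mid-walk, or permission denied.
            continue
    return counts

if __name__ == '__main__':
    # Top ten descriptor consumers, largest first.
    for pid, n in sorted(fd_counts().items(), key=lambda x: -x[1])[:10]:
        print(pid, n)
```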

So, all in all, in certain cases there is no other way to figure out
which process has been leaking descriptors.

Regards,
--
Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
