Re: [PATCH v3 2/2] sysctl: handle overflow for file-max

On Wed, Oct 17, 2018 at 12:33:22AM +0200, Christian Brauner wrote:
> Currently, when writing
> 
> echo 18446744073709551616 > /proc/sys/fs/file-max
> 
> /proc/sys/fs/file-max will overflow and be set to 0 (the value is 2^64,
> which wraps to 0 when parsed into an unsigned 64-bit integer). That
> quickly crashes the system.
> This commit sets the max and min values for file-max and returns -EINVAL
> when the written value exceeds what a long int can hold. Any higher value
> cannot currently be used, as the percpu counters are long ints rather
> than unsigned integers. This behavior also aligns with other tunables
> that return -EINVAL when their range is exceeded. See e.g. [1], [2] and
> others.
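
For reference, the clamp described above amounts to explicit bounds on the
ctl_table entry for file-max. A minimal sketch, modeled on the fs_table
entry in kernel/sysctl.c (the bound values and handler choice are my
reading of the approach, not a verbatim quote of the patch):

	/* Bounds handed to proc_doulongvec_minmax(): writes outside
	 * [0, LONG_MAX] are rejected with -EINVAL. */
	static unsigned long zero_ul = 0;
	static unsigned long long_max = LONG_MAX;

	static struct ctl_table fs_table[] = {
		{
			.procname	= "file-max",
			.data		= &files_stat.max_files,
			.maxlen		= sizeof(files_stat.max_files),
			.mode		= 0644,
			/* range-checked unsigned long handler */
			.proc_handler	= proc_doulongvec_minmax,
			.extra1		= &zero_ul,	/* min */
			.extra2		= &long_max,	/* max */
		},
		{ }
	};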

Mostly sane, but...  get_max_files() users are bloody odd.  The one in
file-max limit reporting looks like a half-arsed attempt in "[PATCH] fix
file counting".  The one in af_unix.c, though...  I don't remember how
that check had come to be - IIRC that was a strange fallout of a thread
with me, Andrea and ANK involved, circa 1999, but I don't remember details;
Andrea, any memories?  It might be worth reconsidering...  The change in
question is in 2.2.4pre6; what do we use unix_nr_socks for?  We try to
limit the number of PF_UNIX socks by 2 * max_files, but max_files can be
huge *and* non-constant (i.e. it can decrease).  What's more, unix_tot_inflight
is unsigned int and max_files might exceed 2^31 just fine since "fs: allow
for more than 2^31 files" back in 2010...  Something's fishy there...
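
The check in question, simplified from unix_create1() in net/unix/af_unix.c
(my paraphrase of the code, not a verbatim quote):

	/* unix_nr_socks is an atomic_long_t counting PF_UNIX sockets;
	 * get_max_files() returns the runtime-writable file-max value,
	 * so the bound below is neither constant nor guaranteed to fit
	 * in 32 bits. */
	atomic_long_inc(&unix_nr_socks);
	if (atomic_long_read(&unix_nr_socks) > 2 * get_max_files())
		goto out;	/* refuse to create yet another socket */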


