On Tue, Dec 24, 2024 at 01:20:15AM +0000, Nixiaoming wrote:
> I always thought that RLIMIT_NOFILE limits the number of open files, but when I
> read the code for alloc_fd(), I found that RLIMIT_NOFILE is the largest fd index?
> Is this a mistake in my understanding, or is it a code implementation error?
>
> -----
>
> alloc_fd code:
>
> diff --git a/fs/file.c b/fs/file.c
> index fb1011c..e47ddac 100644
> --- a/fs/file.c
> +++ b/fs/file.c
> @@ -561,6 +561,7 @@ static int alloc_fd(unsigned start, unsigned end, unsigned flags)
>  	 */
>  	error = -EMFILE;
>  	if (unlikely(fd >= end))
> +		// There may be unclosed fds in [end, max], so the number of open files can be greater than RLIMIT_NOFILE.
>  		goto out;
>
>  	if (unlikely(fd >= fdt->max_fds)) {
>
> -----
>
> Test procedure:
> 1. ulimit -n 1024.
> 2. Create 1000 fds.
> 3. ulimit -n 100.
> 4. Close all fds less than 100 and continue to hold the fds greater than 100.
> 5. Call open() and check whether the fd is successfully created.
>
> If RLIMIT_NOFILE were the upper limit on the number of open files, step 5
> should fail, but step 5 returns success.

This is the expected behavior, albeit POSIX is a little sketchy about the
description: https://pubs.opengroup.org/onlinepubs/009696699/functions/getrlimit.html

    RLIMIT_NOFILE
        This is a number one greater than the maximum value that the system
        may assign to a newly-created descriptor. If this limit is
        exceeded, functions that allocate a file descriptor shall fail with
        errno set to [EMFILE]. This limit constrains the number of file
        descriptors that a process may allocate.

Since you freed up values in the range fitting the limit, allocation was
allowed to succeed. Note other systems act the same way -- nobody is
explicitly counting used fds for NOFILE enforcement, and per the above
they should not.

Ultimately it *does* constrain the number of file descriptors a process
may allocate, if you take a look at all values present during the
lifetime of the process.