Re: Too Many Open Files

Hi,

On Sat, 29 May 2021 at 16:03, Gruher, Joseph R
<joseph.r.gruher@xxxxxxxxx> wrote:
>
> I have a 72-core server running CentOS 8, kernel 4.18.0-240, with FIO 3.27.  I have 12 NVMe drives, each with 12 partitions, and am trying to run multiple jobs for each partition, with numjobs=16 and iodepth=32.  I get a too many open files error from FIO.  Which is fine, that's a lot of threads.  If I change numjobs to 8, same error, if I change it to 4, FIO runs.  Is there a hard or dynamic limit in FIO, and if so, what is it, or how is it calculated?  Thanks!
>
>
> [global]
> ioengine=libaio
> direct=1
> randrepeat=0
> thread=1

In this case I don't think you're hitting an fio limit; it's more
likely your system has limits of its own (e.g. a global maximum and a
separate per-process limit). I'm guessing 12 (disks) * 12 (partitions)
* 16 (jobs) = 2304 files, and 12 * 12 * 8 = 1152 files, both of which
would be too many. The fact that numjobs=4 worked (12 * 12 * 4 = 576
files) makes me think you have a per-user or per-process limit that is
a number like 1024. Does the information in
https://unix.stackexchange.com/q/84227 help?
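As a rough sketch (assuming a typical Linux shell; default values vary
by distro and the exact numbers here are illustrative), you can check
the per-process and system-wide limits and raise the soft limit for
the current shell before launching fio:

```shell
# Per-process soft limit on open file descriptors (often 1024 by default):
ulimit -Sn
# Per-process hard limit (the ceiling an unprivileged user can raise to):
ulimit -Hn
# System-wide maximum across all processes:
cat /proc/sys/fs/file-max
# Raise the soft limit for this shell and its children, up to the hard
# limit -- e.g. enough headroom for 12 * 12 * 16 = 2304 fio files:
ulimit -n 4096
```

Raising it only affects the current shell session; a persistent change
would normally go through /etc/security/limits.conf or a systemd unit.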


--
Sitsofe



