Re: Difficulties pushing FIO towards many small files and WORM-style use-case

On Thu, Dec 3, 2020 at 4:43 PM Elliott, Robert (Servers)
<elliott@xxxxxxx> wrote:
>
>
>
> > -----Original Message-----
> > From: David Pineau <david.pineau@xxxxxxxxxxxxxxx>
> > Sent: Wednesday, December 02, 2020 8:32 AM
> > To: fio@xxxxxxxxxxxxxxx
> > Subject: Difficulties pushing FIO towards many small files and WORM-
> > style use-case
> >
> > Hello,
> >
> ...
> > As we have data on the actual usage of this software, we know the
> > spread of accesses across the various size ranges, and we rely on a
> > huge number of files accessed by the multi-threaded service. Since
> > the pieces of data are immutable and can live a long time on this
> > service, I'm aiming for a WORM-style workload with FIO.
> >
> > With this information in mind, I built the following FIO
> > configuration file:
> >
> > >>>>
> > [global]
> > # File-related config
> > directory=/mnt/test-mountpoint
> > nrfiles=3000
>
> I don't think most systems allow 3000 open files at a time for one
> process by default. Try
>         ulimit -n
>
> which might report that the default limit is 1024 open files per
> process.
>
> The fio openfiles=<int> option can be used to limit how many files it
> keeps open at a time.

The number of open file descriptors is tuned on our servers (the
system usually has 30k-40k files open at any given time, according to
our monitoring), so I'm not limited on that side. But I had indeed
noticed the "openfiles" option, thanks for hinting at it.
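
For reference, here's a minimal sketch of how I expect to combine the
two options (the openfiles value below is just a placeholder of my own
that I'll tune against our monitoring, not a value taken from your
mail):

>>>>
[global]
# File-related config
directory=/mnt/test-mountpoint
nrfiles=3000
# Cap how many of the 3000 files fio keeps open at once; fio will
# open/close files as needed. 100 is a placeholder to be tuned.
openfiles=100
<<<<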
