RE: Difficulties pushing FIO towards many small files and WORM-style use-case

> -----Original Message-----
> From: David Pineau <david.pineau@xxxxxxxxxxxxxxx>
> Sent: Wednesday, December 02, 2020 8:32 AM
> To: fio@xxxxxxxxxxxxxxx
> Subject: Difficulties pushing FIO towards many small files and WORM-
> style use-case
> 
> Hello,
> 
...
> As we have data on the actual usage of this current software, we know
> the spread of accesses to various size ranges, and we rely on a huge
> number of files accessed by the multi-threaded service. As the pieces
> of data can live a long time on this service and are immutable, I'm
> trying to go for a WORM-style workload with FIO.
> 
> With this information in mind, I build the following FIO
> configuration file:
> 
> >>>>
> [global]
> # File-related config
> directory=/mnt/test-mountpoint
> nrfiles=3000

Most systems won't allow one process to hold 3000 files open at a time
by default. Try
	ulimit -n

which will likely report a default soft limit of 1024 open file
descriptors per process.

The fio openfiles=<int> option can be used to limit how many files it
keeps open at a time.
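
As a sketch, your job file could keep nrfiles=3000 while capping the
number of simultaneously open descriptors (the cap of 512 here is an
illustrative value I picked to stay under a 1024 default, not something
from your config):

	[global]
	directory=/mnt/test-mountpoint
	nrfiles=3000
	# Hypothetical cap: fio cycles through the 3000 files but keeps
	# at most 512 open at once, below a default `ulimit -n` of 1024.
	openfiles=512

Alternatively, you can raise the per-process limit in the shell that
launches fio (e.g. `ulimit -n 65536`), subject to the hard limit set by
your system administrator.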





