On Sat, Oct 08, 2011 at 02:29:34PM +0200, Bernhard Schmidt wrote:
> >I'd suggest that for your workload, you need to allow at least 10GB
> >of disk space per million inodes. Because of the number of small
> >files, XFS is going to need a much larger amount of free space
> >available to prevent aging related freespace fragmentation problems.
> >The above ratio results in a maximum space usage of about 50%, which
> >will avoid such issues. If you need to hold 2 million files, use a
> >20GB filesystem...
>
> I don't need to hold 2 million files, 1 million might be enough, but I
> have to make sure I cannot run out of inodes way before I run out of
> free space.
>
> Generally speaking, I have the following problem:
>
> External nodes are submitting data (mails) to this system as fast as
> they can. The mails can be between 800 bytes and several megabytes.
> There are 50 receivers that write those mails as single files, flat in
> a single directory.
>
> There are 4 worker threads that process a _random_ file out of this
> directory. To process it they need to be able to create a temporary
> file on the same filesystem. Together they are slower than the 50
> receivers (they can process maybe 20% of the incoming rate), which
> means that this incoming directory is going to fill. For the sake of
> the argument let's assume that the amount of mail to be sent is
> unlimited.
>
> The only knob the software has to prevent this from going over is
> free disk space. When free disk space drops below 2 gigabytes, the
> acceptance of new mail is blocked gracefully until there is free
> space again.

You could increase this free space limit - that is likely to reduce
the incidence of too-early ENOSPC.

> It has, however, no way to deal with ENOSPC before that. When it
> cannot create new files due to no free inodes (ext4 with default
> settings) or fragmentation in XFS, it breaks quite horribly and
> cannot recover by itself.
>
> Can I avoid XFS giving ENOSPC due to inode shortage even in worst
> case situations? I would be fine preallocating 1 GB for inode
> storage if that would fix the problem. ext4 with bytes-per-inode =
> blocksize does this fine.
>
> You mentioned an aging problem with XFS. I guess you mean that an
> XFS filesystem will get slower/more fragmented over time with abuse
> like this. The mail submissions above will happen in bursts; during
> normal times it will go down to << 1000 files on the entire
> filesystem (empty incoming directory). Is this enough for XFS to
> "fix itself"?

In most cases, yes.

> BTW, the software can hash the incoming directory into 16 or 16x16
> subdirectories. Would that help XFS in any way with those file sizes?

Directory scalability is not affected by the size of the files the
directories index. OTOH, concurrency of operations would be improved.
That is, if you have all 2 million files in a single directory, only
one process (incoming or processing) can be modifying the directory at
a time. That will serialise a lot of the work that is being done. If
you have those 2M files hashed across 16 directories, then
modification/access collisions will be less likely, hence operations
are more likely to be done in parallel (and therefore faster).

> At first glance I would have said yes, but due to the random access
> in those directories it would still have the entire spool as its
> working set.

Random directory lookups on large directories are pretty efficient on
XFS due to the btree-based name hash indexing scheme they use.

Cheers,

Dave.
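
[Editor's sketch] Since the software's only throttle today is free disk
space, a minimal guard on the receiver side could also watch free inodes
reported by statvfs() before accepting new mail. This is an illustrative
sketch, not the software discussed in the thread; the path, thresholds
and function name are hypothetical, and on XFS the reported free inode
count is an estimate because inodes are allocated dynamically.

```python
# Hypothetical receiver-side headroom check: defer new mail when either
# free space or free inodes fall below a threshold, rather than relying
# on free space alone.
import os

SPOOL = "/var/spool/incoming"      # assumed spool mount point
MIN_FREE_BYTES = 2 * 1024**3       # the 2 GB limit mentioned in the thread
MIN_FREE_INODES = 50_000           # illustrative headroom against inode ENOSPC

def spool_has_headroom(path=SPOOL):
    st = os.statvfs(path)
    free_bytes = st.f_bavail * st.f_frsize
    free_inodes = st.f_favail
    # Note: XFS allocates inodes dynamically, so f_favail is an estimate
    # of how many more inodes could be created, not a preallocated pool.
    return free_bytes >= MIN_FREE_BYTES and free_inodes >= MIN_FREE_INODES

if __name__ == "__main__":
    print("accepting mail" if spool_has_headroom() else "deferring mail")
```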
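
[Editor's sketch] The 16 or 16x16 hashing Bernhard mentions, and the
concurrency benefit Dave describes, could look roughly like the sketch
below: each message lands in a subdirectory derived from a hash of its
unique filename, so concurrent receivers and workers rarely modify the
same directory. The spool path and the queue_file() helper are assumed
for illustration only.

```python
import hashlib
import os

SPOOL = "/var/spool/incoming"   # assumed spool location

def hashed_path(msg_name, levels=2):
    # One hex digit per level gives a fanout of 16 (levels=1)
    # or 16x16 (levels=2).
    digest = hashlib.md5(msg_name.encode()).hexdigest()
    parts = [digest[i] for i in range(levels)]
    return os.path.join(SPOOL, *parts, msg_name)

def queue_file(msg_name, data):
    path = hashed_path(msg_name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # Write to a temporary name in the same directory, then rename into
    # place so the workers never pick up a partially written message.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, path)
```

With levels=2, the spool's files are spread across 256 directories, so
the single-directory modification serialisation described above is far
less likely to be hit.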
--
Dave Chinner
david@xxxxxxxxxxxxx