On 09/05/2012 03:00 AM, Dave Chinner wrote:
> On Tue, Sep 04, 2012 at 10:10:57AM -0400, Brian Foster wrote:
>> On 09/03/2012 01:28 AM, Dave Chinner wrote:
>>> On Mon, Aug 27, 2012 at 03:51:51PM -0400, Brian Foster wrote:
...
>>
>> Any thoughts on having tunables for both values (time and min size?)
>> on the background scanning?
>
> Well, my suggestion for timing is as per above (xfs_syncd_centisecs
> * 100), but I don't really have any good rule of thumb for the
> minimum size. What threshold do people start to notice this?
>

For the testing I've done so far, I'm hitting EDQUOT with 20-30GB of
space left while sequentially writing to many large files. I'm really
just trying to get used space before failure more into the ballpark of
the limit, so I'm not going to complain too much over leaving a few
hundred MB or so around on an otherwise full quota. ;)

From where I sit, the problem is more when we extend a file by 2, 4 or
8GB and consume a large amount of limited available space. I suppose
for the background scanning, it's more about using a value that doesn't
get in the way of general behavior/performance. I'll do some more
testing in this area.

> I'd SWAG that something like 32MB is a good size to start at because
> most IO subsystems will still be able to reach full bandwidth with
> extents of this size when reading files.
>
> Alternatively, if you can determine if the inode is still in use at
> the time of the scan (e.g. elevated reference count due to an open
> fd) and skip the truncation for those inodes, then a minimum size is
> not really needed, right?
>

Hmm, good idea. Though perhaps I can use min_size as a force parameter
(i.e., trim anything over this size), while the in-use check allows a
more conservative default. I'll have to play around with the right
time/size values some more to get a better feel for it. I'll probably
include tunables at least for testing purposes; they can always be
removed later.

Thanks.
Brian

> Cheers,
>
> Dave.
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs