Sorry for the patchbomb, but I wanted to flush out my current working series now that it is passing all the testing I've thrown at it, including on a high-IOPS flash device that I won't have access to for much longer. The series really is multiple ones, but we need all of it to reach the end goal.

 1) Inode shrinkage. This has been posted and partially reviewed before, and isn't absolutely necessary, but it will make life easier later on.

 2) Log all inode updates and stop using the VFS dirty tracking for metadata. This has also been posted before, and it did get some updates. It still lacks support for optimizing fdatasync, and it still hasn't been updated to Dave's suggestion to allocate log space from ->writepage. I have tried to implement the latter but ran into various issues; more on that later.

 3) The various quota updates, which have mostly been posted before. This version also has a few new patches that add a proper shrinker callout to the quota code. While the code looks fairly good, the testing doesn't really stress much of it at all. I will have to write a new test case (or a few) that actually keeps a lot of dquots in memory and creates memory pressure before I feel confident enough about these changes.

 4) Stop writing back inodes from async reclaim. This is just a single patch, but a huge change in behaviour.

 5) Implement a way to completely empty the AIL and use it during freeze, umount and remount r/o. This removes a whole lot of nasty heuristics for flushing all metadata back to its regular, non-log place by using a single piece of well-understood code.

 6) Remove xfsbufd and queue up buffers on on-stack lists, with the only one during normal operation being the one in xfsaild. Besides greatly reducing the code, this massively reduces calls into the buffer cache that limit scalability in highly parallel, metadata-intensive loads.

 7) A few cleanups that reduce and centralize log forces when trying to unpin buffers.
Comments on all of these are highly appreciated, and I will resubmit the patches here in smaller chunks as soon as we are confident about one or more of the sub-series.

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs