On Thu, Jan 20, 2011 at 12:33:46PM +1100, Dave Chinner wrote:
| Given that XFS is aimed towards optimising for the large file/large
| IO/high throughput type of application, I'm comfortable with saying
| that avoiding sub-page writes for optimal throughput IO is an
| application problem and going from there. Especially considering
| that stuff like rsync and untarring kernel tarballs are all
| appending writes so won't take any performance hit at all...

I agree.  I do not expect systems with 64K pages to be used for
single-bit manipulations.  However, I do see a couple of potential
problems.

The one place where page size I/O may not work is for a DMAPI/HSM (DMF)
managed filesystem where some blocks are managed on near-line media.
DMAPI needs to be able to remove and restore extents on a filesystem
block boundary, not a page boundary.

The other downside is that for sparse files, we could end up allocating
space for zero-filled blocks.  There may be some workloads where
significant quantities of space are wasted.

-- 
Geoffrey Wehrman

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
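
[Editor's note: the sparse-file allocation point above can be observed
directly. The sketch below (not from the original thread) writes one
byte deep into an otherwise-empty file and compares the logical size
with the space actually allocated; the exact allocated figure depends
on the filesystem's block size, which is the point — with a 64K block
size matching a 64K page size, each isolated write would pin a full
64K of mostly zero-filled space.]

```python
import os
import tempfile

# Create an empty temp file, then write a single byte 1 MiB in,
# leaving a large hole before it.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 1 << 20, os.SEEK_SET)   # seek 1 MiB into the file
    os.write(fd, b"x")                   # single-byte write
    st = os.stat(path)
    logical = st.st_size                 # 1 MiB + 1 bytes
    allocated = st.st_blocks * 512       # st_blocks is in 512-byte units
    print(f"logical size: {logical}, allocated: {allocated}")
finally:
    os.close(fd)
    os.remove(path)
```

On a typical filesystem only one block is allocated for the write, so
`allocated` is a single block-size figure while `logical` is over a
megabyte; the larger the block size, the more zero-fill each such
write costs.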