On Fri, May 21, 2010 at 11:15:18AM -0400, Christoph Hellwig wrote:
> Nick, what exactly is the problem with the reserve + allocate design?
>
> In a delalloc filesystem (which is all those that will care about high
> performance large writes) the write path fundamentally consists of those
> two operations. Getting rid of the get_blocks mess and replacing it
> with a dedicated operations vector will simplify things a lot.

Nothing wrong with it, I think it's a fine idea (although you may still
need a per-bh call to connect the fs metadata to each page). I just much
prefer that operations after the copy not be able to fail, otherwise you
get into all those pagecache corner cases.

BTW, when you say reserve + allocate, this is in the page-dirty path,
right? I thought delalloc filesystems tend to do the actual allocation
in the page-cleaning path. Or am I confused? (Rough sketch of what I
mean at the end of this mail.)

> Punching holes is a rather problematic operation, and as mentioned not
> actually implemented for most filesystems - just decrementing counters
> on errors massively increases the chances that our error handling will
> actually work.

It's just harder for the pagecache. Invalidating and throwing out old
pagecache and splicing in new pages seems a bit of a hack.
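
To make the reserve/allocate split concrete, here is a minimal standalone
sketch (userspace C; the toy_* names are made up for illustration, this is
not an existing kernel interface): space is reserved before the user copy
and can fail cleanly, the real allocation is deferred to the page-cleaning
path, and an aborted copy just decrements counters rather than punching
holes.

/*
 * Standalone sketch, not kernel code. Hypothetical names throughout.
 * Models a reserve-at-write / allocate-at-writeback split behind a
 * dedicated ops vector instead of get_blocks.
 */
#include <stdio.h>
#include <errno.h>

struct toy_inode;

/* Hypothetical per-fs operations vector replacing get_blocks. */
struct toy_write_ops {
	/* Called before the user copy: reserve space, may fail cleanly. */
	int (*reserve)(struct toy_inode *inode, long nr_blocks);
	/* Called at page-cleaning time: convert the reservation into real
	 * blocks. With the reservation held, this cannot hit ENOSPC. */
	void (*allocate)(struct toy_inode *inode, long nr_blocks);
	/* Undo a reservation on error: decrement counters, no hole punching. */
	void (*unreserve)(struct toy_inode *inode, long nr_blocks);
};

struct toy_inode {
	long free_blocks;      /* blocks neither reserved nor allocated */
	long reserved_blocks;  /* delalloc reservations */
	const struct toy_write_ops *ops;
};

static int toy_reserve(struct toy_inode *inode, long nr)
{
	if (inode->free_blocks < nr)
		return -ENOSPC;        /* fail before any pagecache is dirtied */
	inode->free_blocks -= nr;
	inode->reserved_blocks += nr;
	return 0;
}

static void toy_allocate(struct toy_inode *inode, long nr)
{
	/* The reservation guarantees space; just move the accounting. */
	inode->reserved_blocks -= nr;
}

static void toy_unreserve(struct toy_inode *inode, long nr)
{
	inode->reserved_blocks -= nr;
	inode->free_blocks += nr;
}

static const struct toy_write_ops toy_ops = {
	.reserve   = toy_reserve,
	.allocate  = toy_allocate,
	.unreserve = toy_unreserve,
};

/* Write path: reserve, copy; on a failed copy just unreserve. */
static int toy_write(struct toy_inode *inode, long nr, int copy_fails)
{
	int err = inode->ops->reserve(inode, nr);
	if (err)
		return err;
	if (copy_fails) {
		inode->ops->unreserve(inode, nr);  /* no holes to punch */
		return -EFAULT;
	}
	return 0;  /* page dirtied; allocation deferred to writeback */
}

/* Page-cleaning path: the deferred allocation happens here. */
static void toy_writepage(struct toy_inode *inode, long nr)
{
	inode->ops->allocate(inode, nr);
}

int main(void)
{
	struct toy_inode inode = { .free_blocks = 8, .ops = &toy_ops };

	if (toy_write(&inode, 4, 0) == 0)
		toy_writepage(&inode, 4);
	toy_write(&inode, 2, 1);   /* simulated failed copy: cleanly unwound */

	printf("free=%ld reserved=%ld\n",
	       inode.free_blocks, inode.reserved_blocks);
	return 0;
}

The point being that once ->reserve has succeeded, nothing after the copy
is allowed to fail, which is what avoids the pagecache corner cases above.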