On 01/20/2011 08:27 AM, Andrew Morton wrote:
>
> Another way of doing all this would be to implement some sort of
> lookaside cache at the vfs->block boundary. At boot time, load that
> cache up with all the disk blocks which we know the boot will need (a
> single ascending pass across the disk) and then when the vfs/fs goes to
> read a disk block take a peek in that cache first and if it's a hit,
> either steal the page or memcpy it.
>

Ha, this sounds very much like the cleancache project, which has been
submitted for inclusion many times and has even visited and left
linux-next a few times. They solved all these problems with a few VFS
hooks.

> It has the obvious coherence problems which would be pretty simple to
> solve by hooking into the block write path as well.

See cleancache; they solved it with a simple VFS hook.

> The list of needed
> blocks can be very simply generated with existing blktrace
> infrastructure. It does add permanent runtime overhead - once the
> cache is invalidated and disabled, every IO operation would incur a
> test-n-not-taken-branch. Maybe not too bad.
>
> Need to handle small-memory systems somehow, where the cache simply
> ooms the machine or becomes ineffective because it's causing eviction
> elsewhere.
>
> It could perhaps all be implemented as an md or dm driver.
>
> Or even as an IO scheduler. I say this because IO schedulers can be
> replaced on-the-fly, so the caching layer can be unloaded from the
> stack once it is finished with.

Or as a cleancache driver.

Boaz
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html