On Monday 10 July 2006 19:59, Theodore Tso wrote:
> On Mon, Jul 10, 2006 at 12:25:46PM +0200, Roger Larsson wrote:
> > Not double since it is only the first read after a write that needs to be
> > rewritten. My assumption is that most files are written fewer times than
> > they are read. And the read for the copy is free since that was what
> > triggered it.
>
> But there is a cost, and the question is how much does this buy you
> compared to simply getting it right the first time, either via a
> delayed allocation scheme, or where the application knows how big the
> file is up front (as is often the case).

But is the size of the file ALL that is needed? Isn't it interesting to
make use of:
 - what it is written together with?
 - what it is read together with?

Let's take a look at a use case: downloading and compiling a kernel.

Download: goes to the write buffer, delayed enough to notice that it is big.

tar to extract: the read triggers a copy operation -> the copy can be made
contiguous since the size is known. This results in two contiguous writes:
one for the copy operation, the other for the extracted files put in the
'write' part.

make oldconfig: config-file-related stuff will be copied from the 'write'
part.

make: copies files together as they are used; object files are written to
the 'write' part, contiguously; link will move them together as they are
used. (Yes, temporary files are a problem - delayed write?)

Redoing a make should do quite well regarding where files are placed
relative to each other...

Do other allocation schemes really do better without later moving the files?

/RogerL
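
P.S. A rough userspace sketch of the read-triggered copy I have in mind.
Everything below is made up for illustration - toy names only, no real VFS
or allocator interfaces:

/*
 * Toy model of "relocate on the first read after a write".
 * All structures and functions here are hypothetical.
 */
#include <stdio.h>
#include <stdbool.h>

struct toy_file {
    const char *name;
    long size_bytes;        /* known in full once the write completes  */
    bool dirty_unread;      /* written, but not yet read back          */
    bool fragmented;        /* blocks were allocated piecemeal         */
};

/* Pretend to copy the file into one contiguous extent; the size is
 * known by now, so the allocator can reserve the whole run at once. */
static void relocate_contiguous(struct toy_file *f)
{
    printf("relocating %s (%ld bytes) into a contiguous extent\n",
           f->name, f->size_bytes);
    f->fragmented = false;
}

/* Read hook: the first read after a write triggers the copy, so the
 * extra read is "free" - the data is being pulled in anyway. */
static void toy_read(struct toy_file *f)
{
    if (f->dirty_unread && f->fragmented)
        relocate_contiguous(f);
    f->dirty_unread = false;
    printf("reading %s\n", f->name);
}

int main(void)
{
    struct toy_file tarball = {
        .name = "linux-2.6.17.tar.bz2",
        .size_bytes = 40L * 1024 * 1024,
        .dirty_unread = true,
        .fragmented = true,
    };

    toy_read(&tarball);     /* tar extracting it triggers the copy   */
    toy_read(&tarball);     /* later reads see the contiguous layout */
    return 0;
}

In a real filesystem the decision would of course sit in the read path and
the relocation would be done by the block allocator, not by a userspace
copy; the sketch only shows where the trigger goes and why it costs no
extra read.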