On Sat, Jun 11, 2011 at 07:01:36AM +0300, Amir G. wrote:
> OK. Now I am convinced that there is no I/O ordering issue,
> since you are never overwriting shared data in-place.
>
> Now I am also convinced that the origin will be so heavily
> fragmented that the solution will not be practical for performance
> sensitive applications. Specifically, applications that use spinning
> media storage and require consistent and predictable performance.

I am also convinced multisnap won't be suitable for every use case.
I want to be very careful to advocate it only for people with
suitable workloads.  Over time I'm sure we'll broaden the range of
suitable applications, for example by tinkering with the allocator,
or by doing some preemptive defragmentation.  It would be
disappointing for everyone to write it off just because it isn't
suitable for, say, high performance database applications.

The very simple allocator I'm using at the moment will try and place
new blocks together.  My hope is that past I/O patterns will be
similar to future ones.  So while the volumes will be fragmented,
blocks for the typical I/O access patterns will still be together.
Much more experimentation is needed.  (There's a toy sketch of this
style of allocator below.)
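To make that concrete, here's a toy userspace illustration of a
"next-fit" policy.  This is not the multisnap allocator itself, just
the general idea: remember where the last allocation landed and scan
the free bitmap from there, so blocks allocated close together in time
tend to land close together on disk.

/*
 * Toy "next-fit" block allocator (illustration only, not the real
 * multisnap code).
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NR_BLOCKS 1024u

struct allocator {
	uint8_t in_use[NR_BLOCKS];  /* a bit per block would do; bytes for clarity */
	uint32_t cursor;            /* block just after the last allocation */
};

static void allocator_init(struct allocator *a)
{
	memset(a, 0, sizeof(*a));
}

/* Return an allocated block number, or -1 if no blocks are free. */
static int64_t alloc_block(struct allocator *a)
{
	uint32_t i;

	for (i = 0; i < NR_BLOCKS; i++) {
		uint32_t b = (a->cursor + i) % NR_BLOCKS;

		if (!a->in_use[b]) {
			a->in_use[b] = 1;
			a->cursor = b + 1;  /* next alloc starts adjacent */
			return b;
		}
	}
	return -1;
}

static void free_block(struct allocator *a, uint32_t b)
{
	a->in_use[b] = 0;
}

int main(void)
{
	struct allocator a;
	int i;

	allocator_init(&a);

	/* A burst of writes gets contiguous blocks: 0 1 2 3. */
	for (i = 0; i < 4; i++)
		printf("%lld ", (long long) alloc_block(&a));

	/*
	 * Freeing an early block doesn't pull the cursor back; the
	 * next allocation (block 4) stays adjacent to the burst.
	 */
	free_block(&a, 1);
	printf("%lld\n", (long long) alloc_block(&a));
	return 0;
}

A first-fit allocator would hand out block 1 again here, interleaving
the new write into the middle of the earlier burst; next-fit keeps
temporally adjacent writes spatially adjacent.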
This is very early days for multisnap; the code is still changing, and
only a few people have run it.  For instance, Lukas tested it on
Thursday and got some unexpectedly poor results.  I'm sure there'll be
a quick fix for it (e.g. wrong cache size, or too much disk seeking
because the metadata and data volumes were at opposite ends of the
same spinning disk), but this shows that I need more people to play
with it.

> I do have a crazy idea, though, how to combine the power of the
> multisnap features with the speed of a raw ext4 fs.

I need to think this through over the weekend.  The metadata interface
is pretty clean, so you could start by looking at that; a rough sketch
of the sort of shape it has follows.
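Every name below is invented for illustration (the real interface is
in the multisnap metadata code); the point is only the shape: a
transactional map from (device id, virtual block) to (physical block,
shared flag).

/*
 * Hypothetical header-style sketch of a snapshot-metadata interface.
 * Names are made up for this example, not the actual multisnap API.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t block_t;

struct ms_metadata;             /* opaque, backed by on-disk btrees */

struct ms_lookup_result {
	block_t phys_block;     /* where the data currently lives */
	bool shared;            /* still shared with a snapshot? */
};

/* Open the metadata device and begin a transaction. */
struct ms_metadata *ms_metadata_open(const char *metadata_dev);

/* Look up a virtual block; fails if it was never provisioned. */
int ms_lookup(struct ms_metadata *md, uint32_t dev_id,
	      block_t virt_block, struct ms_lookup_result *result);

/* Install a mapping, e.g. after allocating a fresh block for a write. */
int ms_insert(struct ms_metadata *md, uint32_t dev_id,
	      block_t virt_block, block_t phys_block);

/* Create a snapshot that initially shares all the origin's mappings. */
int ms_create_snap(struct ms_metadata *md,
		   uint32_t snap_dev_id, uint32_t origin_dev_id);

/* Atomically commit everything since open / the last commit. */
int ms_commit(struct ms_metadata *md);

Nothing in an interface like this needs to know about device-mapper;
it's block remapping plus copy-on-write bookkeeping, which is what
makes sharing the code with a filesystem plausible.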
However, I do find this suggestion surprising.  My priority is block
level snapshots; if I can expose interfaces for you such that we share
code, then that would be great.

- Joe