On Tue, Jul 05, 2011 at 04:39:06PM +0300, Dor Laor wrote:
> On 07/05/2011 03:58 PM, Marcelo Tosatti wrote:
> >On Tue, Jul 05, 2011 at 01:40:08PM +0100, Stefan Hajnoczi wrote:
> >>On Tue, Jul 5, 2011 at 9:01 AM, Dor Laor <dlaor@xxxxxxxxxx> wrote:
> >>>I tried to re-arrange all of the requirements and use cases using
> >>>this wiki page: http://wiki.qemu.org/Features/LiveBlockMigration
> >>>
> >>>It would be best to agree upon the most interesting use cases
> >>>(while we make sure we cover future ones) and agree on them.
> >>>The next step is to set the interface for all the various verbs,
> >>>since the implementation seems to be converging.
> >>
> >>Live block copy was supposed to support snapshot merge. I think the
> >>currently favored approach is to make the source image a backing
> >>file to the destination image and essentially do image streaming.
> >>
> >>Using this mechanism for snapshot merge is tricky. The COW file
> >>already uses the read-only snapshot base image. So now we cannot
> >>trivially copy the COW file contents back into the snapshot base
> >>image using live block copy.
> >
> >It never did. Live copy creates a new image where both snapshot and
> >"current" are copied to.
> >
> >This is similar to image streaming.
> 
> Not sure I see what's bad about an in-place merge:
> 
> Let's suppose we have this COW chain:
> 
>   base <-- s1 <-- s2
> 
> Now a live snapshot is created over s2, s2 becomes RO and s3 is RW:
> 
>   base <-- s1 <-- s2 <-- s3
> 
> Now we're done with s2 (post backup) and would like to merge s3 into
> s2.
> 
> With your approach we use live copy of s3 into newSnap:
> 
>   base <-- s1 <-- s2 <-- s3
>   base <-- s1 <-- newSnap
> 
> When it is over, s2 and s3 can be erased.
> The downside is the IO for copying s2's data and the temporary
> storage. I guess temp storage is cheap but excessive IO is expensive.
> 
> My approach was to collapse s3 into s2 and erase s3 eventually:
> 
>   before: base <-- s1 <-- s2 <-- s3
>   after:  base <-- s1 <-- s2
> 
> If we use live block copy using the mirror driver it should be safe
> as long as we keep the ordering of new writes into s3 during the
> execution.
> Even a failure in the middle won't cause harm, since management will
> keep using s3 until it gets a success event.

Well, it is more complicated than simply streaming into a new image.
I'm not entirely sure it is necessary. The common case is:

  base -> sn-1 -> sn-2 -> ... -> sn-n

When n reaches a limit, you do:

  base -> merge-1

You're potentially copying a similar amount of data when merging back
into a single image (and you can't easily merge multiple snapshots).
If the amount of data that's not in 'base' is large, you leave a new
external file around:

  base -> merge-1 -> sn-1 -> sn-2 -> ... -> sn-n

becomes

  base -> merge-1 -> merge-2

> >>It seems like snapshot merge will require dedicated code that reads
> >>the allocated clusters from the COW file and writes them back into
> >>the base image.
> >>
> >>A very inefficient alternative would be to create a third image,
> >>the "merge" image file, which has the COW file as its backing file:
> >>
> >>snapshot (base) -> cow -> merge

Remember there is a 'base' before 'snapshot'; you don't copy the
entire image.

> >>All data from snapshot and cow is copied into merge, and then
> >>snapshot and cow can be deleted. But this approach results in full
> >>data copying and uses potentially 3x space if cow is close to the
> >>size of snapshot.
> >
> >Management can set a higher limit on the size of data that is
> >merged, and create a new base once exceeded. This avoids copying
> >excessive amounts of data.
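For reference, the dedicated merge code Stefan describes above could
look roughly like the sketch below. It uses the sector-based block
layer calls of this era (bdrv_is_allocated(), bdrv_read(),
bdrv_write()); BUF_SECTORS and the function itself are illustrative,
and error handling, locking and live-write ordering are all omitted:

    #include "block.h"

    #define BUF_SECTORS 128  /* arbitrary batch size for this sketch */

    /* Copy every cluster allocated in the COW overlay back down into
     * its backing image.  bdrv_is_allocated() reports, for the top
     * image only, how many sectors starting at 'sector' share the same
     * allocation status, so we copy allocated runs and skip the holes. */
    static int merge_cow_into_backing(BlockDriverState *cow,
                                      BlockDriverState *backing)
    {
        int64_t sector, end = bdrv_getlength(cow) / BDRV_SECTOR_SIZE;
        uint8_t buf[BUF_SECTORS * BDRV_SECTOR_SIZE];
        int n;

        for (sector = 0; sector < end; sector += n) {
            if (bdrv_is_allocated(cow, sector, BUF_SECTORS, &n)) {
                if (bdrv_read(cow, sector, buf, n) < 0 ||
                    bdrv_write(backing, sector, buf, n) < 0) {
                    return -1;  /* leave the chain intact on any error */
                }
            }
        }
        return 0;
    }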
> >
> >>Any other ideas that reuse live block copy for snapshot merge?
> >>
> >>Stefan
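One detail worth pinning down for the in-place variant Dor proposes:
the write ordering during the merge is what makes a mid-merge failure
harmless. A hypothetical guest write path (not the real mirror driver;
merge_abort() is a placeholder) might look like:

    #include "block.h"

    /* Hypothetical guest write path while s3 is being merged into s2.
     * s3 is written first and stays authoritative, so a failure at
     * any point just means management keeps using the old chain. */
    static int merge_guest_write(BlockDriverState *s3,
                                 BlockDriverState *s2,
                                 int64_t sector, const uint8_t *buf,
                                 int n)
    {
        if (bdrv_write(s3, sector, buf, n) < 0) {
            return -1;      /* ordinary guest I/O error, merge unaffected */
        }
        /* Mirror into s2 so the background copy never re-reads a stale
         * range; if this fails, cancel the merge and keep the chain. */
        if (bdrv_write(s2, sector, buf, n) < 0) {
            merge_abort();  /* placeholder: no success event is emitted */
        }
        return 0;
    }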