On Saturday 05 June 2010, Maxim Levitsky wrote:
> On Sat, 2010-06-05 at 20:45 +0200, Rafael J. Wysocki wrote:
> > On Saturday 05 June 2010, Nigel Cunningham wrote:
> > > Hi again.
> > >
> > > As I think about this more, I reckon we could run into problems at
> > > resume time with reloading the image. Even if some bits aren't modified
> > > as we're writing the image, they still might need to be atomically
> > > restored. If we make the atomic restore part too small, we might not be
> > > able to do that.
> > >
> > > So perhaps the best thing would be to stick with the way TuxOnIce splits
> > > the image at the moment (page cache / process pages vs 'rest'), but
> > > using this faulting mechanism to ensure we do get all the pages that are
> > > changed while writing the first part of the image.
> >
> > I still don't quite understand why you insist on saving the page cache data
> > upfront and re-using the memory occupied by them for another purpose. If you
> > dropped that requirement, I'd really have much less of a problem with the
> > TuxOnIce approach.
>
> Because it's the biggest advantage?

It isn't, in fact.

> Really, saving the whole of memory makes a huge difference.

You don't have to save the _whole_ memory to get the same speed (you don't
do that anyway, but the amount of data you don't put into the image with
TuxOnIce is smaller). Something like 80% would be just sufficient IMO, and
then (a) the level of complications involved would drop significantly and
(b) you'd be able to use the image-reading code already in the kernel
without any modifications. It really looks like a win-win to me, doesn't it?

Rafael
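
The "faulting mechanism" Nigel mentions can be modeled in user space. What
follows is a minimal illustrative sketch, not TuxOnIce or kernel code: it
stands in for the kernel's page-table write protection with mprotect() and a
SIGSEGV handler, write-protecting a region before the first part of the image
is written and recording which pages are modified mid-save, so that those
pages can be re-saved or folded into the atomically restored part. All names
here (NPAGES, dirty[], etc.) are invented for the example.

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define NPAGES 16

    static long page_size;
    static unsigned char *region;     /* the memory being "saved" */
    static int dirty[NPAGES];         /* pages touched during the save */

    /* Fault handler: a write to a protected page marks it dirty and
     * makes it writable again, so the writer only traps once per page. */
    static void segv_handler(int sig, siginfo_t *si, void *ctx)
    {
        uintptr_t addr = (uintptr_t)si->si_addr & ~(page_size - 1);
        size_t idx = (addr - (uintptr_t)region) / page_size;

        if (idx < NPAGES) {
            dirty[idx] = 1;
            mprotect((void *)addr, page_size, PROT_READ | PROT_WRITE);
        } else {
            _exit(1);             /* genuinely stray fault */
        }
    }

    int main(void)
    {
        struct sigaction sa;

        page_size = sysconf(_SC_PAGESIZE);
        region = mmap(NULL, NPAGES * page_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED)
            return 1;
        memset(region, 0xaa, NPAGES * page_size);

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* Phase 1: write-protect everything, then write it out (the
         * actual I/O is omitted). Any page modified while the save is
         * in flight faults once and is recorded in dirty[]. */
        mprotect(region, NPAGES * page_size, PROT_READ);

        region[3 * page_size] = 1;    /* simulated modification mid-save */
        region[9 * page_size] = 2;

        /* Phase 2: only the dirty pages need saving again (or inclusion
         * in the atomically restored part of the image). */
        for (size_t i = 0; i < NPAGES; i++)
            if (dirty[i])
                printf("page %zu modified during save\n", i);

        return 0;
    }

In the kernel this would of course be done by clearing the write bit in the
page tables and hooking the fault path rather than with signals (and calling
mprotect() from a signal handler is not strictly async-signal-safe, though it
works on Linux for demonstration purposes); the sketch only shows the shape
of the bookkeeping, not how either TuxOnIce or swsusp would implement it.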