Hello,

This is an attempt to implement memory snapshot support for the checkpoint-restore project (http://criu.org).

To create a dump of an application we save all the information about it to files. Not surprisingly, the biggest part of such a dump is the contents of the tasks' memory. However, in some usage scenarios it's not necessary to fetch _all_ of the tasks' memory every time a dump is taken.

For example, when doing periodical dumps it's enough to take a full memory dump only on the first pass and then dump only the incremental changes. Another example is live migration. In its simplest form it looks like this -- create a dump, copy it to the remote node, then restore the tasks from the dump files. While this whole dump-copy-restore sequence runs, all the processes must stay stopped. However, if we can monitor how the tasks change their memory, we can dump and copy it in smaller chunks, periodically updating the copy, and thus freeze the tasks only at the very end, and only for the short time it takes to pick up the most recent changes.

That said, some help from the kernel is required to watch how processes modify the contents of their memory. I'd like to propose one possible solution to this task -- using page faults and trace events.

Briefly, the approach is: remap some memory regions as read-only, take the #PF on a task's attempt to modify the memory, and issue a trace event for it. Since we're only interested in parts of the memory of some tasks, make it possible to mark the vmas we're interested in and issue events for those only. Also, to be aware of tasks unmapping the vmas being watched, issue an event when a marked vma is removed (and, for symmetry, an event when a vma is marked).

What do you think about this approach? Is this way of supporting memory snapshots OK for you, or should we invent a better one?
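To illustrate what this buys us for live migration, here is a rough, purely illustrative sketch of the pre-copy loop that such an interface would allow. None of the helpers below exist today -- they are placeholders for whatever call ends up marking a task's vmas for write-tracking and for a consumer of the proposed trace events.

/*
 * Illustrative sketch only.  The helpers stand in for (a) the proposed
 * vma-marking interface and (b) a reader of the write-fault trace events
 * that reports which pages were touched since the previous pass.
 */
#include <stddef.h>
#include <sys/types.h>

#define MAX_ROUNDS	8	/* give up and freeze after this many passes */
#define SMALL_DELTA	64	/* pages; small enough to copy while frozen */

/* Mark the task's vmas of interest and start emitting write events. */
static int start_mem_tracking(pid_t pid)		{ (void)pid; return 0; }
/* Full memory dump, taken only once, while the tasks keep running. */
static int send_full_memory_dump(pid_t pid)		{ (void)pid; return 0; }
/* Pages reported dirty by the trace events since the previous call. */
static long collect_dirty_pages(pid_t pid, void **list)	{ (void)pid; *list = NULL; return 0; }
/* Transfer the listed pages to the destination node. */
static int send_pages(void *list, long nr)		{ (void)list; (void)nr; return 0; }
static int freeze_tasks(pid_t pid)			{ (void)pid; return 0; }
static int dump_rest_and_restore_remotely(pid_t pid)	{ (void)pid; return 0; }

int migrate_live(pid_t pid)
{
	void *dirty = NULL;
	long nr;
	int round;

	start_mem_tracking(pid);
	send_full_memory_dump(pid);

	/* Iterative pre-copy: ship only what changed since the last pass. */
	for (round = 0; round < MAX_ROUNDS; round++) {
		nr = collect_dirty_pages(pid, &dirty);
		if (nr <= SMALL_DELTA)
			break;
		send_pages(dirty, nr);
	}

	/* Freeze only now, for the short final delta and non-memory state. */
	freeze_tasks(pid);
	nr = collect_dirty_pages(pid, &dirty);
	send_pages(dirty, nr);

	return dump_rest_and_restore_remotely(pid);
}

Thanks,
Pavel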