I am trying to develop a solution for quickly bringing up hundreds of identical VMs using KVM. The goal is to make heavy use of shared pages to lower the overall memory footprint.

As it stands now, the best approach we have found is to bring up batches of VMs from the same snapshot in a stopped state, with KSM tuned to reclaim memory aggressively. Once the system settles down and KSM has finished its work, the next batch of VMs is launched. This solution is far from optimal, and KSM becomes painfully slow once 100 or so VMs are up. We have modified KSM so that we can assign it specific PIDs to work on, which lets us focus its effort on just the VMs at hand. This helps with system load, but it is still very time consuming to wait for KSM to collapse the shared pages.

I understand that once the VMs are running some of their memory will diverge, causing the COW pages to balloon, but that is expected. Since all the VMs are identical, there remains a large subset of pages that are relatively static and can be safely shared among the systems.

A possible solution would be to fork the running process, so that each new VM instance starts in a COW memory state. I have successfully gotten a prototype of forking VMs working under QEMU, but I have had little success getting the same functionality working under KVM. Looking through the mailing list, I have found references to problems with the fork system call itself, as well as with mmu_notifiers, that make this a difficult task.

Any guidance on how to go about adding this capability to KVM, or suggestions on different approaches that achieve similar results, would be greatly appreciated.

Thank you,
Andrew
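
P.S. For concreteness, here is a minimal sketch of the standard per-region KSM setup we are building on: mark an anonymous mapping with MADV_MERGEABLE (the equivalent of what QEMU does for guest RAM when memory merging is enabled) and turn the scanner on through its sysfs knobs. The region size and knob values are illustrative placeholders, not our tuned settings, and the knob writes need root.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void write_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");
	if (!f) { perror(path); exit(1); }
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	size_t len = 256UL << 20;	/* 256 MiB stand-in for guest RAM */
	void *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED) { perror("mmap"); return 1; }

	memset(ram, 0xAA, len);		/* identical content, so fully mergeable */

	/* KSM only scans regions that have been marked mergeable. */
	if (madvise(ram, len, MADV_MERGEABLE) != 0) {
		perror("madvise(MADV_MERGEABLE)");
		return 1;
	}

	/* Aggressive scan settings; values are illustrative only. */
	write_knob("/sys/kernel/mm/ksm/pages_to_scan", "10000");
	write_knob("/sys/kernel/mm/ksm/sleep_millisecs", "0");
	write_knob("/sys/kernel/mm/ksm/run", "1");

	pause();			/* keep the mapping alive while KSM scans */
	return 0;
}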
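
And here is a stripped-down illustration of the fork/COW memory state we are after, using plain anonymous memory with no KVM file descriptors involved. Each forked child starts with every page shared copy-on-write with the template and only pays for the pages it dirties. This is the state our QEMU prototype reaches; as far as we understand, it is exactly what breaks under KVM, since the kvm fd and the mmu_notifier registrations are tied to the parent's mm and do not follow the child across fork. This is a toy sketch of the concept, not our prototype.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILD 4			/* number of cloned "VMs" */

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB stand-in for guest RAM */
	unsigned char *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED) { perror("mmap"); return 1; }

	memset(ram, 0x5A, len);		/* "boot" the template image once */

	for (int i = 0; i < NCHILD; i++) {
		pid_t pid = fork();
		if (pid < 0) { perror("fork"); return 1; }
		if (pid == 0) {
			/* Child: all of ram[] is shared COW with the parent;
			 * writing one byte copies only that one page. */
			ram[0] = (unsigned char)i;
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;			/* reap the clones */
	return 0;
}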