> +static int mcopy_atomic_pte(struct mm_struct *dst_mm,
> +			    pmd_t *dst_pmd,
> +			    struct vm_area_struct *dst_vma,
> +			    unsigned long dst_addr,
> +			    unsigned long src_addr)
> +{
> +	struct mem_cgroup *memcg;
> +	pte_t _dst_pte, *dst_pte;
> +	spinlock_t *ptl;
> +	struct page *page;
> +	void *page_kaddr;
> +	int ret;
> +
> +	ret = -ENOMEM;
> +	page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
> +	if (!page)
> +		goto out;

Not fatal, but still quite inconvenient. Since a fresh page is always
allocated here, if two tasks have anonymous private VMAs whose pages are
not yet COW-ed from each other, it will be impossible to keep those
pages shared when filling them in with userfault. So if we do post-copy
memory migration for such tasks, they will end up with their memory
COW-ed (un-shared).

Thanks,
Pavel