> +static int mcopy_atomic_pte(struct mm_struct *dst_mm,
> +			    pmd_t *dst_pmd,
> +			    struct vm_area_struct *dst_vma,
> +			    unsigned long dst_addr,
> +			    unsigned long src_addr)
> +{
> +	struct mem_cgroup *memcg;
> +	pte_t _dst_pte, *dst_pte;
> +	spinlock_t *ptl;
> +	struct page *page;
> +	void *page_kaddr;
> +	int ret;
> +
> +	ret = -ENOMEM;
> +	page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
> +	if (!page)
> +		goto out;

Not a fatal thing, but still quite inconvenient: a fresh page is
allocated here for every destination mm. So if two tasks have anonymous
private VMAs whose pages are not yet COW-ed from each other, it will be
impossible to keep those pages shared across a userfault. Thus, if we
do post-copy memory migration for such tasks, their memory will end up
effectively COW-ed (each task gets its own copy of formerly shared
pages).

Thanks,
Pavel
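
For reference, a minimal userland sketch of the resolution path this
hunk implements, assuming the UFFDIO_COPY ioctl that this series wires
up to mcopy_atomic_pte(); resolve_fault() and its parameters are
illustrative, not part of the patch. Each ioctl ends up allocating a
brand-new page in the faulting mm via alloc_page_vma() above, which is
exactly why resolving the same missing page in a parent and a forked
child installs two distinct pages, even if they shared one before
migration:

	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>
	#include <string.h>

	/*
	 * Illustrative helper: resolve one userfault by copying the
	 * monitor's copy of the page into the faulting mm.
	 */
	static int resolve_fault(int uffd, unsigned long fault_addr,
				 void *src_page, unsigned long page_size)
	{
		struct uffdio_copy copy;

		memset(&copy, 0, sizeof(copy));
		copy.dst = fault_addr & ~(page_size - 1);  /* page-align */
		copy.src = (unsigned long)src_page;        /* monitor buf */
		copy.len = page_size;
		copy.mode = 0;

		/*
		 * Installs a freshly allocated page into the faulting
		 * mm; any prior COW sharing with another mm is lost.
		 */
		return ioctl(uffd, UFFDIO_COPY, &copy);
	}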