On Fri, Jul 17, 2020 at 01:00:27AM -0700, Ram Pai wrote:
> From: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
>
> When a memory slot is hot plugged to a SVM, PFNs associated with the
> GFNs in that slot must be migrated to secure-PFNs, aka device-PFNs.
>
> Call kvmppc_uv_migrate_mem_slot() to accomplish this.
> Disable page-merge for all pages in the memory slot.
>
> Signed-off-by: Ram Pai <linuxram@xxxxxxxxxx>
> [rearranged the code, and modified the commit log]
> Signed-off-by: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
> ---
>  arch/powerpc/include/asm/kvm_book3s_uvmem.h | 10 ++++++++++
>  arch/powerpc/kvm/book3s_hv.c                | 10 ++--------
>  arch/powerpc/kvm/book3s_hv_uvmem.c          | 22 ++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> index f229ab5..6f7da00 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> @@ -25,6 +25,9 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>  			     struct kvm *kvm, bool skip_page_out);
>  int kvmppc_uv_migrate_mem_slot(struct kvm *kvm,
>  		const struct kvm_memory_slot *memslot);
> +void kvmppc_memslot_create(struct kvm *kvm, const struct kvm_memory_slot *new);
> +void kvmppc_memslot_delete(struct kvm *kvm, const struct kvm_memory_slot *old);

The names look a bit generic, but these functions are specific to secure
guests. Maybe rename them to kvmppc_uvmem_memslot_[create/delete]?

> +
>  #else
>  static inline int kvmppc_uvmem_init(void)
>  {
> @@ -84,5 +87,12 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
>  static inline void
>  kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>  			struct kvm *kvm, bool skip_page_out) { }
> +
> +static inline void kvmppc_memslot_create(struct kvm *kvm,
> +		const struct kvm_memory_slot *new) { }
> +
> +static inline void kvmppc_memslot_delete(struct kvm *kvm,
> +		const struct kvm_memory_slot *old) { }
> +
>  #endif /* CONFIG_PPC_UV */
>  #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index d331b46..bf3be3b 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4515,16 +4515,10 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
>
>  	switch (change) {
>  	case KVM_MR_CREATE:
> -		if (kvmppc_uvmem_slot_init(kvm, new))
> -			return;
> -		uv_register_mem_slot(kvm->arch.lpid,
> -				     new->base_gfn << PAGE_SHIFT,
> -				     new->npages * PAGE_SIZE,
> -				     0, new->id);
> +		kvmppc_memslot_create(kvm, new);
>  		break;
>  	case KVM_MR_DELETE:
> -		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
> -		kvmppc_uvmem_slot_free(kvm, old);
> +		kvmppc_memslot_delete(kvm, old);
>  		break;
>  	default:
>  		/* TODO: Handle KVM_MR_MOVE */
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index a206984..a2b4d25 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -1089,6 +1089,28 @@ int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
>  	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
>  }
>
> +void kvmppc_memslot_create(struct kvm *kvm, const struct kvm_memory_slot *new)
> +{
> +	if (kvmppc_uvmem_slot_init(kvm, new))
> +		return;
> +
> +	if (kvmppc_memslot_page_merge(kvm, new, false))
> +		return;
> +
> +	if (uv_register_mem_slot(kvm->arch.lpid, new->base_gfn << PAGE_SHIFT,
> +				 new->npages * PAGE_SIZE, 0, new->id))
> +		return;
> +
> +	kvmppc_uv_migrate_mem_slot(kvm, new);

Quite a few things can return failure here, including
kvmppc_uv_migrate_mem_slot(), and we are ignoring all of those failures.

I am wondering if this should be called from the prepare_memory_region
callback instead of commit_memory_region. In the prepare phase, we have
a way to back out in case of error. Can you check if moving this call to
the prepare callback is feasible?

In the other case in 1/5, the code issues a KSM unmerge request on
error, but not here. Also check if the code for the first three calls
can be shared with the similar code in 1/5.

Regards,
Bharata.
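
PS: To illustrate the kind of unwinding I mean, here is an untested
sketch (using the rename suggested above, and assuming that
kvmppc_memslot_page_merge(kvm, new, true) restores the merge setting the
way the error path in 1/5 does). Making it return an int would also let
it move to the prepare callback as is:

static int kvmppc_uvmem_memslot_create(struct kvm *kvm,
		const struct kvm_memory_slot *new)
{
	int ret;

	ret = kvmppc_uvmem_slot_init(kvm, new);
	if (ret)
		return ret;

	ret = kvmppc_memslot_page_merge(kvm, new, false);
	if (ret)
		goto out_free_slot;

	ret = uv_register_mem_slot(kvm->arch.lpid,
				   new->base_gfn << PAGE_SHIFT,
				   new->npages * PAGE_SIZE, 0, new->id);
	if (ret) {
		/* map the ultravisor failure to a host error code */
		ret = -EFAULT;
		goto out_merge;
	}

	ret = kvmppc_uv_migrate_mem_slot(kvm, new);
	if (ret)
		goto out_unregister;

	return 0;

out_unregister:
	uv_unregister_mem_slot(kvm->arch.lpid, new->id);
out_merge:
	/* restore the merge setting changed above (as done on error in 1/5) */
	kvmppc_memslot_page_merge(kvm, new, true);
out_free_slot:
	kvmppc_uvmem_slot_free(kvm, new);
	return ret;
}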