On Tue, Aug 20, 2019 at 04:22:15PM +1000, Suraj Jitindar Singh wrote:
> On Fri, 2019-08-09 at 14:11 +0530, Bharata B Rao wrote:
> > KVMPPC driver to manage page transitions of secure guest
> > via H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.
> >
> > H_SVM_PAGE_IN: Move the content of a normal page to a secure page
> > H_SVM_PAGE_OUT: Move the content of a secure page to a normal page
> >
> > Private ZONE_DEVICE memory equal to the amount of secure memory
> > available in the platform for running secure guests is created
> > via a char device. Whenever a page belonging to the guest becomes
> > secure, a page from this private device memory is used to
> > represent and track that secure page on the HV side. The movement
> > of pages between normal and secure memory is done via
> > migrate_vma_pages() using UV_PAGE_IN and UV_PAGE_OUT ucalls.
>
> Hi Bharata,
>
> Please see my patch where I define the bits which define the type of
> the rmap entry:
> https://patchwork.ozlabs.org/patch/1149791/
>
> Please add an entry for the devm pfn type like:
> #define KVMPPC_RMAP_PFN_DEVM	0x0200000000000000	/* secure guest devm pfn */
>
> And the following in the appropriate header file:
>
> static inline bool kvmppc_rmap_is_pfn_devm(unsigned long *rmapp)
> {
> 	return !!((*rmapp & KVMPPC_RMAP_TYPE_MASK) == KVMPPC_RMAP_PFN_DEVM);
> }
>

Sure, I have the equivalents defined locally, will move them to the
appropriate headers.

> Also see comment below.
>
> > +static struct page *kvmppc_devm_get_page(unsigned long *rmap,
> > +					 unsigned long gpa, unsigned int lpid)
> > +{
> > +	struct page *dpage = NULL;
> > +	unsigned long bit, devm_pfn;
> > +	unsigned long nr_pfns = kvmppc_devm.pfn_last -
> > +				kvmppc_devm.pfn_first;
> > +	unsigned long flags;
> > +	struct kvmppc_devm_page_pvt *pvt;
> > +
> > +	if (kvmppc_is_devm_pfn(*rmap))
> > +		return NULL;
> > +
> > +	spin_lock_irqsave(&kvmppc_devm_lock, flags);
> > +	bit = find_first_zero_bit(kvmppc_devm.pfn_bitmap, nr_pfns);
> > +	if (bit >= nr_pfns)
> > +		goto out;
> > +
> > +	bitmap_set(kvmppc_devm.pfn_bitmap, bit, 1);
> > +	devm_pfn = bit + kvmppc_devm.pfn_first;
> > +	dpage = pfn_to_page(devm_pfn);
> > +
> > +	if (!trylock_page(dpage))
> > +		goto out_clear;
> > +
> > +	*rmap = devm_pfn | KVMPPC_PFN_DEVM;
> > +	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
> > +	if (!pvt)
> > +		goto out_unlock;
> > +	pvt->rmap = rmap;
>
> Am I missing something, why does the rmap need to be stored in pvt?
> Given the gpa is already stored and this is enough to get back to the
> rmap entry, right?

I use the rmap entry to note that this guest page is secure and is being
represented by a device memory page on the HV side. When the page becomes
normal again, I need to undo that from dev_pagemap_ops.page_free(), where
I don't have the gpa.

Regards,
Bharata.
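
For context, a minimal sketch of what that page_free() path could look
like, reusing the kvmppc_devm_page_pvt, kvmppc_devm.pfn_bitmap/pfn_first
and kvmppc_devm_lock names quoted above, and assuming pvt is stashed in
page->zone_device_data when the device page is allocated (illustrative
only, not the submitted patch):

/*
 * Sketch: dev_pagemap_ops.page_free() only receives the struct page,
 * so the stored pvt->rmap pointer is the only way back to the guest's
 * rmap slot -- there is no gpa argument here.
 */
static void kvmppc_devm_page_free(struct page *page)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned long flags;
	struct kvmppc_devm_page_pvt *pvt;

	spin_lock_irqsave(&kvmppc_devm_lock, flags);
	pvt = page->zone_device_data;	/* assumed to be set at allocation */
	page->zone_device_data = NULL;

	/* Return the device PFN to the allocator bitmap. */
	bitmap_clear(kvmppc_devm.pfn_bitmap,
		     pfn - kvmppc_devm.pfn_first, 1);

	/* Clear the "backed by a devm page" marker set in *rmap above. */
	*pvt->rmap = 0;
	spin_unlock_irqrestore(&kvmppc_devm_lock, flags);
	kfree(pvt);
}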