On 2018-08-09 16:32, Pankaj Gupta wrote:
>> For device-specific memory space, when we move these areas of pfns into
>> a memory zone, we set the page reserved flag at that time; some of
>> these are reserved for device MMIO, and some are not, such as NVDIMM
>> pmem.
>>
>> Now, we map these dev_dax or fs_dax pages to kvm for the DIMM/NVDIMM
>> backend. Since these pages are reserved, the check in
>> kvm_is_reserved_pfn() misinterprets those pages as MMIO. Therefore, we
>> introduce 2 page map types, MEMORY_DEVICE_FS_DAX/MEMORY_DEVICE_DEV_DAX,
>> to indentify these pages are from NVDIMM pmem. and let kvm treat these

> s/indentify/identify & remove '.'

Thanks Pankaj, :-)

>> as normal pages.
>>
>> Without this patch, many operations will be missed due to this
>> mistreatment of pmem pages. For example, a page may not get the chance
>> to be unpinned for a KVM guest (in kvm_release_pfn_clean), and cannot
>> be marked as dirty/accessed (in kvm_set_pfn_dirty/accessed), etc.
>>
>> Signed-off-by: Zhang Yi <yi.z.zhang@xxxxxxxxxxxxxxx>
>> ---
>>  virt/kvm/kvm_main.c | 8 ++++++--
>>  1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index c44c406..969b6ca 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -147,8 +147,12 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
>>
>>  bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
>>  {
>> -        if (pfn_valid(pfn))
>> -                return PageReserved(pfn_to_page(pfn));
>> +        struct page *page;
>> +
>> +        if (pfn_valid(pfn)) {
>> +                page = pfn_to_page(pfn);
>> +                return PageReserved(page) && !is_dax_page(page);
>> +        }
>>
>>          return true;
>>  }
>> --
>> 2.7.4
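
The is_dax_page() helper used in the hunk above is introduced elsewhere
in this patch series and is not shown here. A minimal sketch of what
such a helper could look like, assuming it keys off the two new pgmap
types named in the changelog (the body below is an illustration, not
the series' actual definition):

        /*
         * Illustrative sketch only; the real helper is defined in
         * another patch of this series. A DAX page is a ZONE_DEVICE
         * page whose dev_pagemap was registered as fs_dax or dev_dax
         * memory.
         */
        static inline bool is_dax_page(const struct page *page)
        {
                return is_zone_device_page(page) &&
                       (page->pgmap->type == MEMORY_DEVICE_FS_DAX ||
                        page->pgmap->type == MEMORY_DEVICE_DEV_DAX);
        }

With a helper along these lines, kvm_is_reserved_pfn() still returns
true for genuinely reserved pages such as device MMIO, while letting
reserved-but-DAX pmem pages through as normal memory, so
kvm_release_pfn_clean() and kvm_set_pfn_dirty()/kvm_set_pfn_accessed()
operate on them again.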