> From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Sent: Saturday, January 7, 2023 12:43 AM
>
> @@ -2368,7 +2372,7 @@ static int iommu_domain_identity_map(struct dmar_domain *domain,
>
>  	return __domain_mapping(domain, first_vpfn,
>  				first_vpfn, last_vpfn - first_vpfn + 1,
> -				DMA_PTE_READ|DMA_PTE_WRITE);
> +				DMA_PTE_READ|DMA_PTE_WRITE, GFP_KERNEL);
>  }

Baolu, can you help confirm whether switching from GFP_ATOMIC to
GFP_KERNEL is OK in this path? It looks fine to me at a quick glance,
but I want to be conservative here.

> @@ -4333,7 +4337,8 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
>
>  	/* Cope with horrid API which requires us to unmap more than the
>  	   size argument if it happens to be a large-page mapping. */
> -	BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));
> +	BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level,
> +			       GFP_ATOMIC));

With level == 0 this is only a lookup, without page table allocation.
From that angle it reads better to use a more relaxed gfp, e.g.
GFP_KERNEL, here.

> @@ -4392,7 +4397,8 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
>  	int level = 0;
>  	u64 phys = 0;
>
> -	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
> +	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level,
> +			     GFP_ATOMIC);

Ditto.
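
To make the level == 0 point concrete, here is a simplified sketch of
the pfn_to_dma_pte() walk with the new gfp parameter. It is paraphrased
from memory of drivers/iommu/intel/iommu.c, not copied verbatim; helper
names are the upstream ones but details of the descent are elided. The
gfp only feeds the allocation of a missing intermediate page-table
page, which a lookup-only caller (level == 0) never reaches:

static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
				      unsigned long pfn, int *target_level,
				      gfp_t gfp)
{
	struct dma_pte *parent = domain->pgd, *pte;
	int level = agaw_to_level(domain->agaw);

	if (!domain_pfn_supported(domain, pfn))
		return NULL;	/* address beyond the domain's AGAW */

	while (1) {
		pte = &parent[pfn_level_offset(pfn, level)];

		/*
		 * Lookup-only callers pass *target_level == 0: the walk
		 * stops at the first superpage or non-present entry and
		 * never reaches the allocation below, so gfp is unused.
		 */
		if (!*target_level &&
		    (dma_pte_superpage(pte) || !dma_pte_present(pte)))
			break;
		if (level == *target_level)
			break;

		if (!dma_pte_present(pte)) {
			/*
			 * Only mapping callers get here: a fresh page
			 * table page is allocated with the passed gfp
			 * (alloc_pgtable_page(domain->nid, gfp)) and
			 * installed into *pte.  Elided for brevity.
			 */
		}

		/* Descend to the next level. */
		parent = phys_to_virt(dma_pte_addr(pte));
		level--;
	}

	if (!*target_level)
		*target_level = level;

	return pte;
}

So for intel_iommu_unmap() and intel_iommu_iova_to_phys(), which both
pass level == 0, the gfp argument is effectively dead and GFP_KERNEL
would document the intent better than GFP_ATOMIC.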