On Thu, Apr 8, 2021 at 11:26 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
>
>
> On 2021/3/31 4:05 PM, Xie Yongji wrote:
> > This implements an MMU-based IOMMU driver to support mapping
> > kernel DMA buffers into userspace. The basic idea behind it is
> > to treat the MMU (VA->PA) as an IOMMU (IOVA->PA). The driver will
> > set up MMU mappings instead of IOMMU mappings for the DMA transfer
> > so that the userspace process is able to use its virtual address
> > to access the DMA buffer in the kernel.
> >
> > And to avoid security issues, a bounce-buffering mechanism is
> > introduced to prevent userspace from accessing the original buffer
> > directly.
> >
> > Signed-off-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
>
>
> Acked-by: Jason Wang <jasowang@xxxxxxxxxx>
>
> With some nits:
>
>
> > ---
> >  drivers/vdpa/vdpa_user/iova_domain.c | 521 +++++++++++++++++++++++++++++++++++
> >  drivers/vdpa/vdpa_user/iova_domain.h |  70 +++++
> >  2 files changed, 591 insertions(+)
> >  create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
> >  create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
>
> [...]
>
> > +static void vduse_domain_bounce(struct vduse_iova_domain *domain,
> > +                                dma_addr_t iova, size_t size,
> > +                                enum dma_data_direction dir)
> > +{
> > +        struct vduse_bounce_map *map;
> > +        unsigned int offset;
> > +        void *addr;
> > +        size_t sz;
> > +
> > +        while (size) {
> > +                map = &domain->bounce_maps[iova >> PAGE_SHIFT];
> > +                offset = offset_in_page(iova);
> > +                sz = min_t(size_t, PAGE_SIZE - offset, size);
> > +
> > +                if (WARN_ON(!map->bounce_page ||
> > +                            map->orig_phys == INVALID_PHYS_ADDR))
> > +                        return;
> > +
> > +                addr = page_address(map->bounce_page) + offset;
> > +                do_bounce(map->orig_phys + offset, addr, sz, dir);
> > +                size -= sz;
> > +                iova += sz;
> > +        }
> > +}
> > +
> > +static struct page *
> > +vduse_domain_get_mapping_page(struct vduse_iova_domain *domain, u64 iova)
>
>
> It's better to rename this as "vduse_domain_get_coherent_page"?
>

OK.

> > +{
> > +        u64 start = iova & PAGE_MASK;
> > +        u64 last = start + PAGE_SIZE - 1;
> > +        struct vhost_iotlb_map *map;
> > +        struct page *page = NULL;
> > +
> > +        spin_lock(&domain->iotlb_lock);
> > +        map = vhost_iotlb_itree_first(domain->iotlb, start, last);
> > +        if (!map)
> > +                goto out;
> > +
> > +        page = pfn_to_page((map->addr + iova - map->start) >> PAGE_SHIFT);
> > +        get_page(page);
> > +out:
> > +        spin_unlock(&domain->iotlb_lock);
> > +
> > +        return page;
> > +}
> > +
>
> [...]
>
> > +
> > +static dma_addr_t
> > +vduse_domain_alloc_iova(struct iova_domain *iovad,
> > +                        unsigned long size, unsigned long limit)
> > +{
> > +        unsigned long shift = iova_shift(iovad);
> > +        unsigned long iova_len = iova_align(iovad, size) >> shift;
> > +        unsigned long iova_pfn;
> > +
> > +        if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> > +                iova_len = roundup_pow_of_two(iova_len);
>
>
> Let's add a comment like the one in dma-iommu.c?
>

Fine.

> (In the future, it looks to me it would be better to move this into
> alloc_iova_fast().)
>

Agree.

Thanks,
Yongji
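
For context on the last nit: the comment being requested presumably resembles
the one dma-iommu.c carries next to the identical power-of-two rounding. The
sketch below shows how the quoted hunk might look once annotated; the comment
text is adapted from drivers/iommu/dma-iommu.c rather than quoted from any
revision of this patch, so treat it as illustrative only:

        /*
         * Freeing non-power-of-two-size allocations back into the IOVA
         * caches will come back to bite us badly, so we have to waste a
         * bit of space rounding up anything cacheable to make sure that
         * can't happen. The order of the unadjusted size will still
         * match upon freeing.
         */
        if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
                iova_len = roundup_pow_of_two(iova_len);

The rationale: the per-CPU IOVA range caches in the iova allocator are indexed
by allocation order up to IOVA_RANGE_CACHE_MAX_SIZE, so keeping cacheable
allocations power-of-two sized ensures that a range freed back into a cache
matches the size class it will later be handed out for.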