Re: [RFC v4 06/11] vduse: Implement an MMU-based IOMMU driver

On Mon, Mar 8, 2021 at 11:52 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
>
>
> > On 2021/3/8 11:45 AM, Yongji Xie wrote:
> > On Mon, Mar 8, 2021 at 11:17 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >>
> >> On 2021/3/5 3:59 PM, Yongji Xie wrote:
> >>> On Fri, Mar 5, 2021 at 3:27 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >>>> On 2021/3/5 3:13 PM, Yongji Xie wrote:
> >>>>> On Fri, Mar 5, 2021 at 2:52 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >>>>>> On 2021/3/5 2:15 PM, Yongji Xie wrote:
> >>>>>>
> >>>>>> Sorry if I've asked this before.
> >>>>>>
> >>>>>> But what's the reason for maintaining a dedicated IOTLB here? I think
> >>>>>> we could reuse vduse_dev->iommu, since the device cannot be used by
> >>>>>> both virtio and vhost at the same time, or use
> >>>>>> vduse_iova_domain->iotlb for set_map().
> >>>>>>
> >>>>>> The main difference between domain->iotlb and dev->iotlb is the way
> >>>>>> they deal with the bounce buffer. In the domain->iotlb case, the
> >>>>>> bounce buffer needs to be mapped on each DMA transfer because we need
> >>>>>> to look up the bounce pages by IOVA during DMA unmapping. In the
> >>>>>> dev->iotlb case, the bounce buffer only needs to be mapped once
> >>>>>> during initialization, and that mapping is used to tell userspace how
> >>>>>> to do mmap().
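> >>>>>>
> >>>>>> To make the difference concrete, a rough sketch (helper names and
> >>>>>> fields here are illustrative only, based on my reading of the patch):
> >>>>>>
> >>>>>> /* domain->iotlb case: the bounce page must be recoverable by IOVA
> >>>>>>  * at unmap time, so every DMA transfer inserts (and later removes)
> >>>>>>  * an entry pointing at the per-transfer bounce page. */
> >>>>>> vhost_iotlb_add_range(domain->iotlb, iova, iova + size - 1,
> >>>>>>                       page_to_phys(bounce_page), VHOST_MAP_RW);
> >>>>>>
> >>>>>> /* dev->iotlb case: the whole bounce range is inserted exactly once
> >>>>>>  * at init; afterwards it is only consulted by VDUSE_IOTLB_GET_FD so
> >>>>>>  * that userspace knows what to mmap(). */
> >>>>>> vhost_iotlb_add_range(dev->iotlb, 0, bounce_size - 1,
> >>>>>>                       0 /* offset into the backing fd */,
> >>>>>>                       VHOST_MAP_RW);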
> >>>>>>
> >>>>>> Also, since the vhost IOTLB supports a per-mapping token (opaque),
> >>>>>> can we use that instead of the bounce_pages *?
> >>>>>>
> >>>>>> Sorry, I didn't get you here. Which value do you mean to store in the
> >>>>>> opaque pointer?
> >>>>>>
> >>>>>> So I would like to have a way to use a single IOTLB to manage all
> >>>>>> kinds of mappings. Two possible ideas:
> >>>>>>
> >>>>>> 1) map bounce pages one by one in vduse_dev_map_page(); in
> >>>>>> VDUSE_IOTLB_GET_FD, try to merge the results if they share the same
> >>>>>> fd. Then for bounce pages, userspace still only needs to map them
> >>>>>> once, and we can maintain the actual mapping by storing the page or
> >>>>>> PA in the opaque field of the IOTLB entry.
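> >>>>>>
> >>>>>> E.g. something like this (just a sketch; it assumes an add_range
> >>>>>> variant that carries the per-mapping token, along the lines of a
> >>>>>> vhost_iotlb_add_range_ctx()):
> >>>>>>
> >>>>>> static int vduse_map_bounce_page(struct vduse_dev *dev, u64 iova,
> >>>>>>                                  struct page *page)
> >>>>>> {
> >>>>>>         /* One itree entry per bounce page; the page itself rides in
> >>>>>>          * the opaque field so unmapping can find it by IOVA later. */
> >>>>>>         return vhost_iotlb_add_range_ctx(dev->iommu, iova,
> >>>>>>                                          iova + PAGE_SIZE - 1,
> >>>>>>                                          page_to_phys(page),
> >>>>>>                                          VHOST_MAP_RW, page);
> >>>>>> }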
> >>>>>>
> >>>>>> Looks like userspace still needs to unmap the old region and map a
> >>>>>> new region (whose size has changed) with the fd in each
> >>>>>> VDUSE_IOTLB_GET_FD ioctl.
> >>>>>>
> >>>>>>
> >>>>>> I don't get it here. Can you give an example?
> >>>>>>
> >>>>> For example, userspace needs to process two I/O requests (one page per
> >>>>> request). To process the first request, userspace uses the
> >>>>> VDUSE_IOTLB_GET_FD ioctl to query the IOVA region (0 ~ 4096) and
> >>>>> mmap() it.
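> >>>>>
> >>>>> Roughly like this (field and ioctl names as I understand them from
> >>>>> this patchset, via <linux/vduse.h>; the exact uapi may differ):
> >>>>>
> >>>>> /* needs <sys/ioctl.h> and <sys/mman.h> */
> >>>>> struct vduse_iotlb_entry entry = { .start = req_iova };
> >>>>> int fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
> >>>>>
> >>>>> if (fd >= 0) {
> >>>>>         /* entry.start/entry.last now describe the region backing
> >>>>>          * req_iova; entry.offset is the offset into fd. */
> >>>>>         void *iova_map = mmap(NULL, entry.last - entry.start + 1,
> >>>>>                               PROT_READ | PROT_WRITE, MAP_SHARED,
> >>>>>                               fd, entry.offset);
> >>>>> }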
> >>>> I think in this case we should let VDUSE_IOTLB_GET_FD return the
> >>>> maximum range, as long as it is backed by the same fd.
> >>>>
> >>> But now the bounce pages are mapped one by one. The second page (4096 ~
> >>> 8192) might not be mapped when userspace is processing the first
> >>> request. So the maximum range is 0 ~ 4096 at that time.
> >>>
> >>> Thanks,
> >>> Yongji
> >>
> >> A question: if I read the code correctly, VDUSE_IOTLB_GET_FD will return
> >> the whole bounce map range which is set up in vduse_dev_map_page()? So my
> >> understanding is that userspace may choose to map the whole range via
> >> mmap().
> >>
> > Yes.
> >
> >> So if we 'map' bounce pages one by one in vduse_dev_map_page() (here
> >> 'map' means using multiple itree entries instead of a single one), then
> >> in VDUSE_IOTLB_GET_FD we can keep traversing the itree (dev->iommu)
> >> until the range is backed by a different file.
> >>
> >> With this, there are no userspace-visible changes and there's no need
> >> for the domain->iotlb?
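> >>
> >> Roughly like this (a sketch only; it assumes each bounce entry stores
> >> its backing file in the per-mapping opaque field):
> >>
> >> static void vduse_iotlb_get_fd(struct vduse_dev *dev,
> >>                                struct vduse_iotlb_entry *entry)
> >> {
> >>         struct vhost_iotlb_map *map, *next;
> >>         struct file *f;
> >>
> >>         map = vhost_iotlb_itree_first(dev->iommu, entry->start,
> >>                                       ULLONG_MAX);
> >>         if (!map)
> >>                 return;
> >>
> >>         f = map->opaque;        /* file backing this entry */
> >>         entry->start = map->start;
> >>         entry->last = map->last;
> >>
> >>         /* Keep extending the range while the next contiguous entry is
> >>          * backed by the same file, so userspace sees one maximal range
> >>          * per fd. */
> >>         while ((next = vhost_iotlb_itree_next(map, entry->start,
> >>                                               ULLONG_MAX)) &&
> >>                next->opaque == f && next->start == map->last + 1) {
> >>                 entry->last = next->last;
> >>                 map = next;
> >>         }
> >> }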
> >>
> > In this case, I wonder what range can be obtained if userspace calls
> > VDUSE_IOTLB_GET_FD when the first I/O (e.g. 4K) occurs: [0, 4K) or [0,
> > 64M)? In the current implementation, userspace will map [0, 64M).
>
>
> It should still be [0, 64M). Do you see any issue?
>

Does that mean we still need to map the whole bounce buffer into the itree
(dev->iommu) at initialization?

Thanks,
Yongji



