On Tue, Apr 13, 2021 at 11:22:16AM -0400, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@xxxxxxxxxxxxx>
>
> In an Isolation VM, all memory shared with the host needs to be marked
> visible to the host via a hypercall. vmbus_establish_gpadl() has already
> done this for the netvsc rx/tx ring buffers. The page buffers used by
> vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
> to map/unmap this memory when sending/receiving packets, and the Hyper-V
> DMA ops callbacks will use swiotlb functions to allocate a bounce buffer
> and copy data from/to the bounce buffer.
>
> Signed-off-by: Tianyu Lan <Tianyu.Lan@xxxxxxxxxxxxx>
> ---
>  drivers/net/hyperv/hyperv_net.h   |  11 +++
>  drivers/net/hyperv/netvsc.c       | 137 ++++++++++++++++++++++++++++--
>  drivers/net/hyperv/rndis_filter.c |   3 +
>  3 files changed, 144 insertions(+), 7 deletions(-)

<...>

> +	packet->dma_range = kzalloc(sizeof(struct dma_range) * page_count,
> +				    GFP_KERNEL);
> +	if (!packet->dma_range)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < page_count; i++) {
> +		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
> +					 + pb[i].offset);
> +		u32 len = pb[i].len;
> +
> +		dma = dma_map_single(&hv_dev->device, src, len,
> +				     DMA_TO_DEVICE);
> +		if (dma_mapping_error(&hv_dev->device, dma))
> +			return -ENOMEM;

Don't you leak dma_range here? And the pages already mapped in earlier
loop iterations are never unmapped on this error path.

BTW, it will be easier if you CC all on all patches, so we will be able
to get the whole context.

Thanks