From: Michael Kelley <mikelley@xxxxxxxxxxxxx> Sent: Wednesday, September 1, 2021 7:34 PM

[snip]

> > +int netvsc_dma_map(struct hv_device *hv_dev,
> > +		   struct hv_netvsc_packet *packet,
> > +		   struct hv_page_buffer *pb)
> > +{
> > +	u32 page_count = packet->cp_partial ?
> > +		packet->page_buf_cnt - packet->rmsg_pgcnt :
> > +		packet->page_buf_cnt;
> > +	dma_addr_t dma;
> > +	int i;
> > +
> > +	if (!hv_is_isolation_supported())
> > +		return 0;
> > +
> > +	packet->dma_range = kcalloc(page_count,
> > +				    sizeof(*packet->dma_range),
> > +				    GFP_KERNEL);
> > +	if (!packet->dma_range)
> > +		return -ENOMEM;
> > +
> > +	for (i = 0; i < page_count; i++) {
> > +		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
> > +					 + pb[i].offset);
> > +		u32 len = pb[i].len;
> > +
> > +		dma = dma_map_single(&hv_dev->device, src, len,
> > +				     DMA_TO_DEVICE);
> > +		if (dma_mapping_error(&hv_dev->device, dma)) {
> > +			kfree(packet->dma_range);
> > +			return -ENOMEM;
> > +		}
> > +
> > +		packet->dma_range[i].dma = dma;
> > +		packet->dma_range[i].mapping_size = len;
> > +		pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
> > +		pb[i].offset = offset_in_hvpage(dma);
> > +		pb[i].len = len;
> > +	}
>
> Just to confirm, this driver does *not* set the DMA min_align_mask
> like storvsc does. So after the call to dma_map_single(), the offset
> in the page could be different. That's why you are updating the
> pb[i].offset value. Alternatively, you could set the DMA
> min_align_mask, which would ensure the offset is unchanged. I'm OK
> with either approach, though perhaps a comment is warranted to
> explain, as this is a subtle issue.
>

On second thought, I don't think either approach is OK. The default
alignment in the swiotlb is 2K, so if the length of the data in the
buffer were 3K, the data could cross a page boundary in the bounce
buffer when it originally did not (e.g., 3K of data at offset 512 in
the original page fits within that page, but the same data placed at
offset 2K in the bounce buffer spills past the 4K boundary). That
would break the above code, which can only deal with one page at a
time. So I think the netvsc driver must also set the DMA
min_align_mask to 4K, which will preserve the offset.
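For illustration, here is a minimal sketch of what that could look
like, mirroring the dma_set_min_align_mask() call that storvsc makes
in storvsc_probe(). The exact call site in the netvsc probe path is
an assumption on my part, and it also assumes dev->dma_parms has been
initialized, since dma_set_min_align_mask() requires that:

	/*
	 * Somewhere early in netvsc_probe(), before any DMA mappings
	 * are created (placement is illustrative). A mask of
	 * HV_HYP_PAGE_SIZE - 1 makes swiotlb preserve the low 12 bits
	 * of the address, so pb[i].offset survives bouncing and a
	 * buffer that fit in one 4K page cannot be split across two.
	 */
	dma_set_min_align_mask(&dev->device, HV_HYP_PAGE_SIZE - 1);

Michael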