On 6/14/2021 11:33 PM, Christoph Hellwig wrote:
> On Mon, Jun 14, 2021 at 10:04:06PM +0800, Tianyu Lan wrote:
>> The pages in the hv_page_buffer array here are in the kernel linear
>> mapping. The packet sent to the host contains an array which holds the
>> transaction data. In an isolation VM, the data in these pages needs to
>> be copied to the bounce buffer, so dma_map_single() is called here to
>> map these data pages to the bounce buffer. The vmbus has a ring buffer
>> to/from which the send/receive packets are copied. The ring buffer has
>> already been remapped to the extra space above the shared gpa
>> boundary/vTom while probing the netvsc driver, so the dma map function
>> is not called for the vmbus ring buffer.
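
A minimal sketch of the mapping step described above, assuming swiotlb
backs dma_map_single() in the isolation VM; hv_map_pagebuf() is a
made-up helper name, and the shared gpa boundary/vTom adjustment and
the later unmap are left out:

#include <linux/dma-mapping.h>
#include <linux/hyperv.h>
#include <linux/io.h>
#include <linux/mm.h>

/* Illustrative only: map one hv_page_buffer entry through the bounce
 * buffer and rewrite the descriptor so the host sees the bounce pages.
 */
static int hv_map_pagebuf(struct device *dev, struct hv_page_buffer *pb)
{
	/* pb->pfn refers to a page in the kernel linear mapping. */
	void *va = phys_to_virt(((phys_addr_t)pb->pfn << PAGE_SHIFT) +
				pb->offset);
	dma_addr_t dma = dma_map_single(dev, va, pb->len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* Point the descriptor at the bounce-buffer pages for the host. */
	pb->pfn = dma >> PAGE_SHIFT;
	pb->offset = offset_in_page(dma);
	return 0;
}
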
> So why do we have all that PFN magic instead of using struct page or
> the usual kernel I/O buffers that contain a page pointer?
These PFNs are originally part of the Hyper-V protocol data and will be
sent to the host. The host accepts these GFNs and copies data from/to
guest memory. The translation from VA to PA is done by the caller that
populates the hv_page_buffer array. I will try calling the dma map
function before populating struct hv_page_buffer, which can avoid the
redundant translation between PA and VA.
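
A rough sketch of that reordering, illustrative only: map the skb
linear data first and fill struct hv_page_buffer straight from the
returned dma_addr_t, so the caller no longer needs its own
virt_to_phys()/PFN step. netvsc_fill_pb_mapped() is a made-up name and
error unwinding/unmapping is not shown:

#include <linux/dma-mapping.h>
#include <linux/hyperv.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

static int netvsc_fill_pb_mapped(struct device *dev, struct sk_buff *skb,
				 struct hv_page_buffer *pb)
{
	/* Map first, so the descriptor can be filled from the dma address. */
	dma_addr_t dma = dma_map_single(dev, skb->data, skb_headlen(skb),
					DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* The entry now describes the (bounce-buffered) DMA address. */
	pb->pfn = dma >> PAGE_SHIFT;
	pb->offset = offset_in_page(dma);
	pb->len = skb_headlen(skb);
	return 0;
}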