Hi Sunil:
Thanks for your review.
On 3/2/2021 3:45 AM, Sunil Muthuswamy wrote:
Hi Christoph:
Thanks a lot for your review. There are a few reasons for the current approach:
1) Vmbus drivers don't use the DMA API today.
What is blocking us from making the Hyper-V drivers use the DMA APIs? They
will generally be a no-op when no bounce buffer support is needed.
2) The Hyper-V Vmbus channel ring buffer already plays the bounce buffer
role for most vmbus drivers. Only two kinds of packets from
netvsc/storvsc are not covered by it.
How does this make a difference here?
3) In an AMD SEV-SNP based Hyper-V guest, the physical address used to access
shared memory should be the bounce buffer's physical address plus the shared
memory boundary (e.g., bit 48) reported by a Hyper-V CPUID leaf. That boundary
is called the virtual top of memory (vTOM) in the AMD spec and works as a
watermark: memory below vTOM is treated as private/encrypted, memory above it
as shared. So the associated physical address above the shared memory boundary
needs to be ioremap()/memremap()ed before accessing the bounce buffer.
swiotlb_bounce() uses the low-end physical address to access the bounce buffer,
and that doesn't work in this scenario. If I have something wrong here, please
correct me.
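
To make the address math concrete, here is a rough sketch. The
shared_gpa_boundary parameter and the helper name are placeholders for
illustration only, not existing code:

#include <linux/io.h>

/*
 * Illustrative only: "shared_gpa_boundary" stands for the shared memory
 * boundary (vTOM) reported by the Hyper-V CPUID leaf; the helper name is
 * made up for this sketch.
 */
static void *hv_map_bounce_buffer_shared(phys_addr_t bounce_paddr, size_t size,
					 u64 shared_gpa_boundary)
{
	/* The shared alias of the bounce buffer lives above vTOM. */
	phys_addr_t shared_paddr = bounce_paddr + shared_gpa_boundary;

	/*
	 * swiotlb_bounce() would touch bounce_paddr directly (the low
	 * alias), which doesn't work here; the access has to go through
	 * a mapping of the address above the boundary instead.
	 */
	return memremap(shared_paddr, size, MEMREMAP_WB);
}

The bounce copy would then go through the returned virtual address rather
than the kernel's direct mapping of the low physical address.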
There are alternative implementations of swiotlb on top of the core swiotlb
APIs. One option is to have Hyper-V specific swiotlb wrapper DMA APIs with
the custom logic above.
Agreed. Hyper-V should have its own DMA ops and put the Hyper-V bounce buffer
code in the DMA API callbacks. The vmbus channel ring buffer doesn't need an
additional bounce buffer, so there are two options: 1) don't call the DMA API
around ring buffer accesses, or 2) pass a flag through the DMA API so that the
Hyper-V DMA callbacks don't allocate a bounce buffer for them.
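
As a rough sketch of what such Hyper-V DMA ops could look like (the
DMA_ATTR_HV_RING_BUFFER bit and hv_swiotlb_map() are hypothetical names,
only meant to illustrate option 2, not an actual proposal of the interface):

#include <linux/dma-map-ops.h>

/* Hypothetical attr bit marking vmbus ring buffer mappings (option 2 above) */
#define DMA_ATTR_HV_RING_BUFFER		BIT(16)

/* Hypothetical helper: bounce via swiotlb and return an address above vTOM */
dma_addr_t hv_swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
			  enum dma_data_direction dir, unsigned long attrs);

static dma_addr_t hv_dma_map_page(struct device *dev, struct page *page,
				  unsigned long offset, size_t size,
				  enum dma_data_direction dir,
				  unsigned long attrs)
{
	phys_addr_t paddr = page_to_phys(page) + offset;

	/* Ring buffer pages already act as a bounce buffer; don't bounce again. */
	if (attrs & DMA_ATTR_HV_RING_BUFFER)
		return paddr;

	return hv_swiotlb_map(dev, paddr, size, dir, attrs);
}

static const struct dma_map_ops hyperv_dma_ops = {
	.map_page	= hv_dma_map_page,
	/* .unmap_page, .map_sg, etc. would follow the same pattern */
};

Devices could then be switched to these ops with set_dma_ops() during vmbus
device setup, keeping the bounce buffer logic out of the individual drivers.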
Thanks.
On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
This should be handled by the DMA mapping layer, just like for native
SEV support.
I agree with Christoph's comment that, in principle, this should be handled
using the DMA APIs.