> Hi Christoph:
>      Thanks a lot for your review. There are some reasons.
>      1) Vmbus drivers don't use the DMA API now.

What is blocking us from making the Hyper-V drivers use the DMA APIs? They
will generally be a no-op when no bounce buffer support is needed.

>      2) The Hyper-V Vmbus channel ring buffer already plays the bounce
> buffer role for most vmbus drivers. Just two kinds of packets from
> netvsc/storvsc are not covered.

How does this make a difference here?

>      3) In an AMD SEV-SNP based Hyper-V guest, the physical address used
> to access shared memory should be the bounce buffer physical address plus
> a shared memory boundary (e.g. 48 bits) reported via a Hyper-V CPUID leaf.
> It's called virtual top of memory (vTom) in the AMD spec and works as a
> watermark. So it is necessary to ioremap/memremap the associated physical
> address above the shared memory boundary before accessing it.
> swiotlb_bounce() uses the low-end physical address to access the bounce
> buffer, and this doesn't work in this scenario. If something is wrong,
> please correct me.

There are alternative implementations of swiotlb on top of the core
swiotlb APIs. One option is to have Hyper-V specific swiotlb wrapper DMA
APIs with the custom logic above; a rough sketch of what I mean is at the
bottom of this mail.

> Thanks.
>
>
> On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
> > This should be handled by the DMA mapping layer, just like for native
> > SEV support.

I agree with Christoph's comment that, in principle, this should be
handled using the DMA APIs.
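
To make the wrapper idea a bit more concrete, here is a rough sketch of
the address handling such a Hyper-V specific bounce path could do. The
names hv_vtom_shift, hv_shared_pa() and hv_remap_bounce() are made up for
this mail, and 48 is only an example boundary, so please treat this as an
illustration rather than a proposal of actual code:

/*
 * Rough sketch only -- hv_vtom_shift, hv_shared_pa() and hv_remap_bounce()
 * are illustrative names, and 48 is just an example boundary.
 */
#include <linux/io.h>
#include <linux/types.h>

/* Shared memory boundary (vTom) reported via the Hyper-V CPUID leaf. */
static u64 hv_vtom_shift = 48;

/*
 * Alias of a bounce buffer physical address above vTom; this is the
 * address that SEV-SNP hardware treats as shared (unencrypted).
 */
static phys_addr_t hv_shared_pa(phys_addr_t bounce_pa)
{
	return bounce_pa | (1ULL << hv_vtom_shift);
}

/*
 * A Hyper-V specific bounce path would remap the shared alias and copy
 * through that mapping, instead of using the low physical address that
 * swiotlb_bounce() uses today.
 */
static void *hv_remap_bounce(phys_addr_t bounce_pa, size_t size)
{
	return memremap(hv_shared_pa(bounce_pa), size, MEMREMAP_WB);
}

Helpers along these lines could sit behind Hyper-V specific dma_map_ops
installed with set_dma_ops(), calling into the core swiotlb code for slot
allocation, so the generic swiotlb logic would stay untouched.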