On Fri, Feb 23, 2024 at 03:05:59PM -0800, Cong Wang wrote:

Hi Cong,

This is a good topic! We have proposed another solution that
transparently accelerates inter-VM TCP/IP communication within the
same host, based on SMC-D + virtio-ism:

https://lists.oasis-open.org/archives/virtio-comment/202212/msg00030.html

I don't know: can we do better with your proposal?

Best regards,
Dust

>Hi, all
>
>We would like to discuss our inter-VM shared memory communication
>proposal with the BPF community.
>
>First, VMM (virtual machine monitor) technology offers significant
>advantages over native machines when the VMs co-resident on the same
>physical host do not compete for network and computing resources.
>However, when co-resident VMs do compete for resources under high
>workload demands, their performance degrades significantly compared
>to that of native machines, due to the high overhead of guest/host
>domain switches and VMM events. Second, the communication overhead
>between co-resident VMs can be as high as that between VMs located on
>separate physical machines, because the VM abstraction provided by
>the VMM does not differentiate whether a data request comes from a
>co-resident VM or not. More importantly, when TCP/IP is used as the
>communication method, the overhead of the Linux networking stack
>itself is also significant.
>
>Although vsock already offers an optimized alternative for inter-VM
>communication, we argue that its lack of transparency to applications
>is the reason why vsock is not yet widely adopted. Instead of
>introducing yet another socket family, we propose a novel solution
>that uses shared memory with eBPF to bypass the TCP/IP stack
>completely and transparently, bringing co-resident VM communication
>close to its optimum.
>
>We would like to discuss the following (a rough sketch for each point
>appears after the list):
>- How to design a new eBPF map based on IVSHMEM (Inter-VM Shared
>  Memory)?
>- How to reuse the existing eBPF ring buffer?
>- How to leverage the socket map to replace tcp_sendmsg() and
>  tcp_recvmsg() with shared memory logic?
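>
>On the first point: mapping the ivshmem shared region from a guest is
>possible today (QEMU's ivshmem-plain exposes it as PCI BAR2 through
>sysfs), but the BPF map type that would pin those pages does not
>exist yet. A minimal userspace sketch; the PCI address and region
>size below are made-up examples:
>
>/* Guest userspace: map the ivshmem shared region. */
>#include <fcntl.h>
>#include <sys/mman.h>
>#include <unistd.h>
>
>int main(void)
>{
>	/* Example BDF; the real one depends on the VM config. */
>	int fd = open("/sys/bus/pci/devices/0000:00:04.0/resource2",
>		      O_RDWR);
>	if (fd < 0)
>		return 1;
>
>	size_t size = 1 << 20;	/* assume a 1 MiB shared region */
>	void *shm = mmap(NULL, size, PROT_READ | PROT_WRITE,
>			 MAP_SHARED, fd, 0);
>	if (shm == MAP_FAILED)
>		return 1;
>
>	/*
>	 * A new map type (hypothetical BPF_MAP_TYPE_IVSHMEM) could
>	 * pin these pages as the map's data area, so that BPF
>	 * programs in both guests operate on the same memory.
>	 */
>	munmap(shm, size);
>	close(fd);
>	return 0;
>}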
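>
>On the second point: the existing BPF_MAP_TYPE_RINGBUF already has a
>single-producer layout (consumer/producer position pages plus a data
>area) that looks like a natural fit for a shared region. A producer
>sketch using only today's API, with the payload copy elided and the
>record framing made up for illustration:
>
>#include <linux/bpf.h>
>#include <bpf/bpf_helpers.h>
>
>struct {
>	__uint(type, BPF_MAP_TYPE_RINGBUF);
>	__uint(max_entries, 1 << 20); /* power-of-2 pages */
>} tx_ring SEC(".maps");
>
>/* Hypothetical framing for payload copied into the ring. */
>struct msg_hdr {
>	__u32 len;
>};
>
>SEC("sk_msg")
>int produce(struct sk_msg_md *msg)
>{
>	struct msg_hdr *rec;
>
>	rec = bpf_ringbuf_reserve(&tx_ring, sizeof(*rec), 0);
>	if (!rec)
>		return SK_PASS;	/* ring full: fall back to TCP */
>
>	rec->len = msg->size;
>	/* A real implementation would also pull and copy the
>	 * payload (bpf_msg_pull_data()) after the header. */
>	bpf_ringbuf_submit(rec, 0);
>	return SK_PASS;
>}
>
>char _license[] SEC("license") = "GPL";
>
>The open question is whether this layout can be backed by ivshmem
>pages so that the consumer (ring_buffer__poll() in libbpf) runs in
>the peer VM rather than in the same kernel.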
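>
>On the third point: a sockmap plus an sk_msg verdict program can
>already redirect data socket-to-socket within one kernel, skipping
>most of tcp_sendmsg()/tcp_recvmsg(); the inter-VM case would need
>the redirect target to become a shared-memory endpoint rather than a
>local socket. A minimal sketch of the existing intra-kernel
>mechanism, where the fixed key stands in for a real peer lookup:
>
>#include <linux/bpf.h>
>#include <bpf/bpf_helpers.h>
>
>struct {
>	__uint(type, BPF_MAP_TYPE_SOCKMAP);
>	__uint(max_entries, 64);
>	__type(key, __u32);
>	__type(value, __u64);
>} sock_map SEC(".maps");
>
>SEC("sk_msg")
>int redirect_msg(struct sk_msg_md *msg)
>{
>	__u32 key = 0;	/* placeholder: index of the peer socket */
>
>	/* Deliver the payload straight to the peer socket's
>	 * receive queue, bypassing the TCP/IP stack. */
>	return bpf_msg_redirect_map(msg, &sock_map, key,
>				    BPF_F_INGRESS);
>}
>
>char _license[] SEC("license") = "GPL";
>
>Sockets would be added to sock_map (e.g. from a sock_ops program when
>a connection reaches established state, or from userspace with
>bpf_map_update_elem()), and the program attached to the map with
>attach type BPF_SK_MSG_VERDICT.
>
>Thanks.
>Cong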