RFC -> v1:
* added 'netns' module param to vsock.ko to enable network namespace
  support (disabled by default)
* added 'vsock_net_eq()' to check the "net" assigned to a socket only
  when 'netns' support is enabled (a rough sketch is appended at the
  end of this message)

RFC: https://patchwork.ozlabs.org/cover/1202235/

Now that we have multi-transport upstream, I started looking at
supporting network namespaces in vsock.

As we partially discussed in the multi-transport proposal [1], it
would be nice to support network namespaces in vsock to reach the
following goals:
- isolate host applications from guest applications that use the same
  ports with CID_ANY
- assign the same CID to VMs running in different network namespaces
- partition VMs between VMMs, or at a finer granularity

This new feature is disabled by default, because it changes vsock's
behavior with network namespaces and could break existing
applications. It can be enabled with the new 'netns' module parameter
of vsock.ko.

This implementation provides the following behavior:
- packets received from the host (by G2H transports) are assigned to
  the default netns (init_net)
- packets received from the guest (by the H2G transport, vhost-vsock)
  are assigned to the netns of the process that opens /dev/vhost-vsock
  (usually the VMM; qemu in my tests); see the second sketch appended
  at the end of this message
- for vmci I need some suggestions, because I don't know how to
  implement and test the same behavior in the vmci driver; for now
  vmci uses init_net
- loopback packets are exchanged only within the same netns

I tested the series in this way:

l0_host$ qemu-system-x86_64 -m 4G -M accel=kvm -smp 4 \
          -drive file=/tmp/vsockvm0.img,if=virtio --nographic \
          -device vhost-vsock-pci,guest-cid=3

l1_vm$ echo 1 > /sys/module/vsock/parameters/netns

l1_vm$ ip netns add ns1
l1_vm$ ip netns add ns2
 # same CID on different netns
l1_vm$ ip netns exec ns1 qemu-system-x86_64 -m 1G -M accel=kvm -smp 2 \
        -drive file=/tmp/vsockvm1.img,if=virtio --nographic \
        -device vhost-vsock-pci,guest-cid=4
l1_vm$ ip netns exec ns2 qemu-system-x86_64 -m 1G -M accel=kvm -smp 2 \
        -drive file=/tmp/vsockvm2.img,if=virtio --nographic \
        -device vhost-vsock-pci,guest-cid=4

 # all iperf3 servers listen on CID_ANY and port 5201, but in
 # different netns
l1_vm$ ./iperf3 --vsock -s  # serves connections from l0 or from
                            # guests started in the default netns
                            # (init_net)
l1_vm$ ip netns exec ns1 ./iperf3 --vsock -s
l1_vm$ ip netns exec ns2 ./iperf3 --vsock -s

l0_host$ ./iperf3 --vsock -c 3
l2_vm1$ ./iperf3 --vsock -c 2
l2_vm2$ ./iperf3 --vsock -c 2

[1] https://www.spinics.net/lists/netdev/msg575792.html

Stefano Garzarella (3):
  vsock: add network namespace support
  vsock/virtio_transport_common: handle netns of received packets
  vhost/vsock: use netns of process that opens the vhost-vsock device

 drivers/vhost/vsock.c                   | 29 ++++++++++++-----
 include/linux/virtio_vsock.h            |  2 ++
 include/net/af_vsock.h                  |  7 +++--
 net/vmw_vsock/af_vsock.c                | 41 +++++++++++++++++++------
 net/vmw_vsock/hyperv_transport.c        |  5 +--
 net/vmw_vsock/virtio_transport.c        |  2 ++
 net/vmw_vsock/virtio_transport_common.c | 12 ++++++--
 net/vmw_vsock/vmci_transport.c          |  5 +--
 8 files changed, 78 insertions(+), 25 deletions(-)

--
2.24.1
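
For reference, a minimal sketch of what the 'vsock_net_eq()' helper
described above could look like. This is not the exact code from
patch 1: the 'vsock_net_enabled' flag backing the 'netns' module
parameter is an assumed name; only net_eq() is an existing kernel
helper.

#include <linux/types.h>
#include <net/net_namespace.h>

/* Assumed backing variable for the 'netns' module parameter. */
static bool vsock_net_enabled;

/* Compare the netns of two sockets/packets only when namespace
 * support is enabled; otherwise every namespace matches, so existing
 * applications keep the old behavior.
 */
static inline bool vsock_net_eq(struct net *net, struct net *other_net)
{
        return !vsock_net_enabled || net_eq(net, other_net);
}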
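
And a sketch of the vhost-vsock behavior from the list above: pinning
the netns of the process that opens /dev/vhost-vsock and releasing it
on close. The 'net' field and the function/struct names are
assumptions of this sketch; current->nsproxy->net_ns, get_net() and
put_net() are existing kernel APIs.

#include <linux/nsproxy.h>
#include <linux/sched.h>
#include <net/net_namespace.h>

/* Assumed: a 'net' field added to the vhost-vsock device state. */
struct vhost_vsock_sketch {
        struct net *net;
};

/* Open path of /dev/vhost-vsock: the opener is usually the VMM
 * (e.g. qemu), so packets from this guest are later delivered into
 * the VMM's namespace.  get_net() takes a reference on that netns.
 */
static void vhost_vsock_sketch_open(struct vhost_vsock_sketch *vsock)
{
        vsock->net = get_net(current->nsproxy->net_ns);
}

/* Release path: drop the netns reference taken at open time. */
static void vhost_vsock_sketch_release(struct vhost_vsock_sketch *vsock)
{
        put_net(vsock->net);
}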