On Mon, Sep 25, 2023 at 8:26 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Mon, Sep 25, 2023 at 10:34:54AM +0800, Jason Wang wrote:
>
> > > Cloud vendors will similarly use DPUs to create PCI functions that
> > > meet the cloud vendor's internal specification.
> >
> > This can only work if:
> >
> > 1) the internal specification has a finer grain than the virtio spec
> > 2) so it can define what is not implemented in the virtio spec (like
> > migration and compatibility)
>
> Yes, and that is what is happening. Realistically the "spec" is just a
> piece of software that the Cloud vendor owns which is simply ported to
> multiple DPU vendors.
>
> It is the same as VDPA. If VDPA can make multiple NIC vendors
> consistent then why do you have a hard time believing we can do the
> same thing just on the ARM side of a DPU?

I don't. We all know vDPA can do more than virtio.

>
> > All of the above doesn't seem to be possible or realistic now, and it
> > actually risks being incompatible with the virtio spec. In the
> > future, when virtio supports live migration, users will want to be
> > able to migrate between virtio and vDPA.
>
> Well, that is for the spec to design.

Right, so if we consider migration from virtio to vDPA, it needs to be
designed in a way that allows more involvement from the hypervisor,
rather than coupling it to a specific interface (like admin
virtqueues).

>
> > > So, as I keep saying, in this scenario the goal is no mediation in the
> > > hypervisor.
> >
> > That's pretty fine, but I don't see how trapping + relaying is not
> > mediation. Does it really matter what happens after trapping?
>
> It is not mediation in the sense that the kernel driver does not in
> any way make decisions on the behavior of the device. It simply
> transforms an IO operation into a device command and relays it to the
> device. The device still fully controls its own behavior.
>
> VDPA is very different from this. You might call them both mediation,
> sure, but then you need another word to describe the additional
> changes VDPA is doing.
>
> > > It is pointless, everything you think you need to do there
> > > is actually already being done in the DPU.
> >
> > Well, migration or even Qemu could be offloaded to the DPU as well. If
> > that's the direction, that's pretty fine.
>
> That's silly, of course qemu/kvm can't run in the DPU.

KVM can't for sure, but part of Qemu could. This model has been used.

> However, we can empty qemu and the hypervisor out so all it does is
> run kvm and run vfio. In this model the DPU does all the OVS, storage,
> "VDPA", etc. qemu is just a passive relay of the DPU PCI functions
> into VM's vPCI functions.
>
> So, everything VDPA was doing in the environment is migrated into the
> DPU.

It really depends on the use case. For example, in the case of a DPU,
what if we want to provide multiple virtio devices through a single VF?

> In this model the DPU is an extension of the hypervisor/qemu
> environment and we shift code from the x86 side to the ARM side to
> increase security, save power and increase total system performance.

That's pretty fine.

Thanks

>
> Jason
>
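
To make the trap + relay distinction above concrete, a minimal C
sketch follows. It is not taken from any real driver: the struct
layout, the opcode value, and dev_submit_admin_cmd() are all
hypothetical. The point it illustrates is that the hypervisor-side
handler wraps the guest's raw register write in a command and forwards
it, making no decision of its own.

#include <stdint.h>
#include <string.h>

/* Hypothetical wire format for a relayed guest access. */
struct relay_cmd {
	uint16_t opcode;	/* hypothetical: 0x1 = guest config write */
	uint16_t offset;	/* register offset the guest touched */
	uint32_t len;		/* number of valid bytes in data[] */
	uint8_t  data[8];	/* raw guest bytes, passed through untouched */
};

/* Stub transport for the sketch; a real driver would post this on a
 * device-owned admin queue. */
static int dev_submit_admin_cmd(const struct relay_cmd *cmd)
{
	(void)cmd;
	return 0;
}

/*
 * Trap + relay: the handler does not interpret the guest's write, it
 * only packages the raw bytes and forwards them. The device alone
 * decides what the write means; a vDPA-style mediator would instead
 * translate and make behavioral decisions at this point.
 */
static int relay_cfg_write(uint16_t offset, const void *buf, uint32_t len)
{
	struct relay_cmd cmd = {
		.opcode = 0x1,
		.offset = offset,
		.len = len,
	};

	if (len > sizeof(cmd.data))
		return -1;
	memcpy(cmd.data, buf, len);
	return dev_submit_admin_cmd(&cmd);
}

int main(void)
{
	uint8_t status = 0x1;	/* hypothetical status-register write */
	return relay_cfg_write(0x12, &status, sizeof(status));
}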
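
The single-VF question can be sketched the same way: if one VF were to
host several virtio device instances, each relayed command would need
an explicit device index so the DPU side can demultiplex. Again, every
name and the layout below are invented for illustration, not drawn
from any spec or driver.

#include <stdint.h>
#include <stdio.h>

#define MAX_INSTANCES 4	/* virtio instances assumed behind one VF */

/* Hypothetical command carrying a device index for demultiplexing. */
struct mux_cmd {
	uint16_t dev_id;	/* which virtio instance behind this VF */
	uint16_t opcode;
	uint32_t len;
	uint8_t  data[8];
};

/* DPU-side demux: route the command to the addressed instance. */
static int dpu_dispatch(const struct mux_cmd *cmd)
{
	if (cmd->dev_id >= MAX_INSTANCES)
		return -1;	/* no such instance on this VF */
	printf("instance %u: opcode 0x%x, %u bytes\n",
	       (unsigned)cmd->dev_id, (unsigned)cmd->opcode,
	       (unsigned)cmd->len);
	return 0;
}

int main(void)
{
	struct mux_cmd cmd = { .dev_id = 2, .opcode = 0x1, .len = 4 };
	return dpu_dispatch(&cmd);
}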