Re: [PATCH vfio 11/11] vfio/virtio: Introduce a vfio driver over virtio devices

On Mon, Sep 25, 2023 at 10:34:54AM +0800, Jason Wang wrote:

> > Cloud vendors will similarly use DPUs to create PCI functions that
> > meet the cloud vendor's internal specification.
> 
> This can only work if:
> 
> 1) the internal specification has a finer grain than the virtio spec
> 2) so it can define what is not implemented in the virtio spec (like
> migration and compatibility)

Yes, and that is what is happening. Realistically the "spec" is just a
piece of software that the Cloud vendor owns which is simply ported to
multiple DPU vendors.

It is the same as VDPA. If VDPA can make multiple NIC vendors
consistent, then why do you have a hard time believing we can do the
same thing just on the ARM side of a DPU?

> All of the above doesn't seem to be possible or realistic now, and it
> actually risks being incompatible with the virtio spec. In the
> future, when virtio supports live migration, people will want to be
> able to migrate between virtio and vDPA.

Well, that is for the spec to design. 

> > So, as I keep saying, in this scenario the goal is no mediation in the
> > hypervisor.
> 
> That's pretty fine, but I don't think trapping + relaying is not
> mediation. Does it really matter what happens after trapping?

It is not mediation in the sense that the kernel driver does not in
any way make decisions on the behavior of the device. It simply
transforms an IO operation into a device command and relays it to the
device. The device still fully controls its own behavior.
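
To make that distinction concrete, here is a minimal userspace sketch
of the transform-and-relay idea. Every name in it (admin_cmd,
dpu_submit_admin_cmd, the opcode value) is an illustrative stand-in,
not the actual patch's kernel API; the point is only that the trapped
write is copied into a device command verbatim and never interpreted
by the driver:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ADMIN_CMD_LEGACY_COMMON_CFG_WRITE 0x2	/* illustrative opcode */

struct admin_cmd {
	uint16_t opcode;
	uint8_t  offset;	/* offset within the legacy config space */
	uint8_t  size;		/* 1, 2 or 4 bytes */
	uint8_t  data[4];	/* payload copied verbatim from the trap */
};

/* Stand-in for whatever transport carries commands to the device. */
static int dpu_submit_admin_cmd(const struct admin_cmd *cmd)
{
	printf("relay: opcode=%#x offset=%u size=%u\n",
	       (unsigned)cmd->opcode, (unsigned)cmd->offset,
	       (unsigned)cmd->size);
	return 0;	/* the device, not the driver, decides what happens */
}

/*
 * Called when a write to the legacy config space is trapped. Note the
 * driver never looks at the payload: it is a pure transform-and-relay.
 */
static int trap_legacy_cfg_write(uint8_t offset, uint8_t size,
				 const void *buf)
{
	struct admin_cmd cmd = {
		.opcode = ADMIN_CMD_LEGACY_COMMON_CFG_WRITE,
		.offset = offset,
		.size   = size,
	};

	if (size > sizeof(cmd.data))
		return -1;
	memcpy(cmd.data, buf, size);
	return dpu_submit_admin_cmd(&cmd);
}

int main(void)
{
	/* e.g. the guest driver setting the 1-byte device status field */
	uint8_t status = 0x1;
	return trap_legacy_cfg_write(18, sizeof(status), &status);
}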

VDPA is very different from this. You might call them both mediation,
sure, but then you need another word to describe the additional
changes VDPA is doing.

> > It is pointless; everything you think you need to do there
> > is actually already being done in the DPU.
> 
> Well, migration or even Qemu could be offloaded to the DPU as well.
> If that's the direction, that's pretty fine.

That's silly; of course qemu/kvm can't run in the DPU.

However, we can empty qemu and the hypervisor out so all they do is
run kvm and run vfio. In this model the DPU does all the OVS, storage,
"VDPA", etc. qemu is just a passive relay of the DPU PCI functions
into the VM's vPCI functions.

So, everything VDPA was doing in the environment is migrated into the
DPU.
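
As a rough sketch of how thin the x86 side becomes under this model,
launching the guest reduces to handing the DPU-exposed PCI function
straight to the VM with standard vfio-pci passthrough; the PCI address
below is made up:

  qemu-system-x86_64 -enable-kvm -machine q35 \
      -device vfio-pci,host=0000:af:00.2

Everything interesting behind that function (OVS, storage, the
"VDPA"-like pieces, migration state) lives in the DPU, not in qemu.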

In this model the DPU is an extension of the hypervisor/qemu
environment, and we shift code from the x86 side to the ARM side to
increase security, save power, and increase total system performance.

Jason


