On 2020/7/16 12:13 PM, Zhu, Lingshan wrote:
On 7/16/2020 12:02 PM, Jason Wang wrote:
On 2020/7/16 11:59 AM, Zhu, Lingshan wrote:
On 7/16/2020 10:59 AM, Jason Wang wrote:
On 2020/7/16 9:39 AM, Zhu, Lingshan wrote:
On 7/15/2020 9:43 PM, Jason Wang wrote:
On 2020/7/12 10:52 PM, Zhu Lingshan wrote:
Hi All,
This series intends to implement IRQ offloading for
vhost_vdpa.
With the help of IRQ forwarding facilities such as posted
interrupts on x86, IRQ bypass can deliver interrupts to
the vCPU directly.
vDPA devices have dedicated hardware backends, much like
VFIO pass-through devices, so it is possible to set up
IRQ offloading (IRQ bypass) for vDPA devices and gain a
performance improvement.
In my testing, this feature saves 0.1 ms on average in a
ping between two VFs.
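For context, a minimal sketch of what the producer side could
look like; the helper name and its wiring are assumptions for
illustration, not the exact code in this series:

#include <linux/irqbypass.h>
#include <linux/eventfd.h>

/*
 * Sketch only: register the vq's call eventfd as an irq bypass
 * producer.  KVM registers a consumer with the same token (the
 * eventfd ctx) when that eventfd is installed as an irqfd, so the
 * hardware interrupt can be delivered to the vCPU directly (e.g.
 * via posted interrupts on x86).
 */
static void vq_setup_irq_bypass(struct irq_bypass_producer *producer,
				struct eventfd_ctx *call_ctx, int irq)
{
	producer->token = call_ctx;	/* must match KVM irqfd's token */
	producer->irq = irq;		/* hardware irq backing this vq */
	irq_bypass_register_producer(producer);
}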
Hi Lingshan:
During the virtio-networking meeting, Michael spotted two
possible issues:
1) do we need a new uAPI to stop the IRQ offloading?
2) can an interrupt be lost during the eventfd ctx switch?
For 1) I think probably not: we can allocate an independent
eventfd which does not map to MSI-X. Then the consumer can't
match the producer, and we fall back to eventfd-based IRQ
delivery.
Hi Jason,
I wonder why we need to stop IRQ offloading, but if we do
need to, a new uAPI would be more intuitive to me.
But why, and who (the user? QEMU?) should initiate this
process, and on what basis would the decision be made?
The reason is that we may want to fall back to the software
datapath for some reason (e.g. software-assisted live
migration). In that case we need to intercept device writes
to the used ring, so we cannot offload the virtqueue
interrupt.
So add a VHOST_VDPA_STOP_IRQ_OFFLOADING? Then do we also
need a VHOST_VDPA_START_IRQ_OFFLOADING, and let userspace
fully control this? Or is there a better approach?
Probably not; it's as simple as allocating another eventfd
(not an irqfd) and passing it to vhost-vdpa. Then the
offloading is disabled since the producer has no matching
consumer.
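Roughly something like this on the QEMU side, as a sketch only
(the helper name and error handling are just illustrative):

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/*
 * Sketch only: fall back to eventfd-based irq for one virtqueue by
 * installing a fresh eventfd that is NOT registered with KVM as an
 * irqfd.  With no matching irq bypass consumer, vhost-vdpa has to
 * signal the eventfd from software instead of offloading the irq.
 */
static int vq_disable_irq_offload(int vhost_vdpa_fd, unsigned int vq_index)
{
	struct vhost_vring_file file;
	int fd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

	if (fd < 0)
		return -1;

	file.index = vq_index;
	file.fd = fd;
	return ioctl(vhost_vdpa_fd, VHOST_SET_VRING_CALL, &file);
}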
OK, that sounds like QEMU work, so nothing to take care of in this series, right?
That's my understanding.
Thanks
Thanks,
BR
Zhu Lingshan
Thanks