Hi,

On 2/4/25 12:46 PM, Eric Auger wrote:
> Hi,
>
> On 2/3/25 3:48 PM, Michael S. Tsirkin wrote:
>> On Fri, Jan 31, 2025 at 10:53:15AM +0100, Eric Auger wrote:
>>> Hi Kirill, Michael
>>>
>>> On 8/8/24 9:51 AM, Kirill A. Shutemov wrote:
>>>> Hongyu reported a hang on kexec in a VM. QEMU reported invalid memory
>>>> accesses during the hang.
>>>>
>>>> Invalid read at addr 0x102877002, size 2, region '(null)', reason: rejected
>>>> Invalid write at addr 0x102877A44, size 2, region '(null)', reason: rejected
>>>> ...
>>>>
>>>> It was traced down to virtio-console. Kexec works fine if virtio-console
>>>> is not in use.
>>>>
>>>> It looks like virtio-console continues to write to the MMIO even after the
>>>> underlying virtio-pci device is removed.
>>>>
>>>> The problem can be mitigated by removing all virtio devices on virtio
>>>> bus shutdown.
>>>>
>>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>>>> Reported-by: Hongyu Ning <hongyu.ning@xxxxxxxxxxxxxxx>
>>>
>>> Gentle ping on that patch, which seems to have fallen through the cracks.
>>>
>>> I think this fix is really needed. I have another test case with a
>>> rebooting guest exposed with virtio-net (backed by vhost-net) and a
>>> viommu. Since there is currently no shutdown for virtio-net, on
>>> reboot the IOMMU is disabled through native_machine_shutdown()/
>>> x86_platform.iommu_shutdown() while the virtio-net device is still alive.
>>>
>>> Normally device_shutdown() should call the virtio-net shutdown before the
>>> IOMMU teardown, and we wouldn't see any spurious transactions after
>>> the IOMMU shutdown.
>>>
>>> With that fix, the above test case is fixed and I do not see spurious
>>> vhost IOTLB miss requests.
>>>
>>> For more details, see the qemu thread ([PATCH] hw/virtio/vhost: Disable
>>> IOTLB callbacks when IOMMU gets disabled,
>>> https://lore.kernel.org/all/20250120173339.865681-1-eric.auger@xxxxxxxxxx/)
>>>
>>> Reviewed-by: Eric Auger <eric.auger@xxxxxxxxxx>
>>> Tested-by: Eric Auger <eric.auger@xxxxxxxxxx>
>>>
>>> Thanks
>>>
>>> Eric
>>>
>>>> ---
>>>>  drivers/virtio/virtio.c | 10 ++++++++++
>>>>  1 file changed, 10 insertions(+)
>>>>
>>>> diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
>>>> index a9b93e99c23a..6c2f908eb22c 100644
>>>> --- a/drivers/virtio/virtio.c
>>>> +++ b/drivers/virtio/virtio.c
>>>> @@ -356,6 +356,15 @@ static void virtio_dev_remove(struct device *_d)
>>>>  	of_node_put(dev->dev.of_node);
>>>>  }
>>>>
>>>> +static void virtio_dev_shutdown(struct device *_d)
>>>> +{
>>>> +	struct virtio_device *dev = dev_to_virtio(_d);
>>>> +	struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
>>>> +
>>>> +	if (drv && drv->remove)
>>>> +		drv->remove(dev);
>>
>>
>> I am concerned that a full remove is a heavyweight operation.
>> I do not want to slow down reboots even more.
>> How about just doing a reset instead?
>
> I tested with
>
> static void virtio_dev_shutdown(struct device *_d)
> {
> 	struct virtio_device *dev = dev_to_virtio(_d);
>
> 	virtio_reset_device(dev);
> }
>
> and it fixes my issue.
>
> Kirill, would that fix your issue too?

Gentle ping. This also fixes another issue with qemu vSMMU + virtio-scsi-pci.
With the above addition I get rid of these spurious warnings in qemu on guest
reboot:

qemu-system-aarch64: virtio: zero sized buffers are not allowed
qemu-system-aarch64: vhost vring error in virtqueue 0: Invalid argument (22)

Would you mind if I respin?
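
For reference, the respin would essentially wire the reset-based shutdown
above into the virtio bus, along the lines of the sketch below (based only on
the snippets in this thread, not the final patch; the struct initializer is
abbreviated):

static void virtio_dev_shutdown(struct device *_d)
{
	struct virtio_device *dev = dev_to_virtio(_d);

	/*
	 * Reset the device so it stops touching guest memory and MMIO
	 * before the rest of the platform (e.g. the IOMMU) is torn down.
	 */
	virtio_reset_device(dev);
}

static const struct bus_type virtio_bus = {
	.name		= "virtio",
	/* existing .match/.uevent/.probe/.remove callbacks unchanged */
	.shutdown	= virtio_dev_shutdown,	/* new */
};

Compared to calling drv->remove() from .shutdown, the reset only quiesces the
device, which should also address Michael's concern about making reboots
slower.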

Thanks

Eric

>
> Thanks
>
> Eric
>>
>>>> +}
>>>> +
>>>>  static const struct bus_type virtio_bus = {
>>>>  	.name  = "virtio",
>>>>  	.match = virtio_dev_match,
>>>> @@ -363,6 +372,7 @@ static const struct bus_type virtio_bus = {
>>>>  	.uevent = virtio_uevent,
>>>>  	.probe = virtio_dev_probe,
>>>>  	.remove = virtio_dev_remove,
>>>> +	.shutdown = virtio_dev_shutdown,
>>>>  };
>>>>
>>>>  int __register_virtio_driver(struct virtio_driver *driver, struct module *owner)
>>
>