Hello,

Attaching PCI devices to the virt machine model works fine with TCG, but fails once KVM is enabled. For instance, with this command line:

./qemu-system-arm -m 512 -machine type=virt \
    -enable-kvm -cpu host \
    -nographic \
    -kernel zImage \
    -drive if=none,file=ubuntu.img,id=fs,format=raw \
    -device virtio-blk-device,drive=fs \
    -netdev type=user,id=net0 -device e1000,netdev=net0 \
    -drive if=scsi,file=disk.img,format=raw \
    -device lsi53c895a \
    -usb -device usb-ehci,id=ehci \
    -device usb-tablet,bus=ehci.0 \
    -device usb-host,hostbus=3,hostport=1,bus=ehci.0 \
    -append "console=ttyAMA0 root=/dev/vda rw"

the e1000, once up, fails when issuing a ping command, with the following kernel messages:

[...]
e1000 0000:00:02.0 eth0: Detected Tx Unit Hang
  Tx Queue             <0>
  TDH                  <3>
  TDT                  <3>
  next_to_use          <3>
  next_to_clean        <2>
buffer_info[next_to_clean]
  time_stamp           <ffff94db>
  next_to_watch        <2>
  jiffies              <ffff9568>
  next_to_watch.status <0>
[...]

The guest kernel driver of the lsi device fails to enable it correctly, with a cache error:

[...]
sym53c8xx 0000:00:01.0: enabling device (0100 -> 0103)
sym0: <895a> rev 0x0 at pci 0000:00:01.0 irq 54
sym0: No NVRAM, ID 7, Fast-40, LVD, parity checking
CACHE TEST FAILED: chip wrote 2, host read 1.
sym0: CACHE INCORRECTLY CONFIGURED.
sym0: giving up ...
[...]

And finally, the USB controller fails to assign addresses to the devices (emulated or host pass-through):

[...]
usb 1-1: new high-speed USB device number 2 using ehci-pci
usb 1-1: device not accepting address 2, error -110
usb 1-1: new high-speed USB device number 3 using ehci-pci
usb 1-1: device not accepting address 3, error -110
[...]

Tested with the latest QEMU version (v2.3.0-rc3) and Linux kernel v4.0 (for both guest and host), on a Samsung Chromebook, a Vexpress TC2 and an OMAP5-UEVM.

After investigation, it turns out that some memory writes performed by the QEMU emulated devices are not seen by the guest kernel: the cache and the system memory are inconsistent, and the cache is never flushed.
I managed to solve these issues by flushing the cache in the host kernel each time we re-enter the guest, but only when there was an MMIO write (see the patch below). This may not be an acceptable or efficient solution, though. Any suggestions or advice on how to tackle this problem?

diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 5d3bfc0..4c51099 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -20,6 +20,7 @@
 #include <asm/kvm_mmio.h>
 #include <asm/kvm_emulate.h>
 #include <trace/events/kvm.h>
+#include <asm/cacheflush.h>
 
 #include "trace.h"
 
@@ -116,6 +117,8 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 			       data);
 		data = vcpu_data_host_to_guest(vcpu, data, len);
 		*vcpu_reg(vcpu, vcpu->arch.mmio_decode.rt) = data;
+	} else {
+		flush_cache_all();
 	}
 
 	return 0;

Regards,

Jérémy
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
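One less heavy-handed idea than flush_cache_all() on every MMIO write exit would be to clean and invalidate only the guest memory the device model actually wrote into. KVM does not currently know which pages those are, so the sketch below is purely illustrative: the function name and the idea of being told the written range are invented, and only kvm_flush_dcache_to_poc() (which on 32-bit ARM maps to __cpuc_flush_dcache_area(), a clean+invalidate to the point of coherency) is an existing helper:

```c
/* Sketch only, not a tested patch. Assumes KVM could somehow learn which
 * host-virtual range the emulated device DMA'd into (it cannot today);
 * kvm_flush_emulated_dma() and its callers are hypothetical. */
#include <asm/cacheflush.h>
#include <asm/kvm_mmu.h>

static void kvm_flush_emulated_dma(void *hva, size_t len)
{
	/* Clean+invalidate just the touched range to the point of
	 * coherency, instead of flushing all caches on every exit. */
	kvm_flush_dcache_to_poc(hva, len);
}
```

Whether the bookkeeping needed to track such ranges would cost less than the blanket flush is an open question.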