Seeing a problem in multi-CPU runs where memory-mapped PCIe device register reads return incorrect values

Background
==========

I have a test environment that runs QEMU 4.2 with a plugin running two
copies of a PCIe device simulator, on a CentOS 7.5 host with an Ubuntu 18.04
guest. When running with a single QEMU CPU using:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
    -device intel-iommu,intremap=on

Our tests run fine. 

But when running with multiple CPUs using:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
    -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

Some MMIO reads of the simulated device's BAR 0 registers, issued by our
device driver running in the guest, return incorrect values.
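
For reference, the failing access on the guest side is an ordinary 32-bit
read of a BAR 0 register. I have not included our real driver here; the
fragment below is only a minimal sketch using the standard Linux PCI API,
with reg_offset standing in for one of our register offsets.

    /*
     * Minimal sketch only, not our actual driver: map BAR 0 of the
     * simulated device and do the kind of 32-bit register read that
     * comes back as 0 when QEMU is started with -smp 2.
     */
    #include <linux/pci.h>
    #include <linux/io.h>

    static u32 read_bar0_reg(struct pci_dev *pdev, unsigned long reg_offset)
    {
        void __iomem *bar0 = pci_ioremap_bar(pdev, 0);  /* map BAR 0 */
        u32 val;

        if (!bar0)
            return 0;

        val = ioread32(bar0 + reg_offset);  /* the MMIO read that misbehaves */
        pci_iounmap(pdev, bar0);
        return val;
    }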

Running QEMU under gdb, I can see that the read request reaches our simulated
device correctly and that the simulator returns the correct result. Using gdb
I have tracked the return value all the way back up the call stack: the
correct value arrives at the KVM_EXIT_MMIO handling in kvm_cpu_exec
(qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value returned to the device
driver that initiated the read is 0.
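
For context (quoting from memory, so the exact lines in 4.2 may differ
slightly), the KVM_EXIT_MMIO handling in kvm_cpu_exec looks roughly like
this, where run is cpu->kvm_run, the kvm_run structure mmap'ed from the
kernel:

    case KVM_EXIT_MMIO:
        /* Called outside BQL */
        address_space_rw(&address_space_memory,
                         run->mmio.phys_addr, attrs,
                         run->mmio.data,
                         run->mmio.len,
                         run->mmio.is_write);
        ret = 0;
        break;

For a read, address_space_rw() fills run->mmio.data with the value from our
simulator, and the kernel copies that buffer into the guest's destination
register when KVM_RUN is re-entered. As far as I can tell from gdb, the
correct value makes it that far, yet the driver still sees 0 in the -smp 2
case.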

Question
========

Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone seeing
incorrect reads from memory-mapped device registers when running in this
mode? I would appreciate any pointers on how best to debug the flow from
KVM_EXIT_MMIO back to the device driver running in the guest.
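
To make the question concrete: the crude instrumentation I could add just
after the address_space_rw() call in the KVM_EXIT_MMIO case would be
something along these lines (sketch only); pointers to a better approach,
e.g. existing trace events, would be very welcome:

    if (!run->mmio.is_write) {
        uint64_t val = 0;

        /* run->mmio.data is an 8-byte buffer; copy only the bytes
         * belonging to this access. */
        memcpy(&val, run->mmio.data, run->mmio.len);
        fprintf(stderr, "vcpu %d: mmio read phys 0x%llx len %u val 0x%llx\n",
                cpu->cpu_index,
                (unsigned long long)run->mmio.phys_addr,
                run->mmio.len,
                (unsigned long long)val);
    }

That would at least show, per vCPU, which value QEMU thinks it handed back
to KVM for each BAR 0 read, to compare against what the driver logs.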