KVM/ARM Call Minutes - Jan 14th, 2014

Present:
--------
 - Christoffer
 - Maria
 - Antonios
 - Stuart Yoder
 - Way (Samsung)
 - Victor Kamensky
 - Andrew Jones
 - Marc
 - Janner from Freescale
 - Peter

Minutes:
--------
 - We should have some shared document for minutes in the future.
 - PSCI 0.2 support in KVM
    * Add the PSCI 0.2 function IDs to KVM, including the SYSTEM_OFF
      and SYSTEM_RESET operations introduced in 0.2.
    * Anup will add the PSCI 0.2 support and a corresponding capability.
    * QEMU and kvmtool need to adjust the guest device tree to add
      PSCI 0.2 nodes on supported systems.
    * For this to work we need the guest bindings in place.
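   A sketch of what such a guest node could look like, following the
   upstream "arm,psci-0.2" device tree binding (the method value is an
   assumption; a KVM guest would typically use "hvc"):

   ```dts
   psci {
           /* PSCI 0.2 binding: function IDs are fixed by the spec,
            * so no per-function ID properties are needed. */
           compatible = "arm,psci-0.2";
           /* "hvc" for a hypervisor-backed guest; firmware uses "smc". */
           method = "hvc";
   };
   ```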
 - kvm-unit-tests status and discussion for future directions.
    * Drew will post a v3 soon that supports DT directly in our 'test
      guests' instead of the host-tools approach.
    * We need to move forward and apply Christoffer's work.
    * virtio-testdev needs to be reviewed and merged on the QEMU side;
      no reviews yet.
    * Next on Drew's side is aarch64 support.
 - Cache coherency on guest startup (guest runs with MMU disabled).
   This is a common problem that has also been observed on X-Gene and
   with Xen.  It is prevalent with kvm-unit-tests because of the small
   payload.  Suggested solutions include user space flushing, enabling
   the DC bit, flushing at fault time, ...  We need to come up with a
   plan here.

   Possible solutions (Marc, please improve this):
    * The DC enable bit makes all guest accesses cacheable and forces
      the Stage-1 MMU off, but if you have directly assigned devices
      and a bootloader that does uncached, incoherent DMA accesses,
      this will not work.
    * At page fault time, if the guest is doing non-cacheable accesses,
      do a cache clean before mapping in pages.  But this breaks code
      patching done while turning the MMU on in the guest.  This can be
      solved by setting the TVM bit (trapping writes to the VM control
      registers) while the MMU is off, and invalidating when the MMU is
      enabled.  This still breaks when the guest disables the MMU, due
      to potential speculative loads.  The final alternative is a
      paravirtualized MMU-off operation.
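      The fault-time flushing idea can be sketched as a small decision
      helper; this is a minimal, self-contained illustration with
      made-up names, not the actual KVM code:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative sketch only: struct and function names are
       * hypothetical, not taken from the KVM source. */
      struct vcpu_state {
          bool mmu_on;    /* guest SCTLR.M, as last trapped */
          bool dcache_on; /* guest SCTLR.C */
      };

      /* If the guest runs with the MMU or data cache off, its uncached
       * reads bypass the cache hierarchy, so any page the host wrote
       * through the cache must be cleaned before being mapped at
       * Stage-2. */
      static bool need_clean_on_fault(const struct vcpu_state *v)
      {
          return !(v->mmu_on && v->dcache_on);
      }

      int main(void)
      {
          struct vcpu_state early_boot = { .mmu_on = false, .dcache_on = false };
          struct vcpu_state running    = { .mmu_on = true,  .dcache_on = true  };

          assert(need_clean_on_fault(&early_boot));
          assert(!need_clean_on_fault(&running));
          printf("early boot: clean=%d\n", need_clean_on_fault(&early_boot));
          printf("running: clean=%d\n", need_clean_on_fault(&running));
          return 0;
      }
      ```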
    * Cleaning caches from user space only deals with the initial
      payload.  It is easy to do on v8, but not necessarily on v7;
      this needs to be investigated.  It may remove the need to flush
      on non-cacheable accesses on a per-page basis.
    * Someone needs to look into whether cache maintenance done from
      user space is 'coherent enough' on v7 and v8.
    * Marc is going to send out a patch series that addresses the
      initial payload issue soon.
    * Investigate whether the I-cache is coherent when restoring a VM
      with the MMU enabled/disabled, and verify it.

 - Dummy hypercall for kvm unit tests
 - Reading perf counters in kvm unit tests

 - Big Endian host support
    * Victor has posted v7 BE host support patches.
    * Victor is working on the v8 BE host support patches.
    * We discussed the vcpu_data_to_host and vice versa patch.  Marc and
      Christoffer to take another look at the code and patches.
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm



