Re: [PATCH v7 1/4] KVM: stats: Separate generic stats from architecture specific ones

On 11.06.21 12:51, Paolo Bonzini wrote:
> On 11/06/21 08:57, Christian Borntraeger wrote:
>>> @@ -755,12 +750,12 @@ struct kvm_vcpu_arch {
>>>   };
>>>   struct kvm_vm_stat {
>>> +    struct kvm_vm_stat_generic generic;
>>
>> s390 does not have remote_tlb_flush. I guess this does not hurt?
>
> It would have to be accounted in gmap_flush_tlb, but there is no struct kvm in there.  A slightly hackish possibility would be to include the gmap by value (instead of by pointer) in struct kvm, and then use container_of.

What are the semantics of remote_tlb_flush?
For the host:
We usually do not do remote TLB flushes in the "IPI with a flush executed on the remote CPU" sense.
We do have instructions that invalidate table entries and also take care of remote TLBs (IPTE, IDTE and CRDTE).
This is nice, but on the other hand an operating system MUST use these instructions if the page table might be in use by any CPU. If not, you can get a delayed access exception machine check when the hardware detects a TLB/page table inconsistency.
Only if you can guarantee that nobody has this page table attached can you use a normal store plus a TLB flush instruction.

For the guest (and I guess that's what we care about here?) TLB flushes are almost completely handled by the hardware. The only exception is the set prefix instruction, which is handled by KVM and flushes the CPU-local cache.
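
If we ever do want to account it, a rough sketch of your container_of idea could look like the following. This is completely untested and purely illustrative: it assumes the gmap were embedded by value in struct kvm_arch (today it is a pointer), it ignores shadow/ucontrol gmaps, and the stat field name is just taken from this series.

    struct kvm_arch {
            /* ... */
            struct gmap gmap;       /* embedded by value instead of struct gmap * */
            /* ... */
    };

    static void gmap_flush_tlb(struct gmap *gmap)
    {
            /* recover the VM from the embedded gmap */
            struct kvm *kvm = container_of(gmap, struct kvm, arch.gmap);

            kvm->stat.generic.remote_tlb_flush++;
            /* ... existing IDTE/CSP based flush ... */
    }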

> This reminds me that I have never asked you why the gmap code is not in arch/s390/kvm,

Because we share the last level of the page tables with userspace, the KVM address space is somewhat tied to the user address space.
This is partly because Martin wanted to keep control over this due to some oddities of our page tables, and partly because of the rule above: using an IPTE on such a page table entry takes care of the TLB entries for both the user and the guest mapping in an atomic fashion when the page table changes.


> and also that there is no code in QEMU that uses KVM_VM_S390_UCONTROL or KVM_S390_VCPU_FAULT.  It would be nice to have some testcases for that, and also for KVM_S390_VCPU_FAULT with regular virtual machines... or to remove the code if it's unused.

This is used by an internal firmware test tool that uses KVM to speed up simulation of hardware instructions.
Search for CECSIM to get an idea (the existing papers still talk about the same approach using z/VM).
I will check what we can do regarding regression tests.



