On 4/27/2023 8:38 PM, zhuangel570 wrote:
Hi
We found some latency issues in high-density, high-concurrency scenarios. We
are using Cloud Hypervisor as the VMM for lightweight VMs, with VIRTIO net and
block devices for the VM. In our tests, we saw about 50ms to 100ms+ of latency
in VM creation and irqfd registration. After tracing with funclatency (a tool
from bcc-tools, https://github.com/iovisor/bcc), we found the latency is
introduced by the following functions:
- irq_bypass_register_consumer introduces more than 60ms per VM.
This function is called when registering an irqfd; it registers the irqfd as
a consumer with irqbypass and waits for connections from irqbypass producers
such as VFIO or VDPA. In our test, each irqfd registration incurs about 4ms
of latency, so 5 devices with 16 irqfds in total introduce more than 60ms of
latency (see the sketch after this list).
- kvm_vm_create_worker_thread introduces tail latency of more than 100ms.
This function is called to create the "kvm-nx-lpage-recovery" kthread when a
new VM is created. That kthread was introduced to recover large pages and so
relieve the performance loss caused by the software mitigation of
ITLB_MULTIHIT; see b8e8c8303ff2 ("kvm: mmu: ITLB_MULTIHIT mitigation") and
1aa9b9572b10 ("kvm: x86: mmu: Recovery of shattered NX large pages").
Yes, this kthread is for the NX-HugePage feature, and NX-HugePage in turn is
a SW mitigation for the itlb-multihit issue.
However, HW-level mitigation has been available for quite a while; you can
check "/sys/devices/system/cpu/vulnerabilities/itlb_multihit" for your
system's mitigation status.
I believe most recent Intel CPUs have this mitigated in HW (check
MSR_ARCH_CAPABILITIES::IF_PSCHANGE_MC_NO), let alone non-Intel CPUs; a sketch
of both checks follows.
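A minimal sketch, assuming root privileges and the msr module loaded for
/dev/cpu/0/msr; MSR_IA32_ARCH_CAPABILITIES is MSR 0x10a and IF_PSCHANGE_MC_NO
is bit 6:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[128] = "";
        uint64_t caps = 0;

        /* Kernel's view of the mitigation, e.g. "Not affected". */
        int fd = open("/sys/devices/system/cpu/vulnerabilities/itlb_multihit",
                      O_RDONLY);
        if (fd >= 0 && read(fd, buf, sizeof(buf) - 1) > 0)
                printf("sysfs: %s", buf);
        if (fd >= 0)
                close(fd);

        /* Raw HW capability bit: IF_PSCHANGE_MC_NO (bit 6 of MSR 0x10a). */
        fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd >= 0 && pread(fd, &caps, sizeof(caps), 0x10a) == sizeof(caps))
                printf("IF_PSCHANGE_MC_NO: %d\n", (int)((caps >> 6) & 1));
        if (fd >= 0)
                close(fd);
        return 0;
}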
But the kthread is still created by kvm_vm_create_worker_thread anyway, which
I think makes no sense. I previously had an internal patch getting rid of it
but didn't get a chance to send it out.
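For illustration only (not that patch), a gate like the following in
kvm_mmu_post_init_vm would do; the function and field names here follow
roughly v6.3 of arch/x86/kvm/mmu/mmu.c and may differ in other versions:

/* Sketch: skip creating "kvm-nx-lpage-recovery" when the CPU is not
 * affected by ITLB_MULTIHIT (HW-mitigated or not vulnerable at all). */
int kvm_mmu_post_init_vm(struct kvm *kvm)
{
        int err;

        /* No SW mitigation needed, so no recovery thread needed either. */
        if (!boot_cpu_has_bug(X86_BUG_ITLB_MULTIHIT))
                return 0;

        err = kvm_vm_create_worker_thread(kvm,
                        kvm_nx_huge_page_recovery_worker, 0,
                        "kvm-nx-lpage-recovery",
                        &kvm->arch.nx_huge_page_recovery_thread);
        if (!err)
                kthread_unpark(kvm->arch.nx_huge_page_recovery_thread);

        return err;
}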
As more and more old CPUs retire, I think the NX-HugePage code will become
more and more of a minority code path, and eventually be refactored out one
day.