On 26/11/2024 00:06, Sean Christopherson wrote:
On Mon, Nov 25, 2024, Nikita Kalyazin wrote:
On 21/11/2024 21:05, Sean Christopherson wrote:
On Thu, Nov 21, 2024, Nikita Kalyazin wrote:
On 19/11/2024 13:24, Sean Christopherson wrote:
None of this justifies breaking host-side, non-paravirt async page faults. If a
vCPU hits a missing page, KVM can schedule out the vCPU and let something else
run on the pCPU, or enter idle and let the SMT sibling get more cycles, or maybe
even enter a low enough sleep state to let other cores turbo a wee bit.
I have no objection to disabling host async page faults, e.g. it's probably a net
negative for 1:1 vCPU:pCPU pinned setups, but such disabling needs an opt-in from
userspace.
That's a good point; I hadn't thought about it. The async work would still
need to execute somewhere in that case (or sleep in GUP until the page is
available).
The "async work" is often an I/O operation, e.g. to pull in the page from disk,
or over the network from the source. The *CPU* doesn't need to actively do
anything for those operations. The I/O is initiated, so the CPU can do something
else, or go idle if there's no other work to be done.
If processing the fault synchronously, the vCPU thread can also sleep in the
same way, freeing the pCPU for something else,
If and only if the vCPU can handle a PV async #PF. E.g. if the guest kernel flat
out doesn't support PV async #PF, or the fault happened while the guest was in an
incompatible mode, etc.
If KVM doesn't do async #PFs of any kind, the vCPU will spin on the fault until
the I/O completes and the page is ready.
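A simplified sketch of the delivery condition described above (my own
paraphrase; the struct and checks below are illustrative and are not the
kernel's kvm_can_deliver_async_pf() verbatim):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative vCPU state only; not the kernel's actual layout. */
struct vcpu_state {
	bool pv_async_pf_enabled;	/* guest set KVM_ASYNC_PF_ENABLED via MSR_KVM_ASYNC_PF_EN */
	bool user_mode;			/* CPL == 3 */
	bool interrupts_enabled;	/* RFLAGS.IF */
};

/*
 * KVM only injects the PV "page not present" event if the guest opted in
 * and can take it right now; with KVM_ASYNC_PF_SEND_ALWAYS gone, that
 * roughly means user mode with interrupts enabled.  Otherwise the fault
 * has to be handled without PV assistance.
 */
static bool can_deliver_pv_async_pf(const struct vcpu_state *v)
{
	return v->pv_async_pf_enabled && v->user_mode && v->interrupts_enabled;
}

int main(void)
{
	struct vcpu_state in_kernel = { true, false, true };

	/* Prints 0: no PV event, the fault is handled without it. */
	printf("%d\n", can_deliver_pv_async_pf(&in_kernel));
	return 0;
}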
I ran a little experiment to check this by backing guest memory with a
file on FUSE and delaying the response to one of the read operations to
emulate a delay in fault processing.
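For reference, a minimal sketch of the kind of FUSE backend this
describes (my reconstruction, not the actual test code; the names, sizes
and the 5-second delay are assumptions).  Built against libfuse 3,
mounted with e.g. "./delayfs -f /mnt", with guest memory mapped from
/mnt/mem:

#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define MEM_PATH "/mem"			/* file that backs the guest memory */
#define MEM_SIZE (64UL << 20)		/* 64 MiB of zero-filled "memory" */
#define SLOW_OFF (1UL << 20)		/* reads at this offset are delayed */

static int delayfs_getattr(const char *path, struct stat *st,
			   struct fuse_file_info *fi)
{
	(void)fi;
	memset(st, 0, sizeof(*st));
	if (!strcmp(path, "/")) {
		st->st_mode = S_IFDIR | 0755;
		st->st_nlink = 2;
		return 0;
	}
	if (!strcmp(path, MEM_PATH)) {
		st->st_mode = S_IFREG | 0600;
		st->st_nlink = 1;
		st->st_size = MEM_SIZE;
		return 0;
	}
	return -ENOENT;
}

static int delayfs_open(const char *path, struct fuse_file_info *fi)
{
	(void)fi;
	return strcmp(path, MEM_PATH) ? -ENOENT : 0;
}

static int delayfs_read(const char *path, char *buf, size_t size, off_t off,
			struct fuse_file_info *fi)
{
	(void)path; (void)fi;
	if (off == SLOW_OFF)
		sleep(5);	/* emulate a slow fault, e.g. a remote fetch */
	memset(buf, 0, size);	/* contents don't matter for the experiment */
	return size;
}

static const struct fuse_operations delayfs_ops = {
	.getattr = delayfs_getattr,
	.open	 = delayfs_open,
	.read	 = delayfs_read,
};

int main(int argc, char *argv[])
{
	return fuse_main(argc, argv, &delayfs_ops, NULL);
}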
...
In both cases the fault handling code is blocked and the pCPU is free for
other tasks. I can't see the vCPU spinning while waiting for the I/O to
complete if the async task isn't created. I tried that with and without async PF
enabled by the guest (MSR_KVM_ASYNC_PF_EN).
What am I missing?
Ah, I was wrong about the vCPU spinning.
The goal is specifically to schedule() from KVM context, i.e. from kvm_vcpu_block(),
so that if a virtual interrupt arrives for the guest, KVM can wake the vCPU and
deliver the IRQ, e.g. to reduce latency for interrupt delivery, and possibly even
to let the guest schedule in a different task if the IRQ is the guest's tick.
Letting mm/ or fs/ do schedule() means the only wake event for the vCPU task
is the completion of the I/O (or whatever the fault is waiting on).
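A userspace analogy of that distinction (my illustration, not KVM code):
a thread blocked in a plain read() can only be woken by the I/O itself,
while a thread waiting in poll() on both the I/O fd and an "interrupt"
eventfd can be woken by either, which mirrors what blocking in
kvm_vcpu_block() buys over blocking deep in mm/ or fs/:

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

/*
 * Wait for either the slow "fault" I/O or a "virtual interrupt", the way
 * kvm_vcpu_block() can be woken by more than just the I/O completing.
 */
static void wait_like_kvm_vcpu_block(int io_fd, int irq_fd)
{
	struct pollfd fds[] = {
		{ .fd = io_fd,  .events = POLLIN },	/* I/O completion */
		{ .fd = irq_fd, .events = POLLIN },	/* "virtual interrupt" */
	};
	uint64_t val;

	if (poll(fds, 2, -1) < 0)
		return;

	if ((fds[1].revents & POLLIN) &&
	    read(irq_fd, &val, sizeof(val)) == sizeof(val))
		printf("woken by the interrupt, not the I/O\n");
}

int main(void)
{
	int io_pipe[2];
	int irq_fd = eventfd(0, 0);
	uint64_t one = 1;

	if (pipe(io_pipe) < 0 || irq_fd < 0)
		return 1;

	/* No data ever arrives on io_pipe: the "I/O" never completes... */
	write(irq_fd, &one, sizeof(one));	/* ...but an interrupt does. */

	wait_like_kvm_vcpu_block(io_pipe[0], irq_fd);
	return 0;
}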
Ok, great, then that's how I understood it last time. The only thing
that is not entirely clear to me: as Vitaly says [1],
KVM_ASYNC_PF_SEND_ALWAYS is no longer set because we don't want to
inject IRQs into the guest while it's in kernel mode, yet the "host async
PF" case would still allow IRQs (e.g. ticks, like you said). Why is it
safe to deliver them?
I have no objection to disabling host async page faults, e.g. it's
probably a net negative for 1:1 vCPU:pCPU pinned setups, but such
disabling needs an opt-in from userspace.
Back to this, I couldn't see a significant effect of this optimisation
with the original async PF, so I'm happy to give it up, but it does make a
difference when applied to async PF user [2] in my setup. Would a new
cap be a good way for users to express their opt-in for it?
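For illustration, the userspace side of such a cap would presumably
follow the usual KVM_CHECK_EXTENSION / KVM_ENABLE_CAP pattern; the cap
name and number below are hypothetical, invented for this sketch, and
error handling is elided:

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Hypothetical capability number, for illustration only. */
#define KVM_CAP_DISABLE_HOST_ASYNC_PF 999

static int disable_host_async_pf(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_DISABLE_HOST_ASYNC_PF,
	};

	/* Probe first so that older kernels fail cleanly. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_DISABLE_HOST_ASYNC_PF) <= 0) {
		fprintf(stderr, "host async PF opt-out not supported\n");
		return -1;
	}
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

	return disable_host_async_pf(vm_fd);
}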
[1]:
https://lore.kernel.org/kvm/20241118130403.23184-1-kalyazin@xxxxxxxxxx/T/#ma719a9cb3e036e24ea8512abf9a625ddeaccfc96
[2]:
https://lore.kernel.org/kvm/20241118123948.4796-1-kalyazin@xxxxxxxxxx/T/