On 22/09/2023 16:44, David Woodhouse wrote:
> On Fri, 2023-09-22 at 15:00 +0000, Paul Durrant wrote:
>> From: Paul Durrant <pdurrant@xxxxxxxxxx>
>>
>> If the guest does not explicitly set the GPA of the vcpu_info structure
>> in memory then, for guests with 32 vCPUs or fewer, the vcpu_info
>> embedded in the shared_info page may be used. As described in a
>> previous commit, the shared_info page is an overlay at a fixed HVA
>> within the VMM, so in this case it is also better to activate the
>> vcpu_info cache with a fixed HVA, avoiding unnecessary invalidation if
>> the guest memory layout is modified.
>>
>> Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>
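
(For concreteness, the behaviour the commit message describes amounts to
roughly the sketch below. It is only an illustration: kvm_xen_set_vcpu_info()
is a hypothetical helper, kvm_gpc_activate_hva() is assumed to be the
fixed-HVA activation helper proposed earlier in this series, the other field
names are taken from the existing KVM Xen code, and the 32-bit compat
shared_info layout is ignored.)

static int kvm_xen_set_vcpu_info(struct kvm_vcpu *vcpu, u64 gpa)
{
	struct kvm *kvm = vcpu->kvm;
	struct gfn_to_pfn_cache *gpc = &vcpu->arch.xen.vcpu_info_cache;
	unsigned long hva;

	/* An explicitly supplied GPA always wins. */
	if (gpa != KVM_XEN_INVALID_GPA)
		return kvm_gpc_activate(gpc, gpa, sizeof(struct vcpu_info));

	/*
	 * No explicit GPA: fall back to the vcpu_info embedded in the
	 * shared_info page. That only exists for the first 32 vCPUs,
	 * and only once shared_info itself has been set up.
	 */
	if (vcpu->arch.xen.vcpu_id >= 32 || !kvm->arch.xen.shinfo_cache.active)
		return -EINVAL;

	/*
	 * shared_info is an overlay at a fixed HVA within the VMM, so
	 * track the embedded vcpu_info by HVA as well; the cache then
	 * needs no invalidation when the guest memory layout changes.
	 */
	hva = kvm->arch.xen.shinfo_cache.uhva +
	      offsetof(struct shared_info,
		       vcpu_info[vcpu->arch.xen.vcpu_id]);

	/* Assumed helper from earlier in this series. */
	return kvm_gpc_activate_hva(gpc, hva, sizeof(struct vcpu_info));
}
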
> But it should *only* be defined as an HVA in the case where it's the
> one in the shinfo. Otherwise, it's defined by its GPA.
>
> Which almost makes me want to see a sanity check that it precisely
> equals &shinfo->vcpu_info[vcpu->arch.xen.vcpu_id].
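
(For illustration, that check might look roughly like the following;
kvm_xen_vcpu_info_hva_is_default() is a hypothetical helper and the field
names are assumed from the existing KVM Xen code.)

/*
 * Sanity check: a vcpu_info supplied by HVA must be exactly the one
 * embedded in the shared_info overlay for this vCPU.
 */
static bool kvm_xen_vcpu_info_hva_is_default(struct kvm_vcpu *vcpu,
					     unsigned long hva)
{
	struct gfn_to_pfn_cache *shinfo = &vcpu->kvm->arch.xen.shinfo_cache;

	return shinfo->active &&
	       hva == shinfo->uhva +
		      offsetof(struct shared_info,
			       vcpu_info[vcpu->arch.xen.vcpu_id]);
}
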
> Which brings me back around the circle again to wonder why we don't
> just *default* to it.... you hate me, don't you?
>
> Your previous set of patches did that, and it did end up requiring that
> the VMM restore both VCPU_INFO and VCPU_ID for each vCPU *before*
> restoring the SHARED_INFO_HVA on resume, but wasn't that OK?

No. It was painful and overly complex, with too many corner cases that
were making my brain hurt. Given the pain of the last week, leaving the
default handling in the VMM is preferable. It means I don't need to
impose rules about attribute ordering, and hence don't get caught
needing one thread to wait for another inside the VMM. It's better and
more robust this way.
Paul