Re: [PATCH v3 17/17] KVM: x86/xen: Add event channel interrupt vector upcall

On 12/14/20 2:57 PM, David Woodhouse wrote:
> On Mon, 2020-12-14 at 13:19 +0000, Joao Martins wrote:
>> But I think there's a flaw here, namely handling the case where you don't have a
>> vcpu_info registered and only have shared_info. The vcpu_info is then placed
>> elsewhere, i.e. at some other offset within shared_info -- which is *I think* the
>> case for PVHVM Windows guests.
> 
> There is no such case any more. In my v3 patch set I *stopped* the
> kernel from attempting to use the vcpu_info embedded in the shinfo, and
> went to *requiring* that the VMM explicitly tell the kernel where it
> is.
> 
Sigh, yes, I forgot about that -- and you did mention it in earlier posts.

> $ git diff xenpv-post-2..xenpv-post-3 -- Documentation
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index d98c2ff90880..d1c30105e6fd 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -4834,7 +4834,7 @@ experience inconsistent filtering behavior on MSR accesses.
>                         __u64 gfn;
>                 } shared_info;
>                 struct {
> -                       __u32 vcpu;
> +                       __u32 vcpu_id;
>                         __u64 gpa;
>                 } vcpu_attr;
>                 __u64 pad[4];
> @@ -4849,9 +4849,13 @@ KVM_XEN_ATTR_TYPE_LONG_MODE
>  
>  KVM_XEN_ATTR_TYPE_SHARED_INFO
>    Sets the guest physical frame number at which the Xen "shared info"
> -  page resides. It is the default location for the vcpu_info for the
> -  first 32 vCPUs, and contains other VM-wide data structures shared
> -  between the VM and the host.
> +  page resides. Note that although Xen places vcpu_info for the first
> +  32 vCPUs in the shared_info page, KVM does not automatically do so
> +  and requires that KVM_XEN_ATTR_TYPE_VCPU_INFO be used explicitly
> +  even when the vcpu_info for a given vCPU resides at the "default"
> +  location in the shared_info page. This is because KVM is not aware
> +  of the Xen CPU id which is used as the index into the vcpu_info[]
> +  array, so cannot know the correct default location.
>  
/me nods

>  KVM_XEN_ATTR_TYPE_VCPU_INFO
>    Sets the guest physical address of the vcpu_info for a given vCPU.
> 
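/me nods again. For the archives, I suppose the VMM usage then ends up looking
roughly like the below -- a sketch only, with the KVM_XEN_HVM_SET_ATTR ioctl name
and the union layout assumed from the api.rst hunk above, and xen_vcpu_id being
the Xen/ACPI id that only userspace knows:

	/* Sketch: register shared_info, then explicitly register each
	 * vcpu_info even when it sits at the "default" slot, since KVM
	 * cannot derive the Xen vCPU id (xen_vcpu_id) by itself. */
	struct kvm_xen_hvm_attr si = {
		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
		.u.shared_info.gfn = shinfo_gfn,
	};
	struct kvm_xen_hvm_attr vi = {
		.type = KVM_XEN_ATTR_TYPE_VCPU_INFO,
		.u.vcpu_attr.vcpu_id = vcpu_id,	/* KVM's vCPU id */
		.u.vcpu_attr.gpa = (shinfo_gfn << PAGE_SHIFT) +
				   offsetof(struct shared_info, vcpu_info) +
				   xen_vcpu_id * sizeof(struct vcpu_info),
	};

	if (ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &si) ||
	    ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &vi))
		err(1, "KVM_XEN_HVM_SET_ATTR");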
>> Perhaps introduce a helper, say xen_vcpu_info(), which returns the hva (picking
>> the right cache), similar to the RFC patch. Albeit that was with page pinning;
>> borrowing an older version I had with the hva_to_gfn_cache incantation, it would
>> probably look like:
>>
>>
>>         if (v->arch.xen.vcpu_info_set) {
>>                 ghc = &v->arch.xen.vcpu_info_cache;
>>         } else {
>>                 /* fall back to the default vcpu_info slot in shared_info */
>>                 ghc = &v->kvm->arch.xen.shinfo_cache;
>>                 offset += offsetof(struct shared_info, vcpu_info);
>>                 offset += (v - kvm_get_vcpu_by_id(0)) * sizeof(struct vcpu_info);
>>         }
> 
> The problem is that we *can't* just use shared_info->vcpu_info[N]
> because we don't know what N is.
> 
> This is what I spent half of last week chasing down, because whatever I
> tried, the original version just ended up delivering interrupts to the
> wrong CPUs.
> 
> The N we want there is actually the *XEN* vcpu_id, which Xen puts into
> CPUID leaf 0x40000004.ebx and which is also the ACPI ID. (Those two had
> better match up since Linux guests use CPUID 0x40000004 for the BSP and
> ACPI for the APs).
> 
> The kernel knows nothing of those numbers. In particular they do *not*
> match up to the indices in kvm->vcpus[M] (which is vcpu->vcpu_idx and
> means nothing except the chronological order in which each vCPU's
> userspace thread happened to create it during startup), and they do not
> match up to vcpu->vcpu_id which is the APIC (not ACPI) ID.
> 
> The kernel doesn't know. Let userspace tell it, since we already needed
> that API for the separate vcpu_info case anyway.
> 
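Right. And for completeness, the guest side reads that id back roughly like so --
a sketch from memory of Linux's Xen guest setup code, assuming a cpuid() helper
(the APs instead take their ACPI ids, as you say):

	uint32_t eax, ebx, ecx, edx;

	/* Xen base leaf 0x40000000 + 4 is the HVM-specific leaf; ebx
	 * carries the Xen vcpu_id when the feature bit is present. */
	cpuid(0x40000004, &eax, &ebx, &ecx, &edx);
	if (eax & XEN_HVM_CPUID_VCPU_ID_PRESENT)
		xen_vcpu_id = ebx;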

All this time, I was only considering the guest's XENMEM_populate_physmap() hypercall
for shared_info, and vcpu_info registration as the trigger for the VMM to set the
corresponding attributes. So the VMM registers the vcpu_info regardless of where the
guest places it; I didn't correlate that with your earlier comment (which I had also
forgotten) that you had removed exactly that complexity.

	Joao


