Re: [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM

On Mon, Apr 14, 2014 at 3:45 PM, Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
On 11/04/14 12:09, Antonios Motakis wrote:
> On Thu, Apr 10, 2014 at 12:51 PM, Peter Maydell
> <peter.maydell@xxxxxxxxxx> wrote:
>>
>> On 10 April 2014 09:58, Antonios Motakis
>> <a.motakis@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>>> Though in this case, what makes IRQ routing support useful is not any
>>> particular feature it enables, but how it is used as a standard
>>> interface towards in-kernel IRQ chips for KVM. The eventfd support in
>>> KVM makes heavy use of that, so IRQ routing gives us IRQFDs without
>>> having to completely butcher all the eventfd and irqfd code.
>>
>> I think you should propose a concrete API and give examples
>> of how userspace would be using it; these abstract discussions
>> aren't really coming together in my head. Can the kernel
>> just set up the initial routing mapping as 1:1 so userspace
>> can ignore the pointless extra level of indirection?
>>
>
> Yes, this is what the user gets by default. Unless KVM_SET_GSI_ROUTING
> is used, userspace should not be able to tell the difference.
>
> KVM_IRQ_LINE is used to inject an IRQ, and based on the provided irq
> field the right VGIC pin will be triggered. The mapping of the irq
> field to a VGIC pin would be as it is already documented today:
>
>>   bits:  | 31 ... 24 | 23  ... 16 | 15    ...    0 |
>>   field: | irq_type  | vcpu_index |     irq_id     |
>>
>> The irq_type field has the following values:
>> - irq_type[0]: out-of-kernel GIC: irq_id 0 is IRQ, irq_id 1 is FIQ
>> - irq_type[1]: in-kernel GIC: SPI, irq_id between 32 and 1019 (incl.)
>>                (the vcpu_index field is ignored)
>> - irq_type[2]: in-kernel GIC: PPI, irq_id between 16 and 31 (incl.)
>
> This should be still valid, by default. The only thing that routing
> adds, is the capability to use KVM_SET_GSI_ROUTING to change this
> mapping to something else (and towards the pins of multiple IRQ chips,
> if that need comes up).
>
> Though the part that is of interest to IRQFDs is not the new API to
> change the routing. The neat point is that we get an abstraction in
> the kernel that allows us to interact with the IRQ chip without having
> to deal with the semantics of how that IRQ should be interpreted on
> that platform, and the IRQFD code makes use of that.
>
> With KVM_SET_GSI_ROUTING one can provide an array of struct
> kvm_irq_routing_entry entries:
>
> struct kvm_irq_routing_entry {
>     __u32 gsi;
>     __u32 type;
>     __u32 flags;
>     __u32 pad;
>     union {
>         struct kvm_irq_routing_irqchip irqchip;
>         struct kvm_irq_routing_msi msi;
>         __u32 pad[8];
>     } u;
> };
>
> struct kvm_irq_routing_irqchip {
>     __u32 irqchip;
>     __u32 pin;
> };
>
> struct kvm_irq_routing_msi {
>     __u32 address_lo;
>     __u32 address_hi;
>     __u32 data;
>     __u32 pad;
> };
>
> __u32 gsi is the global interrupt that we want to match to an IRQ pin.
> We map this to an __u32 irqchip and __u32 pin.
>
> For VGIC we just need to define what pins we will expose. For VGICv2
> that would be 8 CPUs times 16 PPIs plus the SPIs.

Note that this will somehow change for GICv3, which supports up to 2^32
CPUs, and up to 2^32 interrupt IDs. We could decide to limit ourselves
to, let's say, 256 CPUs, and 16bits of ID space, but that would be a
rather massive limitation.

Hm, that limitation is pretty interesting actually... KVM_SET_GSI_ROUTING is a vm ioctl, so to do this properly we would need to set some GSIs at the vcpu level. It seems we either limit ourselves, or we find a neat way to change the API.

KVM_IRQ_LINE however is already limited to 256 CPUs. So a way to encode more than 256 target CPUs with KVM_SET_GSI_ROUTING would actually enable us to use more than 256 VCPUs without breaking KVM_IRQ_LINE in the future, since the existing limit in that case would be just a default that we can change.


> Another difference from other platforms is that we would allow
> rerouting based only on the 24 least significant bits; the 8 most
> significant bits are already defined to distinguish between in-kernel
> and out-of-kernel IRQs. We would only support routing for the
> in-kernel GIC; asking to reroute an out-of-kernel GIC should return
> an error to userspace.

Why do we have to be tied to the current representation that userspace
uses? It seems to me like an unnecessary limitation.


I guess it is a matter of taste. By allowing that, we would lock userspace out of using an out-of-kernel GIC as soon as it decided to change the routing. Of course, if userspace decides to do that, it almost certainly plans to use the in-kernel implementation anyway.
 
        M.
--
Jazz is not dead. It just smells funny...



--
Antonios Motakis
Virtual Open Systems
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
