On 02/15/2013 08:56:14 PM, Paul Mackerras wrote:
> I have no particular objection to the device control API per se, but
> I have two objections to using it as the primary interface to the
> XICS emulation.
>
> First, I dislike the magical side-effect where creating a device of
> a particular type (e.g. MPIC or XICS) automatically attaches it to
> the interrupt lines of the vcpus. I prefer an explicit request to do
> in-kernel interrupt control.
OK. This is device-specific behavior, so you could define it
differently for XICS than MPIC. I suppose we could change it for MPIC
as well, to leave an opening for the unlikely case where we'd want to
model an MPIC that isn't directly connected to the CPUs.
How is the explicit request made in this patchset?
> Secondly, it means that we are completely abandoning any attempt to
> define an abstract or generic interface to in-kernel interrupt
> controller emulations. Each device will have its own unique set of
> attribute groups and its own unique userspace code to drive it, with
> no commonality between them.
Yes. I am unconvinced that such an abstraction is well-advised
(especially after seeing existing "generic" interfaces that are clearly
APIC-oriented). This isn't like normal driver interfaces where we're
abstracting away hardware differences to let generic code use a
device. Userspace knows what kind of device it wants, and how it wants
it to integrate with the rest of the emulated system. We'd have to go
out of our way to apply the abstraction on *both* ends. What do we get
from that other than a chance that the abstraction leaks? What
significant code actually becomes common? kvm_set_irq() is just a thin
wrapper around the ioctl.
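Concretely, the userspace side of the device control API is just a
couple of ioctls -- here's a minimal sketch, assuming the ioctls and
the MPIC attribute names from this patchset, with error handling
elided:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Create an in-kernel MPIC and set one device-specific attribute
 * (its base address).  KVM_DEV_TYPE_FSL_MPIC_42,
 * KVM_DEV_MPIC_GRP_MISC, and KVM_DEV_MPIC_BASE_ADDR are the names
 * used in this patchset. */
static int create_inkernel_mpic(int vmfd, uint64_t base_addr)
{
	struct kvm_create_device cd = {
		.type = KVM_DEV_TYPE_FSL_MPIC_42,
	};
	struct kvm_device_attr attr = {
		.group = KVM_DEV_MPIC_GRP_MISC,
		.attr = KVM_DEV_MPIC_BASE_ADDR,
		.addr = (uint64_t)(unsigned long)&base_addr,
	};

	if (ioctl(vmfd, KVM_CREATE_DEVICE, &cd) < 0)
		return -1;

	if (ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr) < 0)
		return -1;

	/* Further device-specific attribute groups hang off cd.fd. */
	return cd.fd;
}

The only generic part is the attribute plumbing; everything
interesting lives in the device-specific groups, which is sort of the
point.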
> > > We have live migration working in qemu for pSeries guests with
> > > in-kernel XICS emulation using this interface. If you're not
> > > doing live migration,
> >
> > We don't yet, but would prefer not to assume that it'll never
> > happen.
> >
> > > > for interrupt injection, what if there's a race with the user
> > > > changing other flags via MMIO? Maybe this isn't an issue with
> > > > XICS, but this is being presented as a generic API.
> > >
> > > They're not used while the guest is running, as I said, but even
> > > if they were, there is appropriate locking in there to handle
> > > any races.
> >
> > OK, KVM_IRQ_LINE is still used for interrupt injection. I was
> > hoping to avoid going through a standardized interface that forces
> > a global interrupt numberspace.
> Why?
The standardized interface doesn't make things any easier (as noted
above, the caller is already mpic-specific code), and we'd have to come
up with a scheme for flattening our interrupt numberspace (rather than
introduce new attribute groups for things like IPI and timer
interrupts). It may still be necessary when it comes to irqfd,
though...
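To illustrate what flattening would mean -- the IRQTYPE_* encoding
below is made up for this example, not anything in the patchset:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical flattening scheme, for illustration only: pack a
 * source type (external/IPI/timer) and a per-type index into the
 * single 32-bit irq number that KVM_IRQ_LINE gives us. */
#define IRQTYPE_SHIFT	24
#define IRQTYPE_EXT	0
#define IRQTYPE_IPI	1
#define IRQTYPE_TIMER	2

static int set_irq_line(int vmfd, uint32_t type, uint32_t num,
			int level)
{
	struct kvm_irq_level irq = {
		.irq = (type << IRQTYPE_SHIFT) | num,
		.level = level,
	};

	return ioctl(vmfd, KVM_IRQ_LINE, &irq);
}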
> > How do MSIs get injected?
> Just like other interrupts - from the point of view of the interrupt
> controller they're edge-triggered interrupt sources.
Ah right, I guess this is all set up via hcalls for XICS.
With MPIC exposing its registers via the device control API,
everything just works -- the PCI device generates a write to the
MPIC's memory region, the QEMU MPIC stub sends the write to the kernel
as for any other MMIO access (this passthrough is also useful for
debugging), the in-kernel MPIC sees the write to the "generate an MSI"
register and does its thing. Compare that to all the special MSI code
for APIC... :-)
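In code, the stub's write handler is little more than this (a sketch,
assuming the register-access attribute group from this patchset,
where attr is the register offset):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch of the passthrough: forward a guest MMIO write to the
 * in-kernel MPIC unmodified.  An MSI is just such a write landing on
 * the MSI-trigger register; the kernel decodes it like any other
 * register access. */
static int mpic_stub_mmio_write(int mpic_fd, uint64_t offset,
				uint32_t val)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_MPIC_GRP_REGISTER,
		.attr = offset,	/* register offset within the MPIC */
		.addr = (uint64_t)(unsigned long)&val,
	};

	return ioctl(mpic_fd, KVM_SET_DEVICE_ATTR, &attr);
}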
> > BTW, do you have any plans regarding irqfd?
> I'm going to look at that next.
Likewise... We should probably coordinate our efforts so that at least
the de-APICization of the code only has to get done once.
> > What about interrupt controllers that allow multiple destinations?
> The destination can be an identifier for a group of vcpus, or even a
> bitmap -- that's why I made it 32 bits.
So you can have single delivery, or be limited to 32 vcpus, or have to
implement some destination ID allocation scheme (which is more state
that needs to be accessible somehow).
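Spelled out, the three readings of a 32-bit destination field
(illustrative only):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only -- the three ways to read a 32-bit destination:
 *  1. a single vcpu id       -> single delivery only
 *  2. a bitmap of vcpus      -> hard cap of 32 vcpus
 *  3. an allocated group id  -> needs an id-to-vcpu-set table,
 *     i.e. more state that has to be accessible for migration */
static bool dest_bitmap_matches(uint32_t dest, unsigned int vcpu_id)
{
	return vcpu_id < 32 && (dest & (1u << vcpu_id));
}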
> > More than 256 priorities? Different "levels" of output (normal,
> > critical, machine check)? Programmable vector numbers? Active
> > high/low control?
> There are plenty of bits free in the 64 bits per source that I have
> allowed. We can accommodate those things.
MPIC vector numbers take up 16 of the bits. The architected interrupt
level field is 8 bits, though only a handful of values are actually
needed. Add a couple binary flags, and it gets pretty tight if a third
type of interrupt controller starts wanting something new.
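A possible layout -- illustrative only, not the patchset's actual
encoding -- shows how fast the bit budget goes:

#include <stdint.h>

/* Illustrative layout only, just to count bits against the
 * 64-bit-per-source budget: */
#define SRC_VECTOR	0x000000000000ffffULL	/* 16-bit MPIC vector */
#define SRC_LEVEL	0x0000000000ff0000ULL	/* 8-bit architected level */
#define SRC_ACTIVE_LOW	0x0000000001000000ULL	/* polarity flag */
#define SRC_PENDING	0x0000000002000000ULL	/* latched-pending flag */
/* ...and a third controller type's fields have to fit in whatever
 * bits remain, alongside all of these. */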
> (BTW, I think having more than 256 priorities would be insane - do
> you know of any actual example that does?)
No, but hardware designers have been known to do insane things.
> > The per-vcpu state isn't even part of this AFAICT. It's an
> > XICS-specific ONE_REG -- which is fine, but all that's left of the
> > "generic" API is the get/set sources, which is an imperfect match
> > to our per-IRQ state, and it's not clear how an implementation
> > should extend it.
> Yes, the names of the bitfields in the ICP state word are
> XICS-specific, but the concepts are pretty generic - current
> processor priority, identifier for interrupt awaiting service,
> pending IPI request priority, pending interrupt request priority.
We don't have separate concepts of "pending IPI request priority" and
"pending interrupt request priority". There can be multiple interrupts
awaiting service (or even in service, if they have different
priorities). We have both a "current task priority" (a user-set
mask-by-priority register) and the priority of the highest-priority
in-service interrupt -- which of those would "current processor
priority" be? Etc.
-Scott