Re: [PATCH 06/19] KVM: PPC: Book3S HV: add a GET_ESB_FD control to the XIVE native device

On Thu, Feb 07, 2019 at 10:03:15AM +0100, Cédric Le Goater wrote:
> On 2/7/19 3:49 AM, David Gibson wrote:
> > On Wed, Feb 06, 2019 at 08:21:10AM +0100, Cédric Le Goater wrote:
> >> On 2/6/19 2:23 AM, David Gibson wrote:
> >>> On Tue, Feb 05, 2019 at 01:55:40PM +0100, Cédric Le Goater wrote:
> >>>> On 2/5/19 6:28 AM, David Gibson wrote:
> >>>>> On Mon, Feb 04, 2019 at 12:30:39PM +0100, Cédric Le Goater wrote:
> >>>>>> On 2/4/19 5:45 AM, David Gibson wrote:
> >>>>>>> On Mon, Jan 07, 2019 at 07:43:18PM +0100, Cédric Le Goater wrote:
> >>>>>>>> This will let the guest create a memory mapping to expose the ESB MMIO
> >>>>>>>> regions used to control the interrupt sources, to trigger events, to
> >>>>>>>> EOI or to turn off the sources.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Cédric Le Goater <clg@xxxxxxxx>
> >>>>>>>> ---
> >>>>>>>>  arch/powerpc/include/uapi/asm/kvm.h   |  4 ++
> >>>>>>>>  arch/powerpc/kvm/book3s_xive_native.c | 97 +++++++++++++++++++++++++++
> >>>>>>>>  2 files changed, 101 insertions(+)
> >>>>>>>>
> >>>>>>>> diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
> >>>>>>>> index 8c876c166ef2..6bb61ba141c2 100644
> >>>>>>>> --- a/arch/powerpc/include/uapi/asm/kvm.h
> >>>>>>>> +++ b/arch/powerpc/include/uapi/asm/kvm.h
> >>>>>>>> @@ -675,4 +675,8 @@ struct kvm_ppc_cpu_char {
> >>>>>>>>  #define  KVM_XICS_PRESENTED		(1ULL << 43)
> >>>>>>>>  #define  KVM_XICS_QUEUED		(1ULL << 44)
> >>>>>>>>  
> >>>>>>>> +/* POWER9 XIVE Native Interrupt Controller */
> >>>>>>>> +#define KVM_DEV_XIVE_GRP_CTRL		1
> >>>>>>>> +#define   KVM_DEV_XIVE_GET_ESB_FD	1
> >>>>>>>
> >>>>>>> Introducing a new FD for ESB and TIMA seems overkill.  Can't you get
> >>>>>>> to both with an mmap() directly on the xive device fd?  Using the
> >>>>>>> offset to distinguish which one to map, obviously.
> >>>>>>
> >>>>>> The page offset would define some sort of user API. It seems
> >>>>>> feasible, but I am not sure it would be practical in the future
> >>>>>> if we need to tune the length.
> >>>>>
> >>>>> Um.. why not?  I mean, yes the XIVE supports rather a lot of
> >>>>> interrupts, but we have 64-bits of offset we can play with - we can
> >>>>> leave room for billions of ESB slots and still have room for billions
> >>>>> of VPs.
> >>>>
> >>>> So the first 4 pages could be the TIMA pages, and then would come
> >>>> the pages for the interrupt ESBs. I think that we can have a
> >>>> different vm_fault handler for each mapping.
> >>>
> >>> Um.. no, I'm saying you don't need to tightly pack them.  So you could
> >>> have the ESB pages at 0, the TIMA at, say offset 2^60.
> >>
> >> Well, we know that the TIMA is 4 pages wide and is "directly" related
> >> to the KVM interrupt device, so being at offset 0 seems a good idea.
> >> The ESB segment, on the other hand, has a variable size depending on
> >> the number of IRQs, so I think it can come after.
> >>
> >>>> I wonder how this will work out with pass-through. As Paul said in 
> >>>> a previous email, it would be better to let QEMU request a new 
> >>>> mapping to handle the ESB pages of the device being passed through.
> >>>> I guess this is not a special case, just another offset and length.
> >>>
> >>> Right, if we need multiple "chunks" of ESB pages we can given them
> >>> each their own terabyte or several.  No need to be stingy with address
> >>> space.
> >>
> >> You cannot put them just anywhere. They should map the same interrupt
> >> range of ESB pages, overlapping with the underlying segment of IPI ESB
> >> pages.
> > 
> > I don't really follow what you're saying here.
> 
> 
> What we want the guest to access in terms of ESB pages is something like
> below, VMA0 being the initial mapping done by QEMU at offset 0x0, the IPI
> ESB pages being populated on demand by the loads and stores from
> the guest:
> 
> 
>                  0x0                   0x1100  0x1200    0x1300     
>       
>          ranges   |       CPU IPIs   .. |  VIO  | PCI LSI |  MSIs
>        	  
>                   +-+-+-+-+-+-+-...-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- ....
>  VMA0    IPI ESB  | | | | | | |     | | | | | | | | | | | | | | | | | |
>           pages   +-+-+-+-+-+-+-...-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- ....
> 
> 
> 
> A device is passed through and the driver requests MSIs. 
> 
> We now want the guest to access the HW ESB pages for the requested IRQs 
> but still the initial IPI ESB pages for the others. Something like below : 
> 
> 
>                  0x0                   0x1100  0x1200    0x1300     
>       
>          ranges   |       CPU IPIs   .. |  VIO  | PCI LSI |  MSIs
> 
>                   +-+-+-+-+-+-+-...-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- ....
>  VMA0    IPI ESB  | | | | | | |     | | | | | | | | | | | | | | | | | |
>           pages   +-+-+-+-+-+-+-...-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- ....
>                                                                   
>  VMA1    PHB ESB                                          +-------+
>           pages                                           | | | | | 
>                                                           +-------+

Right, except of course VMA0 will be split into two pieces by
performing the mmap() over it.

> The VMA1 is the result of a new mmap() being done at an offset depending on 
> the first IRQ number requested by the driver.

Right... that's one way we could do it.  But the irq numbers are all
dynamically allocated here, so could we instead just put the
passthrough MSIs in a separate range?  We'd still need a separate
mmap() for them, but we wouldn't have to deal with mapping over and
unmapping if the device is removed or whatever.

> This is because the vm_fault handler uses the page offset to find the 
> associated KVM IRQ struct containing the addresses of the EOI and trigger 
> pages in the underlying hardware, which will be the PHB in the case of a 
> passthrough device.  
> 
> From there, the VMA1 mmap() pointer will be used to create a 'ram device'
> memory region, which will be mapped on top of the initial ESB memory region 
> in QEMU. This will override the initial IPI ESB pages with the PHB ESB pages 
> in the guest ESB address space.

Um.. what?  If that qemu memory range is already mapped into the guest
we don't need to create new RAM devices or anything for the
overmapping.  If we overmap in qemu that will just get carried into
the guest.

> That's the plan I have in mind, as suggested by Paul, if I understood it
> correctly. The mechanics are more complex than the patch zapping the PTEs
> from the VMA, but it's also safer.

Well, yes, where "safer" means "has the possibility to be correct".

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
