On 11/13/2014 04:31 PM, Andre Przywara wrote:
>
>
> On 13/11/14 15:02, Nikolay Nikolaev wrote:
>> On Thu, Nov 13, 2014 at 4:23 PM, Eric Auger <eric.auger@xxxxxxxxxx> wrote:
>>> On 11/13/2014 03:16 PM, Eric Auger wrote:
>>>> On 11/13/2014 11:45 AM, Nikolay Nikolaev wrote:
>>>>> On Mon, Nov 10, 2014 at 6:27 PM, Christoffer Dall
>>>>> <christoffer.dall@xxxxxxxxxx> wrote:
>>>>>> On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
>>>>>>> <christoffer.dall@xxxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>>>>>>>>> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>>>>>>>>> handle the MMIO access through any registered read/write callbacks. This
>>>>>>>>> is a dependency for eventfd support (ioeventfd and irqfd).
>>>>>>>>>
>>>>>>>>> However, accesses to the VGIC are still left implemented independently,
>>>>>>>>> since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Antonios Motakis <a.motakis@xxxxxxxxxxxxxxxxxxxxxx>
>>>>>>>>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@xxxxxxxxxxxxxxxxxxxxxx>
>>>>>>>>> ---
>>>>>>>>>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>>>>>>>>>  virt/kvm/arm/vgic.c |  5 ++++-
>>>>>>>>>  2 files changed, 36 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>>>>>>>>> index 4cb5a93..1d17831 100644
>>>>>>>>> --- a/arch/arm/kvm/mmio.c
>>>>>>>>> +++ b/arch/arm/kvm/mmio.c
>>>>>>>>> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>>>>>  	return 0;
>>>>>>>>>  }
>>>>>>>>>
>>>>>>>>> +/**
>>>>>>>>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>>>>>>>>> + * @vcpu:	pointer to the vcpu performing the access
>>>>>>>>> + * @run:	pointer to the kvm_run structure
>>>>>>>>> + * @mmio:	pointer to the data describing the access
>>>>>>>>> + *
>>>>>>>>> + * returns true if the MMIO access has been performed in kernel space,
>>>>>>>>> + * and false if it needs to be emulated in user space.
>>>>>>>>> + */
>>>>>>>>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>>>> +		struct kvm_exit_mmio *mmio)
>>>>>>>>> +{
>>>>>>>>> +	int ret;
>>>>>>>>> +	if (mmio->is_write) {
>>>>>>>>> +		ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>>>>> +				mmio->len, &mmio->data);
>>>>>>>>> +
>>>>>>>>> +	} else {
>>>>>>>>> +		ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>>>>> +				mmio->len, &mmio->data);
>>>>>>>>> +	}
>>>>>>>>> +	if (!ret) {
>>>>>>>>> +		kvm_prepare_mmio(run, mmio);
>>>>>>>>> +		kvm_handle_mmio_return(vcpu, run);
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>> +	return !ret;
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>>>>  		 phys_addr_t fault_ipa)
>>>>>>>>>  {
>>>>>>>>> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>>>>  	if (vgic_handle_mmio(vcpu, run, &mmio))
>>>>>>>>>  		return 1;
>>>>>>>>>
>>>>>>>>> +	if (handle_kernel_mmio(vcpu, run, &mmio))
>>>>>>>>> +		return 1;
>>>>>>>>> +
>>>>>>>
>>>>>>>
>>>>>>> We're reconsidering the ioeventfd patch series and we tried to evaluate
>>>>>>> what you suggested here.
>>>>>>>
>>>>>>>>
>>>>>>>> this special-casing of the vgic is now really terrible. Is there
>>>>>>>> anything holding you back from doing the necessary restructure of the
>>>>>>>> kvm_bus_io_*() API instead?
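
[For reference, a minimal sketch of the kind of restructure being asked
about here, assuming kvm_io_bus_read()/kvm_io_bus_write() were changed to
take a struct kvm_vcpu * as their first argument. This is a hypothetical
signature for illustration, not the API that was in the tree at the time.]

	/*
	 * Hypothetical sketch only: handle_kernel_mmio() if the io-bus
	 * helpers took the vcpu, so in-kernel devices such as the VGIC
	 * could know which vcpu performs the access.
	 */
	static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
				       struct kvm_exit_mmio *mmio)
	{
		int ret;

		if (mmio->is_write)
			ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
					       mmio->len, &mmio->data);
		else
			ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, mmio->phys_addr,
					      mmio->len, &mmio->data);

		if (!ret) {
			/* the access was claimed by an in-kernel device */
			kvm_prepare_mmio(run, mmio);
			kvm_handle_mmio_return(vcpu, run);
		}

		return !ret;
	}
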
>>>>>>>
>>>>>>> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
>>>>>>> it), but it is not directly related to these patches.
>>>>>>> Of course it can be justified if we do it in the context of removing
>>>>>>> vgic_handle_mmio and leaving only handle_kernel_mmio.
>>>>>>>
>>>>>>>>
>>>>>>>> That would allow us to get rid of the ugly
>>>>>>>> Fix it! in the vgic driver as well.
>>>>>>>
>>>>>>> Going through vgic_handle_mmio we see that it will require large
>>>>>>> refactoring:
>>>>>>> - there are 15 MMIO ranges for the vgic now - each should be
>>>>>>>   registered as a separate device
>>> Re-correcting Andre's address, sorry:
>>> Hi Nikolay, Andre,
>>>
>>> what mandates registering 15 devices? Isn't it possible to register a
>>> single kvm_io_device covering the whole distributor range [base, base +
>>> KVM_VGIC_V2_DIST_SIZE] (current code) and, in the associated
>>> kvm_io_device_ops read/write, locate the addressed range and do the same
>>> as what is done in the current vgic_handle_mmio? Isn't it done that way for
>>
>> Well, then we'll actually get slower mmio processing. Instead of calling
>> vgic_handle_mmio in io_mem_abort, we'll be calling kvm_io_bus_write.
>> This just adds another level of translation (i.e. find the kvm_io_device)
>> and the underlying vgic code will remain almost the same.
>
> Agreed. That was one possibility I came across as well, but I think it
> defeats the purpose of the rework, which is mostly to get rid of the
> GIC's private MMIO dispatching code, right?

Hi Andre, Nikolay,

Well, my feeling is that one of your primary goals was to split the v2 and
v3 MMIO regions in a clean way. This may be achieved by creating either one
as an io_device. Also, in the future, don't we have the ITS & redistributors
showing up, which could map to io_devices too - but all that stuff is still
very fuzzy for me ;-) - ?

With respect to the vcpu, we could add an opaque void * in kvm_io_device
and pass the vcpu when calling kvm_iodevice_init().

I also think we must resume the integration of ioeventfd, whatever the
chosen solution, all the more so since there seems to be some momentum on
this now.

Best Regards

Eric

>
> But honestly I would happily sacrifice "performance" for easier VGIC
> code - especially if one thinks about security for instance. Though I
> think that another memory reference doesn't really matter in this
> context ;-)
>
>>> the ioapic? what do I miss?
>> I looked quickly into the ioapic code, and if I get it right there are no
>> "ranges" like what we have with the GIC. They have this regselect/regwindow
>> concept and they seem to have far fewer "registers" to handle. The GIC
>> seems a lot more complex in terms of MMIO interface.
>
> Right, that was my impression, too. The IOAPIC isn't really comparable to
> the GIC in this respect. That's why I was moving away from this rework,
> since I thought that the kvm_io_bus API wasn't really meant for such
> beasts as the GIC.
>
> Cheers,
> Andre.
>
>>
>> regards,
>> Nikolay Nikolaev
>>
>>>
>>> Thanks
>>>
>>> Best Regards
>>>
>>> Eric
>>>>>>> - the handler of each range should be split into read and write
>>>>>>> - all handlers take a 'struct kvm_exit_mmio' and pass it to
>>>>>>>   'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
>>>>>>>
>>>>>>> To sum up - if we do this refactoring of the vgic's MMIO handling +
>>>>>>> the kvm_io_bus_ API getting a 'vcpu' argument, we'll get 'much' cleaner
>>>>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
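
[A minimal sketch of the single-device approach discussed above, assuming
the VGIC embeds a kvm_io_device in its distributor state and dispatches
from the kvm_io_device_ops callbacks via container_of(). The names
vgic_io_dev, vgic_dist_read/write and vgic_register_dist_iodev are
illustrative only; the actual range-table dispatch is left as a comment.]

	/* Illustrative sketch, not an existing implementation. */
	struct vgic_io_dev {
		struct kvm_io_device dev;
		struct vgic_dist *dist;	/* back-pointer to distributor state */
		gpa_t base;		/* distributor base address */
	};

	static int vgic_dist_read(struct kvm_io_device *this, gpa_t addr,
				  int len, void *val)
	{
		struct vgic_io_dev *iodev =
			container_of(this, struct vgic_io_dev, dev);
		gpa_t offset = addr - iodev->base;

		/*
		 * Look up 'offset' in the existing vgic range table and call
		 * the matching read handler, much as vgic_handle_mmio() does
		 * today.  Omitted in this sketch.
		 */
		(void)offset;
		return 0;	/* 0 means the access was handled on this bus */
	}

	static int vgic_dist_write(struct kvm_io_device *this, gpa_t addr,
				   int len, const void *val)
	{
		struct vgic_io_dev *iodev =
			container_of(this, struct vgic_io_dev, dev);
		gpa_t offset = addr - iodev->base;

		/* same dispatch as above, write side; omitted */
		(void)offset;
		return 0;
	}

	static const struct kvm_io_device_ops vgic_dist_ops = {
		.read	= vgic_dist_read,
		.write	= vgic_dist_write,
	};

	/* Register one device covering the whole distributor range;
	 * kvm->slots_lock must be held by the caller. */
	static int vgic_register_dist_iodev(struct kvm *kvm,
					    struct vgic_io_dev *iodev, gpa_t base)
	{
		iodev->base = base;
		kvm_iodevice_init(&iodev->dev, &vgic_dist_ops);
		return kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, base,
					       KVM_VGIC_V2_DIST_SIZE, &iodev->dev);
	}
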
>>>>>>>
>>>>>>> We have 3 questions:
>>>>>>> - is the kvm_io_bus_ API getting a 'vcpu' argument acceptable for the
>>>>>>>   other architectures too?
>>>>>>> - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>>>>>   touches a lot of code)?
>>>>>>> - is there a way that ioeventfd is accepted leaving vgic.c in its
>>>>>>>   current state?
>>>>>>>
>>>>>> Not sure how the latter question is relevant to this, but check with
>>>>>> Andre who recently looked at this as well and decided that for GICv3 the
>>>>>> only sane thing was to remove that comment for the gic.
>>>>> @Andre - what's your experience with the GICv3 and MMIO handling,
>>>>> anything specific?
>>>>>>
>>>>>> I don't recall the details of what you were trying to accomplish here
>>>>>> (it's been 8 months or so), but surely the vgic handling code should
>>>>>> *somehow* be integrated into handle_kernel_mmio (like Paolo
>>>>>> suggested), unless you come back and tell me that that would involve a
>>>>>> complete rewrite of the vgic code.
>>>>> I'm experimenting now - it's not exactly a rewrite of the whole vgic code,
>>>>> but it will touch a lot of it - all MMIO access handlers and the
>>>>> supporting functions.
>>>>> We're ready to spend the effort. My question is - is this acceptable?
>>>>>
>>>>> regards,
>>>>> Nikolay Nikolaev
>>>>> Virtual Open Systems
>>>>>>
>>>>>> -Christoffer
>>>>>
>>>>
>>>
>>
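
[For context on the 'ioeventfd' capabilities mentioned in the thread: once
MMIO aborts are routed through kvm_io_bus_write(), userspace can register
an eventfd on a guest physical address with the existing KVM_IOEVENTFD
ioctl, and a guest write to that address signals the eventfd in the kernel
without a return to userspace. Illustrative userspace snippet; error
handling is trimmed and the doorbell address is an arbitrary example.]

	#include <stdint.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Register an eventfd signalled on guest writes to a 4-byte MMIO
	 * doorbell register.  'vm_fd' is an already-created VM descriptor. */
	static int register_doorbell(int vm_fd, uint64_t gpa)
	{
		int efd = eventfd(0, 0);
		struct kvm_ioeventfd ioe = {
			.addr  = gpa,	/* guest physical address to watch */
			.len   = 4,
			.fd    = efd,
			.flags = 0,	/* MMIO, no datamatch */
		};

		if (efd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &ioe) < 0)
			return -1;
		return efd;	/* poll/read this fd to observe guest kicks */
	}
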