On Sun, Aug 19, 2012 at 3:22 PM, Peter Maydell <peter.maydell@xxxxxxxxxx> wrote:
> On 19 August 2012 20:12, Christoffer Dall <c.dall@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>> On Sun, Aug 19, 2012 at 1:01 PM, Peter Maydell <peter.maydell@xxxxxxxxxx> wrote:
>>> On 18 August 2012 20:25, Christoffer Dall <c.dall@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>>>> I don't think we should have a vcpu ioctl for this purpose at all, for
>>>> two reasons: 1) this happens asynchronously to the vcpu thread, and 2)
>>>> even a PPI is semantically not a vcpu operation but an operation on
>>>> the gic distributor, which belongs to the VM.
>>>
>>> I'm not at all convinced by (1) as a reason, but (2) is more compelling.
>>
>> I am not saying (1) is something written in stone, but if that's the
>> consensus for things now (as Avi indicates), I don't see why we should
>> come and change it when we can easily build on what is there.
>
> My point is more that if there are operations that are really cpu
> specific, I don't see why we should arbitrarily claim that you can't
> do them with a vcpu ioctl, even if the actual implementation of that
> vcpu ioctl would be perfectly OK with doing it from some other thread
> than the vcpu thread.

Well, if you want to grab locks in the arch-generic part of the vcpu
handler, relying for efficiency on vcpu ioctls always being called from
the vcpu thread, then that sounds like a sensible enough scheme to me;
but then again, I didn't come up with this 'rule'.

>>>> Can't we simply build on what we have and use this scheme:
>>>>
>>>>   | 31 30 29 ... 18 17 16 | 15 | 14 13 12 ... 3 2 1 |    0    |
>>>>   |      vcpu index       |  0 |     SPI / PPI      | I=0/F=1 |
>>>>
>>>> For user space gic and PPI:
>>>>   vcpu_idx = val >> 16;
>>>>   if (vcpu_idx >= online_vcpus)
>>>>           return -EINVAL;
>>>>
>>>> For user space gic:
>>>>   bool irq = !(val & 1);
>>>>   bool fiq = val & 1;
>>>>
>>>> For in-kernel gic:
>>>>   irq = (val >> 1) & 0x7fff;
>>>>   if (irq < 16)
>>>>           return -EINVAL;
>>>
>>> Rather than cramming all of these into a single format, maybe it
>>> would be better to have a 'target' field indicating which array
>>> of input lines you are attempting to assert and an 'index' for
>>> which signal within that array you are asserting. You could then
>>> make the arrays match up with the h/w A15's external signals, which
>>> have the following arrays of inputs:
>>>   IRQS[] : external interrupts to the GIC
>>>   nIRQ[] : IRQ line for each core
>>>   nFIQ[] : FIQ line for each core
>>>   PPI[]  : per-core private peripheral interrupts (not actually
>>>            an external signal on A15 h/w since it's signalling
>>>            between two different bits of the processor)
>>>
>>> Then you don't have to worry about later additions (if any) not
>>> being fittable into the existing scheme.
>>
>> I'm fine with it either way, and I like that we can expand on it in
>> the future; I just want to reach a decision, so let's do what you
>> suggest:
>>
>>   IRQS[992]
>>   nIRQ[256]
>>   nFIQ[256]
>>   PPI[4096]
>>
>> e.g. index = 993 means IRQ line raised on vcpu with index 1
>
> I'm confused about what you mean here. What I had in mind was
> that we'd have a field (eg 16 bits) for "which array" and
> the bottom 16 bits for "index". So (assuming we order the arrays
> as above) raising IRQ for VCPU 1 would be array 1, index 1,
> so 0x10001.

Oh, now I see; yes, that works too. As I said, I honestly don't care
either way.
What I liked about my original approach was that it made the shifts and
masks easy to do, but as long as it is well documented, those bits can
be packed however you like :)

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm