On Wed, 9 Jan 2019 15:49:56 +0100
Pierre Morel <pmorel@xxxxxxxxxxxxx> wrote:

> On 09/01/2019 14:10, Halil Pasic wrote:
> > On Wed, 9 Jan 2019 13:14:17 +0100
> > Pierre Morel <pmorel@xxxxxxxxxxxxx> wrote:
> > 
> >> On 08/01/2019 16:21, Michael Mueller wrote:
> >>> 
> >>> 
> >>> On 08.01.19 13:59, Halil Pasic wrote:
> >>>> On Wed, 19 Dec 2018 20:17:54 +0100
> >>>> Michael Mueller <mimu@xxxxxxxxxxxxx> wrote:
> >>>> 
> >>>>> This function processes the Gib Alert List (GAL). It is required
> >>>>> to run when either a gib alert interruption has been received or
> >>>>> a gisa that is in the alert list is cleared or dropped.
> >>>>>
> >>>>> The GAL is built up by millicode when the respective ISC bit is
> >>>>> set in the Interruption Alert Mask (IAM) and an interruption of
> >>>>> that class is observed.
> >>>>>
> >>>>> Signed-off-by: Michael Mueller <mimu@xxxxxxxxxxxxx>
> >>>>> ---
> >>>>>  arch/s390/kvm/interrupt.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++
> >>>>>  1 file changed, 140 insertions(+)
> >>>>>
> >>>>> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
> >>>>> index 48a93f5e5333..03e7ba4f215a 100644
> >>>>> --- a/arch/s390/kvm/interrupt.c
> >>>>> +++ b/arch/s390/kvm/interrupt.c
> >>>>> @@ -2941,6 +2941,146 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, __u8 __user *buf, int len)
> >>>>>  	return n;
> >>>>>  }
> >>>>> +static int __try_airqs_kick(struct kvm *kvm, u8 ipm)
> >>>>> +{
> >>>>> +	struct kvm_s390_float_interrupt *fi = &kvm->arch.float_int;
> >>>>> +	struct kvm_vcpu *vcpu = NULL, *kick_vcpu[MAX_ISC + 1];
> >>>>> +	int online_vcpus = atomic_read(&kvm->online_vcpus);
> >>>>> +	u8 ioint_mask, isc_mask, kick_mask = 0x00;
> >>>>> +	int vcpu_id, kicked = 0;
> >>>>> +
> >>>>> +	/* Loop over vcpus in WAIT state. */
> >>>>> +	for (vcpu_id = find_first_bit(fi->idle_mask, online_vcpus);
> >>>>> +	     /* Until all pending ISCs have a vcpu open for airqs. */
> >>>>> +	     (~kick_mask & ipm) && vcpu_id < online_vcpus;
> >>>>> +	     vcpu_id = find_next_bit(fi->idle_mask, online_vcpus, vcpu_id)) {
> >>>>> +		vcpu = kvm_get_vcpu(kvm, vcpu_id);
> >>>>> +		if (psw_ioint_disabled(vcpu))
> >>>>> +			continue;
> >>>>> +		ioint_mask = (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
> >>>>> +		for (isc_mask = 0x80; isc_mask; isc_mask >>= 1) {
> >>>>> +			/* ISC pending in IPM ? */
> >>>>> +			if (!(ipm & isc_mask))
> >>>>> +				continue;
> >>>>> +			/* vcpu for this ISC already found ? */
> >>>>> +			if (kick_mask & isc_mask)
> >>>>> +				continue;
> >>>>> +			/* vcpu open for airq of this ISC ? */
> >>>>> +			if (!(ioint_mask & isc_mask))
> >>>>> +				continue;
> >>>>> +			/* use this vcpu (for all ISCs in ioint_mask) */
> >>>>> +			kick_mask |= ioint_mask;
> >>>>> +			kick_vcpu[kicked++] = vcpu;
> >>>> 
> >>>> Assuming that the vcpu can/will take all ISCs it's currently open for
> >>>> does not seem right. We kind of rely on this assumption here, don't we?
> >> 
> >> Why does it not seem right?
> >> 
> > 
> > When an interrupt is delivered, a psw-swap takes place. The new-psw
> > may fence IO interrupts. Thus, for example, if we have the vcpu open for
> > all ISCs and ISCs 0, 1 and 2 pending, we may end up delivering only 0,
> > if the psw-swap corresponding to delivering 0 closes the vcpu for IO
> > interrupts. After the guest has control, we don't have control over
> > the rest of the story.
> 
> OK, I think I understand your concern: waking up a single waiting vCPU
> per ISC is not enough.
> We must wake all vCPUs in wait state that have at least one matching ISC bit.
> 

That is not what I was trying to say, and IMHO it generally also isn't
true that we must. But I may be missing something.

Regards,
Halil
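
P.S. To make the psw-swap scenario above concrete, here is a tiny
user-space sketch of the mask arithmetic (not kernel code). The names
ipm, ioint_mask and kick_mask mirror the patch; the concrete values and
the assumption that only ISC 0 actually gets delivered are made up for
illustration.

#include <stdio.h>

int main(void)
{
	/* In the IPM, ISC 0 corresponds to bit 0x80, ISC 7 to bit 0x01. */
	unsigned char ipm        = 0xe0;  /* ISCs 0, 1 and 2 pending         */
	unsigned char ioint_mask = 0xff;  /* vcpu open for all ISCs (gcr[6]) */
	unsigned char kick_mask  = 0x00;

	/* The patch credits the kicked vcpu with every ISC it is open for. */
	kick_mask |= ioint_mask;

	/*
	 * But if delivering ISC 0 loads a new PSW that fences IO
	 * interrupts, only ISC 0 is actually taken; ISCs 1 and 2 stay
	 * pending although kick_mask claims they are covered.
	 */
	unsigned char delivered = 0x80;

	printf("claimed covered: 0x%02x\n", (unsigned)(kick_mask & ipm));
	printf("still pending:   0x%02x\n", (unsigned)(ipm & ~delivered));
	return 0;
}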