Re: [RFC PATCH v1 2/6] KVM: s390: Reject SIGP when destination CPU is busy

On 11.10.21 09:52, Thomas Huth wrote:
On 11/10/2021 09.43, Christian Borntraeger wrote:
On 11.10.21 at 09:27, Thomas Huth wrote:
On 08/10/2021 22.31, Eric Farman wrote:
With KVM_CAP_S390_USER_SIGP enabled, most orders are handled by userspace.
However, some orders (such as STOP or STOP AND STORE STATUS) end up
injecting work back into the kernel. Userspace itself should (and QEMU
does) look for this conflict, and reject additional (non-reset) orders
until this work completes.

But there's no need to wait. If the kernel knows about the STOP IRQ
that is already in progress, the newly requested SIGP order can be
rejected with a BUSY condition right up front.

Signed-off-by: Eric Farman <farman@xxxxxxxxxxxxx>
Reviewed-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>
---
   arch/s390/kvm/sigp.c | 43 +++++++++++++++++++++++++++++++++++++++++++
   1 file changed, 43 insertions(+)

diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
index cf4de80bd541..6ca01bbc72cf 100644
--- a/arch/s390/kvm/sigp.c
+++ b/arch/s390/kvm/sigp.c
@@ -394,6 +394,45 @@ static int handle_sigp_order_in_user_space(struct kvm_vcpu *vcpu, u8 order_code,
 	return 1;
 }
 
+static int handle_sigp_order_is_blocked(struct kvm_vcpu *vcpu, u8 order_code,
+					u16 cpu_addr)
+{
+	struct kvm_vcpu *dst_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);
+	int rc = 0;
+
+	/*
+	 * SIGP orders directed at invalid vcpus are not blocking,
+	 * and should not return busy here. The code that handles
+	 * the actual SIGP order will generate the "not operational"
+	 * response for such a vcpu.
+	 */
+	if (!dst_vcpu)
+		return 0;
+
+	/*
+	 * SIGP orders that process a flavor of reset would not be
+	 * blocked through another SIGP on the destination CPU.
+	 */
+	if (order_code == SIGP_CPU_RESET ||
+	    order_code == SIGP_INITIAL_CPU_RESET)
+		return 0;
+
+	/*
+	 * Any other SIGP order could race with an existing SIGP order
+	 * on the destination CPU, and thus encounter a busy condition
+	 * on the CPU processing the SIGP order. Reject the order at
+	 * this point, rather than racing with the STOP IRQ injection.
+	 */
+	spin_lock(&dst_vcpu->arch.local_int.lock);
+	if (kvm_s390_is_stop_irq_pending(dst_vcpu)) {
+		kvm_s390_set_psw_cc(vcpu, SIGP_CC_BUSY);
+		rc = 1;
+	}
+	spin_unlock(&dst_vcpu->arch.local_int.lock);
+
+	return rc;
+}
+
 int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
 {
 	int r1 = (vcpu->arch.sie_block->ipa & 0x00f0) >> 4;
@@ -408,6 +447,10 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
 
 	order_code = kvm_s390_get_base_disp_rs(vcpu, NULL);
+
+	if (handle_sigp_order_is_blocked(vcpu, order_code, cpu_addr))
+		return 0;
+
 	if (handle_sigp_order_in_user_space(vcpu, order_code, cpu_addr))
 		return -EOPNOTSUPP;

We've been bitten quite a few times in the past already by doing too
much control logic in the kernel instead of doing it in QEMU, where we
should have a more complete view of the state ... so I'm feeling quite
uneasy about adding this in front of the "return -EOPNOTSUPP" here ...
Did you see any performance issues that would justify this change?

It does at least handle this case for simple userspaces that don't
enable KVM_CAP_S390_USER_SIGP.

For that case, I'd prefer to swap the order here: do the "if
handle_sigp_order_in_user_space return -EOPNOTSUPP" check first, and
the "if handle_sigp_order_is_blocked return 0" check afterwards (rough
sketch below).

... unless we feel really, really sure that it's always OK to do it
like in this patch ... but as I said, we've been bitten by such things
a couple of times already, so I'd suggest we better play it safe ...
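
Roughly something like this (just a sketch, reusing the two helpers
from the patch above unchanged, only with the two calls swapped):

	order_code = kvm_s390_get_base_disp_rs(vcpu, NULL);

	/* With KVM_CAP_S390_USER_SIGP, let userspace see the order first ... */
	if (handle_sigp_order_in_user_space(vcpu, order_code, cpu_addr))
		return -EOPNOTSUPP;

	/*
	 * ... and only apply the in-kernel busy check to orders that the
	 * kernel is going to handle itself.
	 */
	if (handle_sigp_order_is_blocked(vcpu, order_code, cpu_addr))
		return 0;

That way the new busy check would only ever trigger for userspaces that
did not enable KVM_CAP_S390_USER_SIGP, and nothing changes for a
SIGP-aware userspace like QEMU.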

As raised in the QEMU series, I wonder if it's cleaner for user space to
mark the target CPU as busy/!busy for SIGP while it is processing an
order. We'll need a new VCPU ioctl, but it sounds conceptually cleaner
to me ...
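
Very rough sketch of what I have in mind on the userspace side -- the
ioctl below is completely made up for illustration, nothing like it
exists in <linux/kvm.h> today, and error handling is omitted:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical vcpu ioctl with a placeholder number -- NOT a real KVM API. */
#define KVM_S390_SET_SIGP_BUSY	_IOW(KVMIO, 0xf0, __u32)

/* Mark the target vcpu busy for SIGP while userspace processes an order. */
static void process_sigp_stop_order(int dst_vcpu_fd)
{
	__u32 busy = 1;

	/*
	 * The kernel would answer further SIGP orders aimed at this vcpu
	 * with CC 2 (busy) until the flag is cleared again.
	 */
	ioctl(dst_vcpu_fd, KVM_S390_SET_SIGP_BUSY, &busy);

	/* ... inject the STOP / STORE STATUS work and wait for it ... */

	busy = 0;
	ioctl(dst_vcpu_fd, KVM_S390_SET_SIGP_BUSY, &busy);
}

The point is just that userspace, which has the complete view of the
SIGP state, would explicitly tell the kernel when to report BUSY,
instead of the kernel inferring it from a pending STOP IRQ.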
--
Thanks,

David / dhildenb



