Re: [kvm-unit-tests PATCH v3 3/3] lib: s390x: better smp interrupt checks

On 7/13/22 12:45, Claudio Imbrenda wrote:
Use per-CPU flags and callbacks for Program and External interrupts,
instead of global variables.

This allows for more accurate error handling; a CPU waiting for an
interrupt will not have it "stolen" by a different CPU that was not
supposed to wait for one, and now two CPUs can wait for interrupts at
the same time.

This will significantly improve error reporting and debugging when
things go wrong.

Both program interrupts and external interrupts are now CPU-bound, even
though some external interrupts are floating (notably, the SCLP
interrupt). In those cases, the testcases should mask and/or expect the
interrupts as appropriate.

Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxx>
---
  lib/s390x/asm/arch_def.h | 16 ++++++++++-
  lib/s390x/smp.h          |  8 +-----
  lib/s390x/interrupt.c    | 57 +++++++++++++++++++++++++++++-----------
  lib/s390x/smp.c          | 11 ++++++++
  4 files changed, 69 insertions(+), 23 deletions(-)
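
As an aside, this is roughly how I read the intended usage after this
patch (a sketch only, assuming the existing expect_pgm_int() and
check_pgm_int_code() helpers now record the expectation in THIS_CPU
instead of in a global):

	static void test_operation_exception(void)
	{
		/* record the expectation in this CPU's struct cpu */
		expect_pgm_int();
		/* provoke an operation exception on this CPU */
		asm volatile(".insn e,0x0000");
		/* a program interrupt on any other CPU still counts as unexpected */
		check_pgm_int_code(PGM_INT_CODE_OPERATION);
	}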

diff --git a/lib/s390x/asm/arch_def.h b/lib/s390x/asm/arch_def.h
index b3282367..03578277 100644
--- a/lib/s390x/asm/arch_def.h
+++ b/lib/s390x/asm/arch_def.h
@@ -41,6 +41,17 @@ struct psw {
  	uint64_t	addr;
  };
+struct cpu {
+	struct lowcore *lowcore;
+	uint64_t *stack;
+	void (*pgm_cleanup_func)(void);

We should change the parameter to include the stack frame for easier manipulation of the pre-exception registers, especially the CRs.
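
Something like this is what I have in mind (only a sketch, assuming we
pass the struct stack_frame_int that the interrupt entry code already
builds):

	/* give the cleanup callback access to the interrupted context */
	void (*pgm_cleanup_func)(struct stack_frame_int *stack);

That way the callback can adjust the state saved at interrupt entry
instead of poking at the lowcore directly.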

+	uint16_t addr;
+	uint16_t idx;
+	bool active;
+	bool pgm_int_expected;
+	bool ext_int_expected;
+};

And I'd opt for also integrating the I/O handling function and getting rid of the unset function, so they all look the same (see the sketch below).

Looking at Nico's patches, the external handler will follow soon anyway.
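
I.e. something like this (hypothetical, the ext_*/io_* names are just for
illustration):

	struct cpu {
		struct lowcore *lowcore;
		uint64_t *stack;
		void (*pgm_cleanup_func)(struct stack_frame_int *stack);
		void (*ext_cleanup_func)(struct stack_frame_int *stack);
		void (*io_cleanup_func)(struct stack_frame_int *stack);
		uint16_t addr;
		uint16_t idx;
		bool active;
		bool pgm_int_expected;
		bool ext_int_expected;
		bool io_int_expected;
	};

register_io_int_func() would then simply assign the per-CPU pointer, and
unregistering is just setting it back to NULL.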


I'm not 100% happy with having this struct in this file; what kept you from including smp.h?

+struct lowcore *smp_get_lowcore(uint16_t idx)
+{
+	if (THIS_CPU->idx == idx)
+		return &lowcore;
+
+	check_idx(idx);
+	return cpus[idx].lowcore;
+}

I'm waiting for the moment when we need locking in the struct cpu.
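
Not needed yet, but for the record, this is the kind of cross-CPU access I
expect smp_get_lowcore() to be used for (a sketch; I'm assuming the
existing pgm_int_code lowcore field here):

	/* e.g. on the boot CPU, after the secondary at index 1 has stopped */
	struct lowcore *lc = smp_get_lowcore(1);

	report(lc->pgm_int_code == PGM_INT_CODE_ADDRESSING,
	       "secondary took the expected addressing exception");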

+
  int smp_sigp(uint16_t idx, uint8_t order, unsigned long parm, uint32_t *status)
  {
  	check_idx(idx);
@@ -253,6 +262,7 @@ static int smp_cpu_setup_nolock(uint16_t idx, struct psw psw)
  	/* Copy all exception psws. */
  	memcpy(lc, cpus[0].lowcore, 512);
+	lc->this_cpu = &cpus[idx];
  	/* Setup stack */
  	cpus[idx].stack = (uint64_t *)alloc_pages(2);
@@ -325,6 +335,7 @@ void smp_setup(void)
  	for (i = 0; i < num; i++) {
  		cpus[i].addr = entry[i].address;
  		cpus[i].active = false;
+		cpus[i].idx = i;
  		/*
  		 * Fill in the boot CPU. If the boot CPU is not at index 0,
  		 * swap it with the one at index 0. This guarantees that the



