Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks

On 2020-02-20 03:11, Zenghui Yu wrote:
Hi Marc,

On 2020/2/18 23:31, Marc Zyngier wrote:
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 7656b353a95f..0ed286dba827 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -144,7 +144,7 @@ struct event_lpi_map {
      u16            *col_map;
      irq_hw_number_t        lpi_base;
      int            nr_lpis;
-    raw_spinlock_t        vlpi_lock;
+    raw_spinlock_t        map_lock;

So we use map_lock to protect both the LPI and VLPI mappings (and their
affinity) of a device, and vpe_lock to protect the vPE's affinity, OK.

      struct its_vm        *vm;
      struct its_vlpi_map    *vlpi_maps;
      int            nr_vlpis;
@@ -240,15 +240,33 @@ static struct its_vlpi_map *get_vlpi_map(struct irq_data *d)
      return NULL;
  }

-static int irq_to_cpuid(struct irq_data *d)
+static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
  {
-    struct its_device *its_dev = irq_data_get_irq_chip_data(d);
      struct its_vlpi_map *map = get_vlpi_map(d);
+    int cpu;

-    if (map)
-        return map->vpe->col_idx;
+    if (map) {
+        raw_spin_lock_irqsave(&map->vpe->vpe_lock, *flags);
+        cpu = map->vpe->col_idx;
+    } else {
+        struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+        raw_spin_lock_irqsave(&its_dev->event_map.map_lock, *flags);
+        cpu = its_dev->event_map.col_map[its_get_event_id(d)];
+    }

-    return its_dev->event_map.col_map[its_get_event_id(d)];
+    return cpu;
+}

This helper is correct for normal LPIs and VLPIs, but wrong for the
per-vPE IRQ (doorbell) and for vSGIs: for both of those,
irq_data_get_irq_chip_data() returns a struct its_vpe rather than a
struct its_device, so the else branch dereferences the wrong structure.

Yes, I've fixed that in the current state of the tree last week. Do have a
look if you can; it seems to survive both on the model with v4.1 and on
my D05.

[...]

-        rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
+        cpu = irq_to_cpuid_lock(d, &flags);
+        rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
         gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
         wait_for_syncr(rdbase);
+        irq_to_cpuid_unlock(d, flags);
     } else {
         its_vpe_send_cmd(vpe, its_send_inv);
     }

Do we really need to grab the vpe_lock for callbacks that belong to the
same irqchip as its_vpe_set_affinity()? The IRQ core code should
already ensure mutual exclusion among them, right?

I've been trying to think about that, but jet-lag keeps getting in the way. I empirically think that you are right, but I need to go and check the various code paths to be sure. Hopefully I'll have a bit more brain space next week.

For sure this patch tries to do too many things at once...

        M.
--
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
