On 8/30/2022 5:25 PM, Manivannan Sadhasivam wrote:
<SNIP>...
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 21b3ac2a29d2..042afec1cf9d 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -487,8 +487,9 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
 	if (affinity) {
 		if (affinity->is_managed) {
-			flags = IRQD_AFFINITY_MANAGED |
-				IRQD_MANAGED_SHUTDOWN;
+//			flags = IRQD_AFFINITY_MANAGED |
+//				IRQD_MANAGED_SHUTDOWN;
+			flags = 0;//IRQD_AFFINITY_MANAGED |
 		}
 		mask = &affinity->mask;
 		node = cpu_to_node(cpumask_first(mask));
The only solution I can think of is keeping the clocks related to DBI access
active during suspend, or switching to another clock source that consumes less
power, if one is available.
But limiting DBI access with hacks like this doesn't look good.
Why not just define irq_startup and irq_shutdown callbacks for dw_pcie_msi_irq_chip?
Then, when a CPU is offlined and irq_shutdown is called for that irqchip in migrate_one_irq(),
you would mask the IRQ and then disable the clocks. Similarly, on CPU onlining, you would
enable the clocks and unmask the IRQ. This way XO shutdown is still achieved, since you
turn the clocks off before suspend and back on after resume.
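
Something along these lines is what I have in mind (untested sketch, just to
illustrate the idea). dw_pcie_dbi_clk_on()/dw_pcie_dbi_clk_off() are hypothetical
placeholders for whatever hook the platform driver exposes to gate the DBI clocks,
and I'm assuming the chip data for these IRQs is the dw_pcie_rp (pcie_port on older
kernels), as it is for the bottom chip in pcie-designware-host.c; the mask/unmask
helper names may differ depending on the kernel version:

static unsigned int dw_pcie_msi_irq_startup(struct irq_data *d)
{
	struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);

	/* Ungate the clocks needed for DBI access before touching registers */
	dw_pcie_dbi_clk_on(pp);		/* hypothetical platform hook */

	/* Now it is safe to unmask via the usual DBI write */
	dw_pci_bottom_unmask(d);

	return 0;
}

static void dw_pcie_msi_irq_shutdown(struct irq_data *d)
{
	struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);

	/* Mask first, while the clocks are still running */
	dw_pci_bottom_mask(d);

	/* Then gate the DBI clocks so XO shutdown can happen */
	dw_pcie_dbi_clk_off(pp);	/* hypothetical platform hook */
}

These would just need to be wired up as .irq_startup/.irq_shutdown in the chip
definition, so no changes to the core genirq code (like the alloc_descs() hack
above) would be needed.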
Thanks,
Sai