On 22/03/2024 23:22, Sean Anderson wrote:
On 3/22/24 14:09, Tomi Valkeinen wrote:
On 22/03/2024 18:18, Sean Anderson wrote:
On 3/22/24 01:32, Tomi Valkeinen wrote:
On 21/03/2024 21:17, Sean Anderson wrote:
On 3/21/24 15:08, Tomi Valkeinen wrote:
On 21/03/2024 20:01, Sean Anderson wrote:
On 3/21/24 13:25, Tomi Valkeinen wrote:
On 21/03/2024 17:52, Sean Anderson wrote:
On 3/20/24 02:53, Tomi Valkeinen wrote:
On 20/03/2024 00:51, Sean Anderson wrote:
Do we need to handle interrupts while either delayed work is being done?
Probably not.
If we do need delayed work, would a single work that handles both
HPD_EVENT and HPD_IRQ be enough, instead of two?
Maybe, but then we'd need to determine which pending events to handle. I
think since we have only two events, it will be easier to just have
separate workqueues.
The less concurrency, the better... which is why it would be nice to do it all in the threaded irq.
Yeah, but we can use a mutex for this, which means there is not much
interesting going on concurrency-wise.
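Something like this is what I have in mind (a rough sketch only; the
field and function names are illustrative, not the current driver code):

struct zynqmp_dp {
	struct mutex lock;	/* protects the HPD/link state below */
	struct work_struct hpd_event_work;
	struct work_struct hpd_irq_work;
};

static void zynqmp_dp_hpd_event_work_func(struct work_struct *work)
{
	struct zynqmp_dp *dp = container_of(work, struct zynqmp_dp,
					    hpd_event_work);

	mutex_lock(&dp->lock);
	/* Re-check the HPD state, retrain the link, notify DRM, ... */
	mutex_unlock(&dp->lock);
}

The irq handler would only schedule the works; everything the works
touch is under dp->lock.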
Ok. Yep, if we get (hopefully) a single mutex with a clearly defined set of fields that it protects, I'm ok with workqueues.
I'd still prefer just one workqueue, though...
Yeah, but then we need a spinlock or something to tell the workqueue what it should do.
Yep. We could also always look at the HPD state in the wq (if we drop the big sleeps), and have a flag for the HPD IRQ, which would reduce the shared state to a single bit.
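As a rough sketch of that single-bit idea (assuming a single hpd_work
plus an unsigned long hpd_irq_pending in struct zynqmp_dp; the two
handle functions are hypothetical placeholders):

static void zynqmp_dp_hpd_work_func(struct work_struct *work)
{
	struct zynqmp_dp *dp = container_of(work, struct zynqmp_dp,
					    hpd_work);

	/* Always look at the current HPD state... */
	zynqmp_dp_handle_hpd_event(dp);

	/* ...and handle a short HPD pulse only if the irq flagged one. */
	if (test_and_clear_bit(0, &dp->hpd_irq_pending))
		zynqmp_dp_handle_hpd_irq(dp);
}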
How about something like
zynqmp_dp_irq_handler(...)
{
	/* Read status and handle underflow/overflow/vblank */

	status &= ZYNQMP_DP_INT_HPD_EVENT | ZYNQMP_DP_INT_HPD_IRQ;
	if (status) {
		atomic_or(status, &dp->status);
		return IRQ_WAKE_THREAD;
	}

	return IRQ_HANDLED;
}

zynqmp_dp_thread_handler(...)
{
	status = atomic_xchg(&dp->status, 0);

	/* process HPD stuff */
}
which gets rid of the workqueue too.
I like it. We can't use IRQF_ONESHOT, as that would keep the irq masked while the threaded handler is being run. I don't think that's a problem, but it's something to keep in mind that both handlers can run concurrently.
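For reference, registering the two handlers would then look roughly like
this (dp->dev, dp->irq and the devname string are assumed names, and
error handling is omitted):

	ret = devm_request_threaded_irq(dp->dev, dp->irq,
					zynqmp_dp_irq_handler,
					zynqmp_dp_thread_handler,
					0 /* note: no IRQF_ONESHOT */,
					"zynqmp-dp", dp);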
Actually, I'm not sure we can do it like this. Imagine we have something
like
CPU 0                                    CPU 1
zynqmp_dp_thread_handler()
  atomic_xchg()
                                         __handle_irq_event_percpu()
                                           zynqmp_dp_irq_handler()
                                             atomic_or()
                                             return IRQ_WAKE_THREAD
                                           __irq_wake_thread()
                                             test_and_set_bit(IRQTF_RUNTHREAD, ...)
                                           return
  return IRQ_HANDLED
and whoops, we now have bits set in dp->status but the thread isn't
running. I don't think there's a way to fix this without locking (or two
workqueues).
In your example above, the IRQTF_RUNTHREAD flag has been cleared by the
threaded-irq core before zynqmp_dp_thread_handler() is called. So the
hard-irq will set that flag before zynqmp_dp_thread_handler() returns.

When zynqmp_dp_thread_handler() returns, execution goes back to
irq_wait_for_interrupt(). That function will notice the IRQTF_RUNTHREAD
flag (and clear it), and run zynqmp_dp_thread_handler() again.
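So, extending your trace with the kernel side (schematically;
irq_thread() and irq_wait_for_interrupt() live in kernel/irq/manage.c):

CPU 0                                    CPU 1
irq_thread()
  irq_wait_for_interrupt()
    test_and_clear_bit(IRQTF_RUNTHREAD)
  zynqmp_dp_thread_handler()
    atomic_xchg()
                                         zynqmp_dp_irq_handler()
                                           atomic_or()
                                           return IRQ_WAKE_THREAD
                                         __irq_wake_thread()
                                           test_and_set_bit(IRQTF_RUNTHREAD, ...)
    return IRQ_HANDLED
  irq_wait_for_interrupt()
    sees IRQTF_RUNTHREAD set, clears it
  zynqmp_dp_thread_handler()             <- fresh run, sees the new bits
    atomic_xchg()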
So if I'm not mistaken, when the hard-irq handler returns
IRQ_WAKE_THREAD, it's always guaranteed that a "fresh" run of the
threaded handler will follow.
I think that makes sense, as I'm not sure how threaded handlers without
IRQF_ONESHOT could be used if that were not the case. I hope I'm right
in my analysis =).
Tomi