Hi,

On Sun, Oct 20, 2019 at 05:37:19PM +0900, Takashi Sakamoto wrote:
> On Sat, Oct 19, 2019 at 09:22:16AM +0200, Takashi Iwai wrote:
> > Although the preempt handling in AMDTP looks a bit suspicious, it
> > should be OK as long as the code has been tested, so I took as is.
>
> I can understand your concern, but it works well as far as I have
> tested. This time I disable preemption while processing the queued
> packets in process context; previously I disabled IRQs instead.
>
> I depict the call graph for processing isoc packets in process
> context below:
>
> On pcm.ack() or pcm.pointer() callbacks:
> fw_iso_context_flush_completions()
> ->struct fw_card_driver.flush_iso_completions()
>   = ohci_flush_iso_completions()
>   ->tasklet_disable()
>   ->context_tasklet()
>     (read registers on 1394 OHCI controller to get info for queued packets)
>     ->flush_iso_completions()
>       ->struct fw_iso_context.callback.sc()
>         = amdtp_master_callback()
>         ->out_stream_callback() (for irq-target)
>           ->fw_iso_context_flush_completions()
>             ->...
>               ->out_stream_callback() or in_stream_callback() (for non irq-target)
>   ->tasklet_enable()
>
> The tasklet of the IT context for the irq-target AMDTP stream is
> temporarily disabled while the isoc packets queued for the context are
> flushed, so the tasklet does not run even if it is queued in hw IRQ
> context. As for processing the packets queued for several isoc
> contexts in the same domain, I have no concern even without any
> irq-flag handling for the local CPU.
>
> The 1394 OHCI controller still continues the isoc cycles during the
> above call graph (one cycle per 125 usec interval, according to the
> 24.576 MHz clock). In practice, even without any interruption, the
> number of packets queued for a non-irq-target AMDTP stream can differ
> from the number for the irq-target AMDTP stream by one or two packets.
> If an interruption occurs after the queued packets for the irq-target
> stream have been processed, a largely different number of queued
> packets is likely to be processed for the rest of the AMDTP streams in
> the same domain after resuming. It is desirable not to produce such a
> count gap between streams in the same domain; I leave that for future
> work.
>
> For now the count gap is allowed. I disable kernel preemption to avoid
> the interruption while still catching hw IRQs from the 1394 OHCI
> controller (and the other hardware).
>
> On another point, there is a race condition in flushing the queued
> packets at runtime between several PCM substreams for AMDTP streams in
> the same domain. ALSA PCM core executes the pcm.pointer and pcm.ack
> callbacks under the spin lock of the runtime of the PCM substream;
> thus the race is controlled for operations on a single PCM substream.
> However, several PCM substreams are associated with the same domain
> via AMDTP streams. At present I add no solution for this race.

I now realize that this race is managed as well, by a call of
test_and_set_bit_lock(). When operations on several PCM substreams call
pcm.pointer or pcm.ack simultaneously, only one of them can flush the
queued packets. I refine the above call graph:

On pcm.ack() or pcm.pointer() callbacks:
fw_iso_context_flush_completions()
->struct fw_card_driver.flush_iso_completions()
  = ohci_flush_iso_completions()
  ->tasklet_disable()
  if (!test_and_set_bit_lock())
    ->context_tasklet()
      (read registers on 1394 OHCI controller to get info for queued packets)
      ->flush_iso_completions()
        ->struct fw_iso_context.callback.sc()
          = amdtp_master_callback()
          ->out_stream_callback() (for irq-target)
            ->fw_iso_context_flush_completions()
              ->...
                ->out_stream_callback() or in_stream_callback() (for non irq-target)
    ->clear_bit_unlock()
  ->tasklet_enable()
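To make the guard concrete, below is a minimal C sketch of the pattern
in the refined call graph. The struct and function names (iso_ctx,
flushing, flush_completions) are hypothetical stand-ins of mine, not
the actual driver symbols; tasklet_disable()/tasklet_enable() and
test_and_set_bit_lock()/clear_bit_unlock() are the standard kernel
primitives, and the tasklet func/data members match the 2019-era
tasklet API:

#include <linux/interrupt.h>
#include <linux/bitops.h>

/* Hypothetical stand-in for the driver's isoc context. */
struct iso_ctx {
        struct tasklet_struct tasklet;  /* bottom half for hw IRQ */
        unsigned long flushing;         /* guard bit for process-context flush */
};

static void flush_completions(struct iso_ctx *ctx)
{
        /* Keep the tasklet from running even if a hw IRQ queues it. */
        tasklet_disable(&ctx->tasklet);

        /*
         * Only the first caller flushes; a concurrent pcm.pointer or
         * pcm.ack from another PCM substream in the same domain just
         * skips the flush instead of racing on it.
         */
        if (!test_and_set_bit_lock(0, &ctx->flushing)) {
                /* Run the tasklet body directly in process context. */
                ctx->tasklet.func(ctx->tasklet.data);

                clear_bit_unlock(0, &ctx->flushing);
        }

        tasklet_enable(&ctx->tasklet);
}

Note that tasklet_disable() waits until an already-running tasklet
instance finishes, so after it returns the softirq side cannot enter
the callback chain until tasklet_enable(); the bit lock then serializes
the process-context callers among themselves.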
Regards

Takashi Sakamoto