On Fri, Mar 20, 2009 at 6:54 AM, Hiroshi DOYU <Hiroshi.DOYU@xxxxxxxxx> wrote:
> From: ext Felipe Contreras <felipe.contreras@xxxxxxxxx>
> Subject: Re: [PATCH B 3/3] tidspbridge: decreate timeout to a saner value
> Date: Fri, 20 Mar 2009 01:06:16 +0100
>
>> On Fri, Mar 20, 2009 at 2:00 AM, Guzman Lugo, Fernando <x0095840@xxxxxx> wrote:
>> >
>> > This is a stress test: it creates 4 processes, and each process will do
>> > 1000 transfers using streams, so the trace is:
>> >
>> > STRM_Issue -> WMD_CHNL_AddIOReq -> IO_Schedule
>> >
>> > IO_Schedule schedules a call to IO_DPC using a tasklet.
>> >
>> > IO_DPC -> IO_DispatchChnl -> InputChnl -> CHNLSM_InterruptDSP2
>> >
>> > Also IO_DispatchChnl -> OutputChnl -> CHNLSM_InterruptDSP2.
>> >
>> > Since CHNLSM_InterruptDSP2 can be called a lot in this test, there is a
>> > problem with the timeout. However, running other tests (videos and mp3)
>> > there are no problems. I think we should change it to 10 ms, only to make
>> > sure there is no problem when CHNLSM_InterruptDSP2 is called a lot.
>> >
>> > Let me know if you agree, or if you have any comments about it.
>>
>> Well again, the best way to implement the wait for a slot in the
>> mailbox is with interrupts, not busy-looping. If we busy-loop too much,
>> that would increase the CPU usage, and that would be bad.
>
> I think that s/w queuing of messages would be more efficient, to allow
> multiple senders to continue their work anyway, especially in the case
> of these streaming scenarios.

Indeed. But what would happen if the application is sending messages way
too fast for the DSP to handle? For example, some encoding algorithm might
be too heavy, and if we are in a live situation, like a video call, then
it's ok to drop messages, but user-space needs to be notified of the drops
so it can adjust the quality of service.

But of course some other messages, like control messages (start, stop,
etc.), should never be dropped, so they must be queued.

-- 
Felipe Contreras
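
A rough, hypothetical sketch of what an interrupt-driven wait for a free
mailbox slot could look like. The names (mbox_slot_wq, hw_mbox_fifo_full,
hw_mbox_write, mbox_notfull_irq) are made up for illustration and are not
the real bridge/OMAP mailbox API; also note this only works in process
context, so it could not be called directly from the IO_DPC tasklet:

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(mbox_slot_wq);

/* Illustrative placeholders for the "TX slot full" check and the write. */
static bool hw_mbox_fifo_full(void);
static void hw_mbox_write(u32 msg);

/* Called from the mailbox IRQ handler when a slot frees up. */
static void mbox_notfull_irq(void)
{
	wake_up(&mbox_slot_wq);
}

/* Sleep (instead of spinning) until a slot is free or the timeout expires. */
static int mbox_send(u32 msg, unsigned long timeout_ms)
{
	if (!wait_event_timeout(mbox_slot_wq, !hw_mbox_fifo_full(),
				msecs_to_jiffies(timeout_ms)))
		return -ETIMEDOUT;

	hw_mbox_write(msg);
	return 0;
}

The CPU sleeps in mbox_send() instead of polling the status register, so a
burst of STRM_Issue calls no longer turns into busy-waiting.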
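
And a similarly hypothetical sketch of the queuing policy described above,
where data messages may be dropped when the s/w queue is full (with the
drops counted so user-space can be told to adjust its quality of service)
while control messages are always queued; none of these names come from the
tidspbridge code:

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define SWQ_MAX_DATA	64	/* arbitrary cap on queued data messages */

struct swq_msg {
	struct list_head node;
	u32 payload;
	bool is_control;	/* start/stop/etc.: never dropped */
};

static LIST_HEAD(swq_list);
static DEFINE_SPINLOCK(swq_lock);
static unsigned int swq_data_count;
static unsigned int swq_dropped;	/* reported to user-space for QoS */

static int swq_enqueue(u32 payload, bool is_control)
{
	struct swq_msg *msg;
	unsigned long flags;
	int ret = 0;

	msg = kzalloc(sizeof(*msg), GFP_ATOMIC);
	if (!msg)
		return -ENOMEM;

	msg->payload = payload;
	msg->is_control = is_control;

	spin_lock_irqsave(&swq_lock, flags);
	if (!is_control && swq_data_count >= SWQ_MAX_DATA) {
		/* The DSP can't keep up: drop the message but account for it. */
		swq_dropped++;
		ret = -ENOSPC;
	} else {
		list_add_tail(&msg->node, &swq_list);
		if (!is_control)
			swq_data_count++;
	}
	spin_unlock_irqrestore(&swq_lock, flags);

	if (ret)
		kfree(msg);
	return ret;
}

The sender returns immediately in both cases, so multiple processes doing
streaming transfers never end up spinning on the mailbox, and user-space
can read swq_dropped (however it ends up being exposed) to decide whether
to lower its encoding load.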