2012/4/12 Mario Raffin <raffin at ermes-cctv.com>

> Dear All,
>
> I am trying to use pjsip on an ARM9 architecture with a sound device and a
> co-processor for echo cancellation and a-law/u-law.
>
> In order to use the co-processor features (echo, a-law, u-law) I see
> different choices:
>
> 1) use the audio driver and modify the Sound Device Port to use my echo
> library and pjmedia for the codecs;
>
> 2) work at the Audio Device API abstraction level, as is done for APS and
> VAS;
>
> 3) work at the Sound Device Port level, implementing my own port whose
> get_frame and put_frame functions provide echo-cancelled PCM audio frames,
> obtained through the co-processor library I have, to the conference
> bridge. Eventually I can then modify pjmedia to perform u-law and a-law
> with the co-processor library.
>
> This last solution appears the simplest, but I am not sure about the
> timings. Since the timings are defined by the audio device, how can I make
> sure that the conference bridge receives frames at the correct cadence? Is
> it sufficient to make a blocking "get_frame" function which, for instance,
> provides 160 samples every 20 ms (at 8 kHz)?

Yes, that should be sufficient. But the problem with solutions 1 and 3 is
that they both require modification to pjmedia, with more modifications in
solution 3 than in solution 1. This is not good in the long run and I always
discourage it. Solution 2 is therefore the cleanest: you can bundle the
audio_dev code with your app and register the devices with the audio_dev
manager at run-time.

-Benny
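
For reference, the 160-sample figure is simply clock_rate * ptime / 1000 =
8000 * 20 / 1000 = 160 samples per frame. Below is a minimal, hypothetical
sketch of what the solution-3 port callbacks could look like; read_ec_pcm()
and write_pcm() stand in for the co-processor driver calls and are not part
of pjmedia:

    #include <pjmedia.h>

    /* Hypothetical driver calls: read_ec_pcm() blocks until one
     * echo-cancelled 20 ms frame has been captured; write_pcm()
     * hands playback PCM to the co-processor. */
    extern pj_status_t read_ec_pcm(pj_int16_t *buf, unsigned samples);
    extern pj_status_t write_pcm(const pj_int16_t *buf, unsigned samples);

    #define CLOCK_RATE        8000
    #define PTIME_MS          20
    #define SAMPLES_PER_FRAME (CLOCK_RATE * PTIME_MS / 1000)  /* = 160 */

    /* get_frame: fills one frame with echo-cancelled PCM.  Blocking on
     * the hardware read is what ties the caller's cadence to the
     * device clock (one frame every 20 ms). */
    static pj_status_t cop_get_frame(pjmedia_port *port,
                                     pjmedia_frame *frame)
    {
        PJ_UNUSED_ARG(port);

        frame->type = PJMEDIA_FRAME_TYPE_AUDIO;
        frame->size = SAMPLES_PER_FRAME * sizeof(pj_int16_t);
        return read_ec_pcm((pj_int16_t*)frame->buf, SAMPLES_PER_FRAME);
    }

    /* put_frame: playback direction, PCM handed to the co-processor. */
    static pj_status_t cop_put_frame(pjmedia_port *port,
                                     pjmedia_frame *frame)
    {
        PJ_UNUSED_ARG(port);

        if (frame->type != PJMEDIA_FRAME_TYPE_AUDIO)
            return PJ_SUCCESS;
        return write_pcm((const pj_int16_t*)frame->buf,
                         (unsigned)(frame->size / sizeof(pj_int16_t)));
    }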
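
For the solution-2 route that Benny recommends, here is a rough sketch of
the run-time registration, assuming the dynamic factory registration API
(pjmedia_aud_register_factory()) is available in the pjsip version in use;
my_coproc_factory_create() and the factory it returns are placeholders that
the application would implement against the pjmedia_aud_dev_factory_op
callbacks:

    #include <pjmedia_audiodev.h>

    /* Constructor for the custom ARM9/co-processor audio backend.  The
     * returned factory's op table (init, get_dev_count, get_dev_info,
     * default_param, create_stream, ...) wraps the co-processor driver
     * and its hardware echo canceller. */
    extern pjmedia_aud_dev_factory*
    my_coproc_factory_create(pj_pool_factory *pf);

    /* Register the factory with the audio device manager at run-time,
     * after the audio subsystem has been initialized (pjsua_init()
     * already does that for pjsua-based apps). */
    static pj_status_t register_coproc_audio_dev(void)
    {
        return pjmedia_aud_register_factory(&my_coproc_factory_create);
    }

Once registered, the custom devices are enumerated by the audio device
manager alongside the built-in backends and can be selected in the usual
way, without any changes to pjmedia itself.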