PJMEDIA conference bridge - direct port?

If it helps, I am using the conference bridge in more or less the 
exact manner of: pjsip-apps/src/samples/mix.c.

I know this part works, too, when I read from WAV files.  But the WAV 
player has a pjmedia_port that the bridge can read from.  My whole 
problem is that when reading RTP directly from the captures, I get raw 
frames, and no pjmedia_port to encapsulate them.  The conference 
bridge API doesn't provide me with a means of throwing raw PCM frames 
at it.  I assume the splitter/combiner is indeed the way to solve that 
problem, based on what you have said, but I seem not to be using it 
correctly.
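
In case it helps to be concrete, this is the shape of what I am attempting
with the splitter/combiner. It is only a sketch, not my exact code: the
add_pcm_feed() name is mine, and I am assuming (from my reading of
splitcomb.h) that frames put to the splitcomb's main port come back out of
the reverse channel's get_frame(), which is what the bridge calls:

#include <pjlib.h>
#include <pjmedia.h>

/* Sketch: one single-channel split/combiner per RTP direction.  The
 * reverse channel is what gets registered with the bridge; decoded
 * PCM is pushed by calling put_frame() on the splitcomb's main port. */
static pj_status_t add_pcm_feed(pj_pool_t *pool, pjmedia_conf *conf,
                                unsigned clock_rate,
                                unsigned samples_per_frame,
                                pjmedia_port **p_feed, unsigned *p_slot)
{
    pjmedia_port *splitcomb, *rev;
    pj_status_t status;

    status = pjmedia_splitcomb_create(pool, clock_rate,
                                      1 /* channels */, samples_per_frame,
                                      16 /* bits per sample */, 0 /* options */,
                                      &splitcomb);
    if (status != PJ_SUCCESS) return status;

    /* The bridge drives this port with get_frame() during mixing. */
    status = pjmedia_splitcomb_create_rev_channel(pool, splitcomb,
                                                  0 /* channel */, 0 /* options */,
                                                  &rev);
    if (status != PJ_SUCCESS) return status;

    status = pjmedia_conf_add_port(conf, pool, rev, NULL, p_slot);
    if (status != PJ_SUCCESS) return status;

    *p_feed = splitcomb;  /* I call pjmedia_port_put_frame() on this one. */
    return PJ_SUCCESS;
}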

On 12/19/2011 08:54 AM, Alex Balashov wrote:

> Hi Alain,
>
> On 12/19/2011 08:39 AM, Alain Totouom wrote:
>
>> What does the stack say:
>>
>> frame->size = ?
>> this_port->info.bytes_per_frame = ?
>
> frame->size is 320. I can't seem to get the value for bytes_per_frame
> for some reason.
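
(For the record, this is the kind of dump I mean to add right before the
put_frame() call -- just a sketch, assuming the 1.x pjmedia_port_info
field names, with this_port standing for whatever port the frame is being
written to:)

PJ_LOG(3, ("mix", "frame->size=%u, bytes_per_frame=%u, "
           "samples_per_frame=%u, clock_rate=%u",
           (unsigned)frame->size,
           this_port->info.bytes_per_frame,
           this_port->info.samples_per_frame,
           this_port->info.clock_rate));
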
>
>> Are you writing (PCAP/RTP) partial frames to port?
>
> No, they are full frames as far as I know. I know this part works,
> because previously I wrote the two streams out as separate WAV files,
> re-read them, and mixed them back together with the conference bridge.
> I am just trying to perform the mixing in memory now to solve some
> timing issues. I haven't changed the RTP processing code at all, other
> than the port to which the frames are put.
>
>> All pjmedia_port_put_frame calls to the reverse channel are faulty?
>
> It appears that way.
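
(To be concrete, each decoded packet ends up in a pjmedia_frame roughly
like this before the call -- a sketch with placeholder names: port is the
reverse-channel port, pcm_buf/pcm_len hold the decoded 20 ms of PCM, and
ts is a pj_timestamp I advance by 160 samples per frame:)

pjmedia_frame frame;
char errmsg[PJ_ERR_MSG_SIZE];
pj_status_t status;

pj_bzero(&frame, sizeof(frame));
frame.type      = PJMEDIA_FRAME_TYPE_AUDIO;
frame.buf       = pcm_buf;   /* 320 bytes of 8 kHz, 16-bit mono PCM */
frame.size      = pcm_len;   /* should equal the port's bytes_per_frame */
frame.timestamp = ts;

status = pjmedia_port_put_frame(port, &frame);
if (status != PJ_SUCCESS) {
    pj_strerror(status, errmsg, sizeof(errmsg));
    PJ_LOG(3, ("mix", "put_frame() failed: %s", errmsg));
}
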
>
>> The custom port is NOT needed in your case since it would just
>> forward your put_frame/get_frame to the reverse channel. It's more of
>> a design choice.
>
> Ah, I see.
>
>> Hm, I'm lacking some info: your bridge is created without a sound
>> device, so who is the timing provider? A null sound device & null port?
>
> I am not sure how to answer that; I assume that the master port has
> some sort of internal clock.
>
> Ultimately, I am reading frames from the conference bridge's master
> port and writing them to a WAV writer port. This code also worked fine
> when I was reading two WAVs and playing them into the conference bridge.
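
(Concretely, the pull loop is basically mix.c's -- a sketch, with conf and
wav_writer standing in for my actual objects. My understanding is that
there is no clock at all here; the loop itself is the timing provider,
because every get_frame() on the master port runs one mixing cycle:)

pjmedia_port *master = pjmedia_conf_get_master_port(conf);
pjmedia_frame frame;
char buf[320];   /* one 20 ms frame at 8 kHz, 16-bit mono */

for (;;) {
    frame.type = PJMEDIA_FRAME_TYPE_AUDIO;
    frame.buf  = buf;
    frame.size = sizeof(buf);

    /* ...push the next decoded frame of each leg into its slot here... */

    if (pjmedia_port_get_frame(master, &frame) != PJ_SUCCESS)
        break;
    if (pjmedia_port_put_frame(wav_writer, &frame) != PJ_SUCCESS)
        break;
}
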
>
>> If I'm correct, the splitter is actually the only port connected to
>> your bridge? If that is the case, where does the mixing occur? Who
>> triggers the data flow with the splitter port?
>
> The idea is that the mixing should occur in the conference bridge;
> that is the purpose for which I am using it. I have two separate RTP
> streams that I am reading out of a PCAP loop; for each stream
> direction, I have separate RTP processing. Previously, I was
> separating the call into two legs, writing them out as separate WAV
> files (say, A.wav and B.wav), then reading A.wav and B.wav back in and
> putting the frames into the bridge, and then writing them back out
> into a third WAV file from the conference bridge's master port.
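
(One detail I should state explicitly, since it may be where I am going
wrong: as I understand it the bridge only mixes slots that have been
connected, so each leg's slot has to be connected to slot 0, the master
port I am pulling from. slot_a and slot_b below stand for the slots
returned by pjmedia_conf_add_port():)

status = pjmedia_conf_connect_port(conf, slot_a, 0, 0 /* level adj */);
if (status == PJ_SUCCESS)
    status = pjmedia_conf_connect_port(conf, slot_b, 0, 0);
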
>
> My goal is simply to do that in memory and remove the need for
> intermediate WAV files.
>
>> You can contact me off-list.
>
> Thanks, I really appreciate your willingness to help! If we get to any
> confidential details, I will do that. In the meantime, if it's all the
> same to you, I prefer to keep the thread on the list so that others
> with the same needs may benefit from searching the list archives in
> the future. It's the least I can do to give back to such a helpful
> community!
>
> -- Alex
>


-- 
Alex Balashov - Principal
Evariste Systems LLC
260 Peachtree Street NW
Suite 2200
Atlanta, GA 30303
Tel: +1-678-954-0670
Fax: +1-404-961-1892
Web: http://www.evaristesys.com/


