Hi. I'm trying to use pjsip in an iPhone application, but I have failed to make it work. I'm really frustrated and have no idea how to deal with it; any help would be very appreciated. Here is what I tried:

1. Used the Conference Bridge with Audio Queue Services to get PCM samples. The application lagged and hung; sometimes I even had to reboot the iPhone. So I decided not to use software codecs, but to use the hardware ones through the Switch Board.

2. Used the Switch Board + Audio Queue Services to get encoded audio. Here I ran into another problem, with outgoing sound: Audio Queue Services slept for about 500 ms and then pushed packets during a short period of time (maybe 120 ms). That created huge latency, and the sound was lagging on the other side. In order to send packets every 30 ms (for iLBC) I queued them and tried to send them from a separate thread, but the usleep function in that thread did not behave as expected: it slept 30 ms four times in a row and then slept 120 ms (from the usleep man page: "The actual time slept may be longer, due to system latencies and possible limitations in the timer resolution of the hardware"). So I got lags on the other side again.

3. Then I read in Apple's documentation: "To provide lowest latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit. See 'Audio Unit Support in iPhone OS'." What I needed was a way to use hardware codecs with Audio Units. Audio Units can be combined into graphs, but I failed to connect kAudioUnitSubType_AUConverter and kAudioUnitSubType_RemoteIO, and I didn't find any example or documentation I could use. So I decided to use an AudioConverter to encode the raw PCM samples produced by the kAudioUnitSubType_RemoteIO unit. But I got new problems.

a) For outgoing sound: the new problem was that the audio unit produced samples every 23 ms by default.
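As an aside on the usleep drift from attempt 2: one common workaround is to sleep until absolute deadlines computed from the stream start, so a late wakeup shortens the next sleep instead of pushing every subsequent packet back. A minimal sketch under that assumption (paced_send and the callback are hypothetical names, not pjsip APIs):

```c
#include <stdint.h>
#include <sys/time.h>
#include <time.h>

/* Current wall-clock time in microseconds. */
static int64_t now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (int64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

/* Send n_packets at fixed intervals. Each deadline is start + i*period,
 * so scheduling jitter does not accumulate the way it does when calling
 * usleep(period) in a loop. */
static void paced_send(void (*send_packet)(int idx, void *arg), void *arg,
                       int n_packets, int64_t period_us)
{
    int64_t start = now_us();
    for (int i = 0; i < n_packets; i++) {
        int64_t wait = start + (int64_t)(i + 1) * period_us - now_us();
        if (wait > 0) {
            struct timespec ts = { wait / 1000000, (wait % 1000000) * 1000 };
            nanosleep(&ts, NULL);  /* a late wakeup only shrinks the next wait */
        }
        send_packet(i, arg);
    }
}
```

The same idea works with mach_wait_until() on the iPhone if finer resolution is needed; the point is only that the deadlines are absolute, not relative.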
If I used code like this:

    Float32 fProperty = 0.01;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(fProperty), &fProperty);

it produced samples every 11.5 ms or so (not 10 ms), so I never had exactly the number of samples I needed to encode (240 for iLBC or 160 for PCMU); I always had more. So I had to create a queue again to store the PCM samples and then feed them to the AudioConverter once I had the correct number of them (>= 240 for iLBC). But because of this behaviour (there were extra samples in the buffer), the intervals at which packets were sent were not equal to 30 ms, but more or less than that. The sound quality on the other side was bad.

b) For incoming sound: I actually failed to decode the traffic at all. Here is the code I wrote for test purposes to play iLBC:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // ipod_aud_stream - structure for pjlib data
    struct ipod_aud_stream *strm = (struct ipod_aud_stream *)inRefCon;
    // zUnit - object of a custom class that stores the data we need
    // to work with Audio Units
    ZAUnit *zUnit = strm->zUnit;

    if (strm->play_thread_initialized == 0 || !pj_thread_is_registered()) {
        pj_thread_register("ipod_play", strm->play_thread_desc, &strm->play_thread);
        strm->play_thread_initialized = 1;
    }

    unsigned pcmLength = inNumberFrames * 2;   // 2 bytes per sample

    // zUnit.mPlayData - buffer that stores samples obtained from pjlib.
    // Get enough frames from pjlib to decode:
    while ([zUnit.mPlayData length] < pcmLength) {
        pjmedia_frame *frame;
        AudioBufferList convBufferList;
        unsigned packetSize = zUnit.mDataFormat->mBytesPerPacket;  // 50 for iLBC
        char buffer[zUnit.pcmBytesPerCodedFrame];                  // 240 samples * 2 bytes for iLBC

        convBufferList.mNumberBuffers = 1;
        convBufferList.mBuffers[0].mNumberChannels = 1;
        convBufferList.mBuffers[0].mDataByteSize = zUnit.pcmBytesPerCodedFrame;
        // 240 samples * 2 bytes for iLBC
        convBufferList.mBuffers[0].mData = buffer;

        frame = &zUnit.xfrmPlay->base;
        zUnit.xfrmPlay->base.type = PJMEDIA_FRAME_TYPE_EXTENDED;
        zUnit.xfrmPlay->base.size = packetSize;
        zUnit.xfrmPlay->base.buf = NULL;
        zUnit.xfrmPlay->base.timestamp.u64 = strm->timestamp_out;
        zUnit.xfrmPlay->base.bit_info = 0;

        (*strm->play_cb)(strm->user_data, frame);

        UInt32 ioOutputDataPacketSize = zUnit.pcmBytesPerCodedFrame / 2;  // 2 bytes per sample
        AudioConverterFillComplexBuffer(zUnit.audioConvPlay, toPCMInputProc, strm,
                                        &ioOutputDataPacketSize, &convBufferList, NULL);

        [zUnit.mPlayData appendBytes:buffer length:zUnit.pcmBytesPerCodedFrame];
        strm->timestamp_out += strm->samples_per_frame;
    }

    ioData->mBuffers[0].mDataByteSize = pcmLength;
    memcpy(ioData->mBuffers[0].mData, [zUnit.mPlayData mutableBytes], pcmLength);

    // delete the head of the buffer
    NSRange range = {0, [zUnit.mPlayData length] - pcmLength};
    [zUnit.mPlayData replaceBytesInRange:range
                               withBytes:[zUnit.mPlayData mutableBytes] + pcmLength];
    [zUnit.mPlayData setLength:[zUnit.mPlayData length] - pcmLength];

    return noErr;
}

static OSStatus toPCMInputProc(AudioConverterRef inAudioConverter,
                               UInt32 *ioNumberDataPackets,
                               AudioBufferList *ioData,
                               AudioStreamPacketDescription **outDataPacketDescription,
                               void *inUserData)
{
    struct ipod_aud_stream *strm = (struct ipod_aud_stream *)inUserData;
    ZAUnit *zUnit = strm->zUnit;
    unsigned packetSize = zUnit.mDataFormat->mBytesPerPacket;

    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData = pjmedia_frame_ext_get_subframe(zUnit.xfrmPlay, 0)->data;
    ioData->mBuffers[0].mDataByteSize =
        (pjmedia_frame_ext_get_subframe(zUnit.xfrmPlay, 0)->bitlen + 7) >> 3;
    *ioNumberDataPackets = 1;

    return noErr;
}

Maybe you could advise me something. Thanks.
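On problem (a) above (the hardware delivering ~11.5 ms chunks rather than exact codec frames), the usual pattern is the queue described there, but with the encoder clocked by sample count rather than wall time: the RTP timestamp advances by 240 per iLBC frame regardless of when the render callback fired, so the uneven arrival times stop mattering. A minimal sketch of such a framer, with illustrative names and sizes (not pjsip APIs):

```c
#include <string.h>

/* Fixed-size framer: the render callback pushes however many samples the
 * hardware delivers; the encoder pops exactly FRAME_SAMPLES (240 for
 * 30 ms iLBC at 8 kHz) whenever enough have accumulated. */
#define FRAME_SAMPLES 240
#define FIFO_CAP      4096

typedef struct {
    short buf[FIFO_CAP];
    int   len;                          /* samples currently queued */
} sample_fifo;

static void fifo_push(sample_fifo *f, const short *in, int n)
{
    if (f->len + n > FIFO_CAP) return;  /* overflow: drop (sketch only) */
    memcpy(f->buf + f->len, in, n * sizeof(short));
    f->len += n;
}

/* Returns 1 and fills out[FRAME_SAMPLES] when a full frame is available. */
static int fifo_pop_frame(sample_fifo *f, short *out)
{
    if (f->len < FRAME_SAMPLES) return 0;
    memcpy(out, f->buf, FRAME_SAMPLES * sizeof(short));
    f->len -= FRAME_SAMPLES;
    memmove(f->buf, f->buf + FRAME_SAMPLES, f->len * sizeof(short));
    return 1;
}
```

The render callback would call fifo_push with whatever it got, then loop on fifo_pop_frame, encoding and timestamping each popped frame by 240-sample increments.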