On Sat, 23.05.09 03:32, Cristian Morales Vega (cmorve69 at yahoo.es) wrote:

> Hi,
>
> I just wanted to know if there is a way to use the
> PA_STREAM_ADJUST_LATENCY flag with the simple API.
> I see that pa_simple_new() has a parameter for pa_buffer_attr, but none
> for pa_stream_flags_t. Why would I want to specify the server buffer
> size? I only want to specify the size because of the latency; isn't
> that the common case? Perhaps PA_STREAM_ADJUST_LATENCY is the default
> when using the simple API?

Hmm, it currently is not the default. However, I guess it makes sense to
make it the default, even if that might be considered API breakage. But
that kind of breakage doesn't really count, since the attr argument is
of little use without the flag set.

I have now made this the default in PA git.

> The app is a game console emulator. It doesn't need anything special,
> only a low latency so video and audio are in sync.

Hmm? For audio/video sync you don't need low latency. All you need is
accurate latency measurements, which PA provides.

> Right now the code is just an init() function that calls
> pa_simple_new() and a sample() function that does this:
>
> void sample(uint16_t left, uint16_t right) {
>     if(!device.handle) return;
>
>     buffer.data[buffer.offset++] = left + (right << 16);
>     if(buffer.offset >= 64) {
>         int error;
>         pa_simple_write(device.handle, (const void*)buffer.data,
>                         buffer.offset * sizeof(uint32_t), &error);
>         buffer.offset = 0;
>     }
> }
>
> I would really like to keep the simple API; if I could specify the
> full latency I would not need to touch more than one or two lines.
> But... is it possible?

With the current git version of PA, after my little change mentioned
above: yes. Otherwise: no.

Sorry,

Lennart

-- 
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/              GnuPG 0x1A015CC4