On Wed, 2016-12-28 at 19:20 +0200, Ahmed S. Darwish wrote:
> Welcome back ;-)
> 
> On Wed, Dec 28, 2016 at 04:09:55PM +0200, Tanu Kaskinen wrote:
> > 
> > Previously pacat wrote at most pa_stream_writable_size() bytes at a
> > time, now with this patch it can write more than that if there's more
> > input data available. Writing in bigger chunks is potentially a bit
> > more efficient.
> > ---
> >  src/utils/pacat.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/src/utils/pacat.c b/src/utils/pacat.c
> > index 4e1bbfc..5a000ae 100644
> > --- a/src/utils/pacat.c
> > +++ b/src/utils/pacat.c
> > @@ -525,7 +525,7 @@ fail:
> >  /* New data on STDIN **/
> >  static void stdin_callback(pa_mainloop_api*a, pa_io_event *e, int fd, pa_io_event_flags_t f, void *userdata) {
> >      uint8_t *buf = NULL;
> > -    size_t writable, towrite, r;
> > +    size_t writable = (size_t) -1, towrite, r;
> 
> pa_stream_writable_size() returns values which match the
> required latency. Meanwhile pa_stream_begin_write(.., -1)
> always returns 64K, regardless of latency sensitivities.
> 
> Is that advisable?

When reading from a file, the latency can't be observed, so it doesn't
matter. When reading from stdin, the latency can in some situations be
observed (e.g. receiving data from parec over a pipe), but I don't
think reading big chunks actually has any effect in this case either.
If there's a lot of data in the pipe, it will be there regardless of
how much or little pacat tries to read, and if there's only a little
data in the pipe, pacat can't read much anyway. That said, maybe it's
better to keep using pa_stream_writable_size() so that it won't be
necessary to think about these issues.

> Also, can't read() block with such large values? I don't see
> any O_NONBLOCK flags in the pacat code.

Indeed. I assumed that pacat did nonblocking IO, but that doesn't seem
to be the case.

I suggest that we just forget about this patch.

-- 
Tanu

https://www.patreon.com/tanuk