On 05/17/2012 08:19 AM, Voznesensky Vladimir wrote:
> I've just tested libqb. For every message I:
> - instrumented the client to read the tsc before qb_ipcc_send and
>   after qb_ipcc_recv;
> - instrumented the client to check the CPU frequency;
> - commented out qb_log in s1_msg_process_fn of the server.
>
> So each message took 0.000140-0.000156 sec to pass and return.
> As I understand it, that is a very large number.
> Compare, for instance, with
> http://code.google.com/p/disruptor/

Angus may have a better answer here, but executing a send/receive
operation will always be slow (synchronous messaging). This is one of
the main things that was addressed in the corosync 2.0 IPC layer and
made possible by libqb. Corosync 2.0 uses libqb to send messages (via
a ring buffer) until that ring buffer is full; it doesn't wait for a
response. The server can then process more than one message at a time,
allowing very high throughput rates from a client to a server (if the
client/server combination can handle async-style communication).

Regards
-steve

> Thanks.
>
> On 17.05.2012 17:54, Voznesensky Vladimir wrote:
>> Hello.
>>
>> It seems that corosync gives very high latency in a one-node
>> configuration.
>> We have developed a small test passing messages between two threads.
>> Each message carried the original tsc (timestamp counter) value, so
>> we were able to compute the difference against the receiver's tsc.
>>
>> One 100-byte message took about 200 us to pass.
>> A 30000-message batch gave about 2 ms.
>>
>> An IPC ring-buffer implementation using eventfd showed us less than
>> 1 microsecond to pass a 128-byte message between two processor cores
>> in a 1000-message batch.
>>
>> So, what's the source of such relatively high latency?
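
For reference, here is a minimal sketch of the two client patterns
being contrasted above: the synchronous round trip that was measured,
and the pipelined sends that corosync 2.0 relies on. The service name
"s1", the message id, the buffer sizes, and the assumption that
qb_ipcc_send reports a full ring buffer as -EAGAIN are illustrative
guesses, not details taken from this thread.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <qb/qbipcc.h>

#define MY_MSG_ID 1     /* hypothetical id the server dispatches on */
#define N_MSGS    30000

int main(void)
{
    struct {
        struct qb_ipc_request_header hdr;
        char payload[100];
    } req;
    struct qb_ipc_response_header res;
    qb_ipcc_connection_t *conn;
    ssize_t rc;
    int i;

    conn = qb_ipcc_connect("s1", 8192); /* "s1" is an assumed service name */
    if (conn == NULL) {
        perror("qb_ipcc_connect");
        return 1;
    }

    memset(&req, 0, sizeof(req));
    req.hdr.id = MY_MSG_ID;
    req.hdr.size = sizeof(req);         /* total size, header included */

    /* Synchronous pattern (what was measured): every message pays a
     * full request/response round trip, so latency dominates. */
    for (i = 0; i < N_MSGS; i++) {
        rc = qb_ipcc_send(conn, &req, sizeof(req));
        if (rc < 0) break;
        /* assumes the server replies with a bare response header;
         * a timeout of -1 blocks until the reply arrives */
        rc = qb_ipcc_recv(conn, &res, sizeof(res), -1);
        if (rc < 0) break;
    }

    /* Pipelined pattern (what corosync 2.0 does): keep filling the
     * ring buffer and never wait for a reply; retry when the buffer
     * is full (assumed to be signalled as -EAGAIN). */
    for (i = 0; i < N_MSGS; i++) {
        do {
            rc = qb_ipcc_send(conn, &req, sizeof(req));
        } while (rc == -EAGAIN);
        if (rc < 0) break;
    }

    qb_ipcc_disconnect(conn);
    return 0;
}

The pipelined loop never blocks on a reply, so the server can drain
many queued requests per wakeup; that batching is where the throughput
gain comes from, at the cost of the client needing some other way to
correlate responses if it cares about them.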