On 05/17/2012 08:19 AM, Voznesensky Vladimir wrote:
> I've just tested libqb. For every message I've:
> - instrumented a client with tsc reading before qb_ipcc_send and
>   after qb_ipcc_recv;
> - instrumented a client with CPU frequency testing;
> - commented out qb_log in s1_msg_process_fn of the server.
>
> So, it took 0.000140 sec - 0.000156 sec for every message to pass
> and return.
> As I understand it, that is a very large number.
> Compare it, for instance, with
> http://code.google.com/p/disruptor/
>
> Thanks.

I have sent a test patch which short-circuits totem for cpg messaging on
a single node. Could you check how the latency looks with it (or post a
sample latency tester)?

Regards
-steve

> On 17.05.2012 17:54, Voznesensky Vladimir wrote:
>> Hello.
>>
>> It seems that corosync gives a very high latency in a one-node
>> configuration.
>> We have developed a small test that passes messages between two
>> threads. Each message carried the sender's original tsc (timestamp
>> counter) value, so we were able to compute the difference against
>> the receiver's tsc.
>>
>> One 100-byte message took about 200 us to pass.
>> A batch of 30000 messages gave about 2 ms.
>>
>> An IPC ring-buffer implementation using eventfd showed us less than
>> 1 microsecond to pass a 128-byte message between two processor cores
>> in a 1000-message batch.
>>
>> So, what is the source of such relatively high latency?

_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss