Re: cpg latency

On 17/05/12 19:19 +0400, Voznesensky Vladimir wrote:
I've just tested libqb. For every message I have:
- instrumented the client with tsc reads before qb_ipcc_send and after qb_ipcc_recv;
- measured the CPU frequency on the client;
- commented out qb_log in s1_msg_process_fn of the server.

So it took 0.000140-0.000156 sec for every message to go out and come back.
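
For context, a rough sketch of that client-side instrumentation with libqb's qb_ipcc API (the service name, message id and cpu_hz are placeholders for whatever the actual server uses; the tsc is read with __rdtsc()):

/* Round-trip timing around qb_ipcc_send()/qb_ipcc_recv().
 * Assumptions: a libqb IPC server named "ipcserver" is running and
 * echoes requests; cpu_hz was measured separately. Link with -lqb. */
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>
#include <qb/qbipcc.h>
#include <qb/qbipc_common.h>

int main(void)
{
    char req[256], rep[256];
    struct qb_ipc_request_header *hdr = (struct qb_ipc_request_header *)req;
    const double cpu_hz = 2.4e9;               /* placeholder, measure on your box */
    qb_ipcc_connection_t *c = qb_ipcc_connect("ipcserver", 8192);

    if (c == NULL) {
        perror("qb_ipcc_connect");
        return 1;
    }
    memset(req, 0, sizeof(req));
    hdr->id = 1;                               /* whatever id the server handles */
    hdr->size = sizeof(req);

    unsigned long long t0 = __rdtsc();         /* tsc before send */
    qb_ipcc_send(c, req, sizeof(req));
    qb_ipcc_recv(c, rep, sizeof(rep), -1);     /* block until the reply arrives */
    unsigned long long t1 = __rdtsc();         /* tsc after recv */

    printf("round trip: %f sec\n", (double)(t1 - t0) / cpu_hz);
    qb_ipcc_disconnect(c);
    return 0;
}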

See here:
http://code.google.com/p/disruptor/wiki/PerformanceResults

Those results are one-directional (IPC only), so to compare you need to take a
timestamp, put it in a message and send it. Then, on the server, compare it
with the time there.
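
A minimal sketch of that one-way measurement (the struct and helpers are illustrative, not part of libqb; it assumes both processes are on the same host, so CLOCK_MONOTONIC is a common clock):

/* The client stamps each request just before qb_ipcc_send(); the server
 * computes the delta when the message arrives in its handler. */
#include <time.h>
#include <stdint.h>
#include <qb/qbipc_common.h>

struct timed_msg {
    struct qb_ipc_request_header hdr;   /* required libqb IPC header */
    struct timespec sent_at;            /* client-side CLOCK_MONOTONIC stamp */
};

/* client side, immediately before qb_ipcc_send() */
void stamp(struct timed_msg *m)
{
    clock_gettime(CLOCK_MONOTONIC, &m->sent_at);
}

/* server side, inside the message handler (s1_msg_process_fn in the example) */
double one_way_us(const struct timed_msg *m)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - m->sent_at.tv_sec) * 1e6 +
           (now.tv_nsec - m->sent_at.tv_nsec) / 1e3;
}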

If you run the libqb ringbuffer benchmarks they look comparable, although
I am not sure of the message size used in their tests:

In one shell: ./tests/rbreader
In another: ./tests/rbwriter
[info] write size 14 OPs/sec 7509762.500 MB/sec   100.266
[info] write size 42 OPs/sec 27063600.000 MB/sec  1084.014
[info] write size 84 OPs/sec 25361400.000 MB/sec  2031.667
[info] write size 140 OPs/sec 26525198.000 MB/sec  3541.496
[info] write size 210 OPs/sec 21258504.000 MB/sec  4257.475
[info] write size 294 OPs/sec 13559322.000 MB/sec  3801.766
[info] write size 392 OPs/sec 7675186.000 MB/sec  2869.294
[info] write size 504 OPs/sec 7496252.000 MB/sec  3603.087
[info] write size 630 OPs/sec 6605020.000 MB/sec  3968.394
[info] write size 770 OPs/sec 7752539.000 MB/sec  5692.916
[info] write size 924 OPs/sec 11303267.000 MB/sec  9960.383
[info] write size 1092 OPs/sec 8590327.000 MB/sec  8946.073
etc...
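
(For what it's worth, the MB/sec column is just OPs/sec * write size / 2^20;
e.g. 7509762.5 * 14 / 1048576 ~= 100.27.)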

To get more consistent results I probably need to do more
iterations for each size.

I'll set up an IPC test and let you know.

-Angus

As I understand it, that is a very large number.
Compare, for instance, with
http://code.google.com/p/disruptor/

Thanks.

On 17.05.2012 17:54, Voznesensky Vladimir wrote:
Hello.

It seems that corosync shows very high latency in a one-node configuration. We have developed a small test that passes messages between two threads. Each message carried the sender's original tsc (timestamp counter) value, so we were able to compute the difference against the receiver's tsc.

One 100-byte message pass took about 200 us.
A 30000-message batch gave about 2 ms.

An IPC ring-buffer implementation using eventfd showed us less than 1 microsecond between two processor cores to pass a 128-byte message in a 1000-message batch.
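
For reference, a minimal sketch of the eventfd wakeup pattern such a ring buffer typically relies on; the shared-memory ring itself is omitted and only the notification path is shown:

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* One eventfd shared by producer and consumer acts as a doorbell. */
int make_doorbell(void)
{
    return eventfd(0, 0);              /* blocking eventfd */
}

/* Producer: after publishing a slot in the ring, wake the consumer. */
void ring_doorbell(int efd)
{
    uint64_t one = 1;
    write(efd, &one, sizeof(one));
}

/* Consumer: block until the producer signals, then drain the ring. */
uint64_t wait_doorbell(int efd)
{
    uint64_t n = 0;
    read(efd, &n, sizeof(n));          /* returns the accumulated count */
    return n;
}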

So, what's the source of such relatively high latency?

_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss

