Fwd: Re: cpg latency

Taking this back to the list...

Odd that more cpg clients reduce latency...

Assuming the instrumentation is correct (everything I read says don't
trust the TSC), the results are:

rbreader->rbwriter = 10-20 usec
corosync 1 client without totem = 70-120 usec
corosync 1 client with totem = 150-160 usec
corosync 4 clients without totem = 40-50 usec
corosync 4 clients with totem = 50-60 usec

This implies corosync adds roughly 40 usec of latency on top of the
ringbuffer implementation.
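
For reference, the round trip is measured roughly like the sketch below
(this assumes libqb's qb_ipcc_* client API; the service name, message id
and buffer sizes here are made up, this is not the exact tester Vladimir
used):

/* ipc latency probe sketch - build with: gcc probe.c -lqb -lrt */
#include <stdio.h>
#include <time.h>
#include <qb/qbipcc.h>

int main(void)
{
    struct qb_ipc_request_header req;
    char resp[256];
    struct timespec t0, t1;
    qb_ipcc_connection_t *conn;

    /* "latency-test" is a hypothetical service name */
    conn = qb_ipcc_connect("latency-test", 8192);
    if (conn == NULL) {
        perror("qb_ipcc_connect");
        return 1;
    }

    req.id = 1;                 /* whatever id the test server expects */
    req.size = sizeof(req);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    qb_ipcc_send(conn, &req, req.size);
    qb_ipcc_recv(conn, resp, sizeof(resp), -1);   /* block for the reply */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("round trip: %ld usec\n",
           (long)((t1.tv_sec - t0.tv_sec) * 1000000L +
                  (t1.tv_nsec - t0.tv_nsec) / 1000));

    qb_ipcc_disconnect(conn);
    return 0;
}

CLOCK_MONOTONIC sidesteps the TSC-trust question at the cost of a
clock_gettime call on each side of the measurement (clock_gettime already
shows up in the rb-reader profile below anyway).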

Running rbwriter 1000 times in a loop with an rbreader running shows the
following oprofile results:

rb-reader:

2282834  62.0242  libc-2.14.90.so          __memcpy_ssse3_back
495055   13.4506  libqb.so.0.13.0          qb_rb_space_used
378187   10.2753  libpthread-2.14.90.so    sem_timedwait
207293    5.6321  libqb.so.0.13.0          qb_rb_chunk_check
82163     2.2324  libqb.so.0.13.0          qb_rb_chunk_read
78788     2.1407  libqb.so.0.13.0          qb_rb_chunk_reclaim
42200     1.1466  libqb.so.0.13.0          my_posix_sem_timedwait
35622     0.9678  [vdso] (tgid:20388 range:0x7fff2e5ff000-0x7fff2e600000) [vdso] (tgid:20388 range:0x7fff2e5ff000-0x7fff2e600000)
32311     0.8779  librt-2.14.90.so         clock_gettime

rb-writer:
852329   45.5852  libc-2.14.90.so          __memcpy_ssse3_back
413116   22.0947  libpthread-2.14.90.so    sem_post
219898   11.7608  libqb.so.0.13.0          qb_rb_space_free
161334    8.6286  libqb.so.0.13.0          qb_rb_chunk_commit
113573    6.0742  libqb.so.0.13.0          qb_rb_chunk_alloc
69525     3.7184  libqb.so.0.13.0          qb_rb_chunk_write
13529     0.7236  lt-rbwriter              main

rb-reader is spending ~4x as much time in memcpy as rb-writer - possibly a
cache alignment issue.
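
For anyone wanting to reproduce the profile, the writer side of the test is
essentially the loop below (a sketch based on the qb_rb_* ring buffer calls
visible in the profile; ring size, chunk size and flags are guesses rather
than the exact lt-rbwriter source, and it assumes the reader has already
created the "tester" ring):

/* rbwriter-style loop sketch - build with: gcc writer.c -lqb */
#include <stdio.h>
#include <string.h>
#include <qb/qbrb.h>

#define ONE_MEG (1024 * 1024)

int main(void)
{
    char data[1000];
    qb_ringbuffer_t *rb;
    int i;

    /* attach to the shared-memory ring the reader created */
    rb = qb_rb_open("tester", ONE_MEG, QB_RB_FLAG_SHARED_PROCESS, 0);
    if (rb == NULL) {
        perror("qb_rb_open");
        return 1;
    }

    memset(data, 'x', sizeof(data));
    for (i = 0; i < 1000; i++) {
        /* qb_rb_chunk_write() copies the payload into the ring (alloc,
         * memcpy, commit) - that copy is where the writer's
         * __memcpy_ssse3_back time goes */
        if (qb_rb_chunk_write(rb, data, sizeof(data)) < 0) {
            perror("qb_rb_chunk_write");
            break;
        }
    }

    qb_rb_close(rb);
    return 0;
}

The reader does the mirror image via the chunk_check/read/reclaim sequence
in the profile, copying the data back out of the ring into its own buffer.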

Regards
-steve

-------- Original Message --------
Subject: Re:  cpg latency
Date: Fri, 18 May 2012 12:19:25 +0400
From: Voznesensky Vladimir <voznesensky@xxxxxx>
To: Steven Dake <sdake@xxxxxxxxxx>

We have tested it.

1 process sending 1 message per second gave:
70-120 microseconds latency without totem;
150-160 microseconds latency with totem.

4 processes each sending 1 message per second gave:
40-50 microseconds latency without totem;
50-60 microseconds latency with totem.

Thank you.
- VV



On 18.05.2012 11:03, Steven Dake wrote:
> On 05/17/2012 08:19 AM, Voznesensky Vladimir wrote:
>> I've just tested libqb. For every message I've:
>> - Instrumented the client with TSC reads before qb_ipcc_send and after
>> qb_ipcc_recv;
>> - Instrumented the client with CPU frequency measurement (to convert TSC
>> ticks to time);
>> - Commented out qb_log in s1_msg_process_fn of the server.
>>
>> So, it took 0.000140-0.000156 sec for every message to pass and
>> return.
>> As I understand it, that's a very large number.
>> Compare, for instance, with
>> http://code.google.com/p/disruptor/
>>
>> Thanks.
>>
> I have sent a test patch which short-circuits totem for cpg messaging on
> a single node.
>
> Could you see how the latency looks (or post a sample latency tester)?
>
> Regards
> -steve
>

_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss

