On 15/01/13 11:34 -0700, Steven Dake wrote:
On 01/14/2013 12:38 AM, jason wrote:
Hi Steven,
I have done some investigation into the heavy CPU usage of
cpgbench. I think the CPU is used up mainly because when
cpg_mcast_joined() returns CS_ERR_TRY_AGAIN, cpgbench simply tries
again immediately. Many useless retries happen if the network is
easily congested. So is there any asynchronous method we can use
in that scenario to reduce CPU usage?
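To illustrate (simplified, and not the exact cpgbench code - the
function and variable names below are made up), the retry is
essentially the loop below minus the usleep(); adding a back-off is
the obvious workaround, but it is still polling rather than being
woken up:

    /* Illustrative sketch only - names are invented for the example. */
    #include <unistd.h>
    #include <sys/uio.h>
    #include <corosync/corotypes.h>
    #include <corosync/cpg.h>

    static cs_error_t mcast_with_backoff(cpg_handle_t handle,
                                         const struct iovec *iov,
                                         unsigned int iov_len)
    {
            cs_error_t err;
            useconds_t delay = 1000;        /* start at 1ms */

            do {
                    err = cpg_mcast_joined(handle, CPG_TYPE_AGREED,
                                           iov, iov_len);
                    if (err != CS_ERR_TRY_AGAIN)
                            break;
                    usleep(delay);          /* yield instead of hard-spinning */
                    if (delay < 100000)     /* cap the back-off at 100ms */
                            delay *= 2;
            } while (1);

            return err;
    }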
No such method currently exists. We have identified this spinning as
a performance issue in clients, but don't have a clear solution to
address it.
One thing that might be worth investigating is a callback which
identifies when the flow control state changes. Currently we just
mark a bit "on" or "off". I briefly discussed this some time ago with
Angus (the libqb maintainer, where the IPC code lives), but we didn't
come to a conclusion on whether this would a) fix the problem or
b) be possible.
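For what it's worth, the bit we mark today is already visible to
clients via cpg_flow_control_state_get(), so a sender can at least
skip sends it knows would be refused - though the state can change
between the check and the send, so it doesn't remove the spinning.
Rough sketch:

    #include <corosync/corotypes.h>
    #include <corosync/cpg.h>

    static int send_allowed(cpg_handle_t handle)
    {
            cpg_flow_control_state_t state = CPG_FLOW_CONTROL_DISABLED;

            /* If the query itself fails, just attempt the send anyway. */
            if (cpg_flow_control_state_get(handle, &state) != CS_OK)
                    return 1;

            return state == CPG_FLOW_CONTROL_DISABLED;
    }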
If you look at cpg.c
https://github.com/corosync/corosync/blob/master/lib/cpg.c#L366
we are passing 0 into qb_ipcc_event_recv(). One option you might
want to play with is passing in a non-zero timeout - say 100ms.
This will probably reduce the spinning, as the timeout is handed
down into poll().
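Roughly (untested, and the real call site in lib/cpg.c does not look
exactly like this):

    /* Approximation of the dispatch-side receive in lib/cpg.c - the
     * surrounding names may differ slightly from the real file. */
    error = qb_to_cs_error(qb_ipcc_event_recv(
            cpg_inst->c,
            dispatch_buf,
            IPC_DISPATCH_SIZE,
            100));  /* was 0: now block in poll() for up to 100ms */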
-Angus
Regards
-steve
On Jan 9, 2013 11:10 PM, "Steven Dake" <sdake@xxxxxxxxxx> wrote:
On 01/08/2013 11:26 PM, jason wrote:
Update: using iperf to check multicast performance, iperf
also gives a bad result (100 Mbit/sec) on my virtual machine, but
CPU usage is very low (only 1%). The command I used is:
iperf -c 239.255.1.10 -u -l 8K -b 1G -w -i 5 -t 300 -T 4
Jason,
Unfortunately the virtual network drivers perform very poorly with
multicast. Multicast is not an operational mode that the authors of
those operating system drivers have optimized. You might try udpu,
which uses unicast (and has been optimized), although I can't
guarantee performance will be great either.
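For 2.x, switching the transport is roughly a corosync.conf along
these lines (addresses are placeholders; on 1.4.x udpu is configured
with member {} entries inside the interface section instead of a
nodelist):

    totem {
            version: 2
            transport: udpu
            interface {
                    ringnumber: 0
                    bindnetaddr: 10.0.0.0
            }
    }

    nodelist {
            node {
                    ring0_addr: 10.0.0.1
                    nodeid: 1
            }
            node {
                    ring0_addr: 10.0.0.2
                    nodeid: 2
            }
    }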
Most of the development of Corosync is on bare metal and optimized
for that environment.
Totem uses a lot of CPU to operate, especially with encryption.
The usage may be higher depending on how many CPU cores you have
and which version you are using. 1.4.x uses threads, which cause
some context-switching chaos - so more cores help there. I'd
recommend giving 2.x a try if you haven't.
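If you don't need encryption for the benchmark, you can take that
cost out of the picture entirely; in 2.x syntax (1.4.x uses
secauth: off instead):

    totem {
            version: 2
            crypto_cipher: none
            crypto_hash: none
    }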
Regards
-steve
On Jan 9, 2013 1:57 PM, "jason" <huzhijiang@xxxxxxxxx> wrote:
Hi Jan,
Between our two virtual machines, iperf -i 1 -w 1M can reach
the line rate of 1.00 Gbit/sec, but I do not know why cpgbench
only got 10 MB/s. And while cpgbench was running, top showed it
used up almost one CPU: 30% in user mode, 70% in kernel mode.
On Dec 10, 2012 3:28 PM, "Jan Friesse" <jfriesse@xxxxxxxxxx> wrote:
Jason,
jason wrote:
> Hi All,
> Currently we created a 16-node (virtualized) cluster
> environment to test
cool
> the performance of the Totem protocol. We want to see
> the relationship
> between the cluster size (the number of nodes) and the
> bandwidth/latency.
After you finish testing, can you please share the results?
> And I found there are already some test utilities that
> I can use, such as cpgbench, evtbench and pload, and I want to
> know which tool is the best that I should use to meet my
> requirement, and is there any other way to test
The question is what type of performance you would like to
test. Is it bandwidth, or latency? Network or local? cpgbench
seems to be generally the best for testing both bandwidth (with
big messages) and latency (with small messages). I would not
recommend pload, because it makes corosync behave weirdly (it's
almost removed in 2.x).
Honza
> such performance?
_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss