Re: About Totem's performance measurement

Hi Steven,
Another question: when I ran cpgbench in a 16-node VM cluster, the throughput was only about 3 MByte/sec, but I did not see any retransmit notification in the log, and runtime.totem.pg.mrp.srp.mcast_retx was zero. So if the virtual bridge loses datagrams so easily, why did no retransmission happen?
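For reference, the counter can be dumped at runtime, e.g. (a sketch; corosync-objctl is the 1.4.x tool, corosync-cmapctl its 2.x equivalent):

# corosync 1.4.x: dump the object database and filter the Totem stats
corosync-objctl | grep mcast_retx
# corosync 2.x: same idea via the cmap database
corosync-cmapctl | grep mcast_retx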

On Jan 9, 2013 11:10 PM, "Steven Dake" <sdake@xxxxxxxxxx> wrote:
On 01/08/2013 11:26 PM, jason wrote:

Update: by using iperf to check multicast performance, iperf also gives a bad result (100 Mbit/sec) on my virtual machine, but CPU usage is very low (only 1%). The command I used is:
iperf -c 239.255.1.10 -u -l 8K -b 1G -w -i 5 -t 300 -T 4
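(On the receive side, the matching listener would join the group by binding to it; a sketch, assuming iperf 2.x:)

# receiver: join the multicast group and report every 5 seconds
iperf -s -u -B 239.255.1.10 -i 5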

Jason,

Unfortunately the virtual network drivers perform very poorly with multicast.  Multicast is not an operational mode the authors of those operating system drivers have optimized.  You might try udpu, which uses unicast (which has been optimized), although I can't guarantee performance will be great either.
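A minimal sketch of the relevant corosync.conf pieces (2.x-style syntax; the node addresses are placeholders):

totem {
        version: 2
        transport: udpu
}

nodelist {
        node {
                ring0_addr: 10.0.0.1
        }
        node {
                ring0_addr: 10.0.0.2
        }
}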

Most Corosync development is done on bare metal, and Corosync is optimized for that environment.

Totem uses a lot of CPU to operate, especially with encryption.  The usage may be higher depending on how many CPU cores you have and which version you are using.  1.4.x uses threads, which causes some context-switching chaos, so more cores would help there.  I'd recommend giving 2.x a try if you haven't.
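To see how much of that CPU goes to encryption, it can be disabled for a test run; a sketch of the totem-section knobs (secauth on 1.4.x, crypto_cipher/crypto_hash on 2.x):

totem {
        # 1.4.x: turn off authentication/encryption entirely
        secauth: off
        # 2.x equivalents:
        crypto_cipher: none
        crypto_hash: none
}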

Regards
-steve

On Jan 9, 2013 1:57 PM, "jason" <huzhijiang@xxxxxxxxx> wrote:

Hi Jan,

Between our two virtual machines, iperf -i 1 -w 1M can reach line rate, which is 1.00 Gbit/sec. But I do not know why cpgbench only got 10 MB/s. And when cpgbench was running, top showed it used up almost one CPU: 30% in user mode, 70% in kernel mode.
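(The user/kernel split can be confirmed per process; a sketch, assuming the sysstat tools are installed:)

# per-process user vs. system CPU for corosync, sampled every second
pidstat -u -p $(pidof corosync) 1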

On Dec 10, 2012 3:28 PM, "Jan Friesse" <jfriesse@xxxxxxxxxx> wrote:
Jason,

jason wrote:
> Hi All,
> Currently we created a 16-node (virtualized) cluster environment to test

cool

> the performance of the Totem protocol. We want to see the relationship
> between the cluster size (the number of nodes) and the bandwidth/latency.

After you finish testing, can you please share the results?

> And I found there are already some test utilities that I can use, such as
> cpgbench, evtbench and pload. I want to know which tool is the best one
> to meet my requirement, and whether there is any other way to test

The question is what type of performance you would like to test. Is it
bandwidth, or latency? Network or local? cpgbench seems to be
generally best for testing both bandwidth (with big messages) and
latency (with small messages). I would not recommend pload, because it
makes corosync behave strangely (it is almost removed in 2.x).
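A sketch of such a run, assuming cpgbench has been built from the test/ directory of a corosync source tree:

# run the benchmark against the local corosync instance
./cpgbench
# in another shell, watch for retransmits while it runs (2.x)
watch -n1 'corosync-cmapctl | grep mcast_retx'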

Honza

> such performance?



_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss
