Re: RADOS Bench

Thanks Somnath. Do you mean that I should run Rados Bench in parallel on 2 different clients?

Is there a way to run RADOS Bench from 2 clients so that they run in parallel, other than manually launching them at the same time on each client?

 

From: Somnath Roy [mailto:Somnath.Roy@xxxxxxxxxxx]
Sent: Monday, June 15, 2015 1:01 PM
To: Garg, Pankaj; ceph-users@xxxxxxxxxxxxxx
Subject: RE: RADOS Bench

 

Pankaj,

It is the cumulative BW of the Ceph cluster, but you will always be limited by your single client's BW.

To verify whether you are limited by the single client's 10Gb network, add another client and see whether the throughput scales.
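If it helps, here is a minimal sketch of how both runs could be kicked off at roughly the same time over SSH, so the launch does not have to be done by hand on each box. The hostnames (client1, client2), pool name (testpool), and runtime are placeholders:

#!/usr/bin/env python3
# Minimal sketch: start "rados bench" on two client hosts at roughly the same
# time over SSH and wait for both runs to finish.
# Assumptions: hosts "client1"/"client2" reachable via passwordless SSH,
# a pool named "testpool", and a 60-second write test.
import subprocess

clients = ["client1", "client2"]
bench_cmd = "rados bench -p testpool 60 write --no-cleanup"

# Launch both benchmarks without waiting, so they run in parallel.
procs = [subprocess.Popen(["ssh", host, bench_cmd]) for host in clients]

# Wait for both to complete and report their exit status.
for host, proc in zip(clients, procs):
    proc.wait()
    print(f"{host}: rados bench exited with {proc.returncode}")

Each rados bench instance reports only its own client's throughput, so with two clients you would add the two reported bandwidth figures to estimate the cluster aggregate.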

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Garg, Pankaj
Sent: Monday, June 15, 2015 12:55 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: RADOS Bench

 

Hi,

I have a few machines in my Ceph cluster, and another machine that I use to run RADOS Bench to measure performance.

I am now seeing numbers around 1100 MB/s, which is quite close to the saturation point of the 10Gbps link.
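For reference, 10 Gbps works out to about 1250 MB/s of raw line rate (10,000 Mbit / 8), and after Ethernet and TCP/IP overhead the practically achievable figure is typically a bit lower, so ~1100 MB/s looks very close to line rate.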

 

I’d like to understand what the total bandwidth number represents after I run the RADOS Bench test. Is it the cumulative bandwidth of the Ceph cluster, or does it represent the bandwidth to the client machine?

 

I’d like to understand if I’m now being limited by my network.

 

Thanks

Pankaj

 




