Re: Ceph performance is too good (impossible..)...


 



Since the images have never been written to, the reads hit unallocated (thin-provisioned) extents and are satisfied with zeroes without actually moving data, which is why the numbers look impossible. Fill up the images with big writes (say, 1M) first, before reading, and you should see sane throughput.
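For example, a job file along these lines (just a sketch, reusing your device paths; adjust as needed) writes through both devices once with 1M blocks:

fill.job:

[fill-rbd0]
filename=/dev/rbd0
rw=write
bs=1M
direct=1
ioengine=libaio
iodepth=16

[fill-rbd1]
filename=/dev/rbd1
rw=write
bs=1M
direct=1
ioengine=libaio
iodepth=16

Run it once with "sudo fio fill.job"; with no size= given, fio writes the full length of each block device. After that, rerun your read test.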

 

Thanks & Regards

Somnath

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of V Plus
Sent: Sunday, December 11, 2016 5:44 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] Ceph performance is too good (impossible..)...

 

Hi Guys,

We have a Ceph cluster with 6 machines (6 OSDs per host).

1. I created 2 images in Ceph and mapped them to another host A (outside the Ceph cluster). On host A, I got /dev/rbd0 and /dev/rbd1 (rough example commands below).

2. I started two fio jobs to run a READ test on rbd0 and rbd1 in parallel (the fio job files are included below):

"sudo fio fioA.job -output a.txt & sudo fio fioB.job -output b.txt & wait"

3. After the test, a.txt reports bw=1162.7MB/s and b.txt reports bw=3579.6MB/s.

These results do NOT make sense: combined, that is roughly 4.7 GB/s, but host A has only one NIC and its limit is 10 Gbps (about 1.25 GB/s).
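For reference, the images were created and mapped roughly like this (pool name, image names, and sizes here are placeholders, not the exact values I used):

rbd create testpool/img0 --size 100G
rbd create testpool/img1 --size 100G
sudo rbd map testpool/img0    # shows up as /dev/rbd0 on host A
sudo rbd map testpool/img1    # shows up as /dev/rbd1 on host A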

 

I suspect it is caused by a cache setting, but I am sure that in /etc/ceph/ceph.conf on host A I already added:

[client]
rbd cache = false
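One thing I could still try is dropping the kernel page cache on host A before each run, in case the krbd block device is being served from the page cache rather than the librbd cache (I am not sure the rbd cache option even applies to the kernel client):

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches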

 

Could anyone give me a hint about what is missing, and why this happens?

Thank you very much.

 

fioA.job:

[A]
direct=1
group_reporting=1
unified_rw_reporting=1
size=100%
time_based=1
filename=/dev/rbd0
rw=read
bs=4MB
numjobs=16
ramp_time=10
runtime=20

 

fioB.job:

[B]
direct=1
group_reporting=1
unified_rw_reporting=1
size=100%
time_based=1
filename=/dev/rbd1
rw=read
bs=4MB
numjobs=16
ramp_time=10
runtime=20

 

Thanks...

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
