Re: [EXTERNAL] Ceph performance is too good (impossible..)...

My understanding is that when using direct=1 on a raw block device, fio (i.e., you) has to handle all the sector alignment itself, or the request will be buffered in order to perform the alignment.
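
If you want to see what alignment the device actually requires, blockdev can report it (device name taken from your setup):

blockdev --getss /dev/rbd0    # logical sector size; O_DIRECT I/O must align to this
blockdev --getpbsz /dev/rbd0  # physical sector size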

 

Try adding the --blockalign=512 option to your jobs, or better yet, just use the native fio RBD engine.
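
For the raw-device jobs that is just one extra line; a sketch, assuming /dev/rbd0 reports 512-byte logical sectors:

[A]
filename=/dev/rbd0
direct=1
; align I/O offsets to the 512-byte logical sector size
blockalign=512
rw=read
bs=4MB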

 

For the RBD engine, something like this (untested):

 

[A]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
direct=1
group_reporting=1
unified_rw_reporting=1
time_based=1
rw=read
bs=4MB
numjobs=16
ramp_time=10
runtime=20
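
Note that the rbd engine needs a fio build with librbd support (fio --enghelp should list it if it is available), and this bypasses the kernel client entirely. Run it like any other job file, e.g. (rbd_test.job being whatever you name the file above):

sudo fio rbd_test.job --output rbd.txt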

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of V Plus <v.plussharp@xxxxxxxxx>
Date: Sunday, December 11, 2016 at 7:44 PM
To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: [EXTERNAL] [ceph-users] Ceph performance is too good (impossible..)...

 

Hi Guys,

We have a Ceph cluster with 6 machines (6 OSDs per host).

1. I created 2 images in Ceph and mapped them to another host A (outside the Ceph cluster). On host A, I got /dev/rbd0 and /dev/rbd1.

2. I started two fio jobs to perform READ tests on rbd0 and rbd1 (the fio job files can be found below):

"sudo fio fioA.job -output a.txt & sudo  fio fioB.job -output b.txt  & wait"

3. After the test, in a.txt we got bw=1162.7MB/s, and in b.txt we got bw=3579.6MB/s.

The results do NOT make sense because there is only one NIC on host A, and its limit is 10 Gbps (1.25 GB/s).
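
Taken together, the two jobs report 1162.7 + 3579.6 ≈ 4742 MB/s, nearly four times what the NIC could possibly deliver.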

 

I suspect it is because of the cache setting.

But I am sure that in the file /etc/ceph/ceph.conf on host A, I already added:

[client]

rbd cache = false

 

Could anyone give me a hint about what is missing? Why?

Thank you very much.

 

fioA.job:

[A]
direct=1
group_reporting=1
unified_rw_reporting=1
size=100%
time_based=1
filename=/dev/rbd0
rw=read
bs=4MB
numjobs=16
ramp_time=10
runtime=20

 

fioB.job:

[B]
direct=1
group_reporting=1
unified_rw_reporting=1
size=100%
time_based=1
filename=/dev/rbd1
rw=read
bs=4MB
numjobs=16
ramp_time=10
runtime=20

 

Thanks...

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
