Re: Simple CephFS benchmark

Hi

Our ceph is running the following hardware:
3 nodes with 36 OSDs and 18 SSDs (one SSD for every two OSDs); each node has
64 GB of memory and 2x 6-core CPUs
4 monitors running on other servers
40 Gbit InfiniBand with IPoIB

Here are my CephFS fio test results, using the following job file and changing
the rw parameter between runs:

[test]
rw=randread # read
size=128m
directory=/var/lib/nova/instances/tmp/fio
ioengine=libaio
#ioengine=rbd
#direct=1
bs=4k
#numjobs=8
iodepth=64
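
For reference, running the job is just a matter of pointing fio at the job
file; a minimal invocation (assuming the file above is saved as, say,
test.fio) would be:

    fio test.fio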


randread:
Jobs: 1 (f=1): [r] [100.0% done] [7532KB/0KB/0KB /s] [1883/0/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=2159: Wed Jul  1 06:53:42 2015
  read : io=131072KB, bw=7275.4KB/s, iops=1818, runt= 18016msec
    slat (usec): min=213, max=3338, avg=543.06, stdev=131.16
    clat (usec): min=4, max=41593, avg=34596.58, stdev=2776.77
     lat (usec): min=555, max=42211, avg=35141.07, stdev=2804.49
    clat percentiles (usec):
     |  1.00th=[28288],  5.00th=[30080], 10.00th=[31104], 20.00th=[32384],
     | 30.00th=[33024], 40.00th=[34048], 50.00th=[35072], 60.00th=[35584],
     | 70.00th=[36096], 80.00th=[37120], 90.00th=[37632], 95.00th=[38656],
     | 99.00th=[39680], 99.50th=[40192], 99.90th=[40704], 99.95th=[41216],
     | 99.99th=[41728]
    bw (KB  /s): min= 6744, max= 7648, per=99.85%, avg=7264.44, stdev=215.44
    lat (usec) : 10=0.01%, 750=0.01%
    lat (msec) : 2=0.01%, 4=0.02%, 10=0.05%, 20=0.08%, 50=99.84%
  cpu          : usr=2.25%, sys=8.79%, ctx=63468, majf=0, minf=491
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=7275KB/s, minb=7275KB/s, maxb=7275KB/s, mint=18016msec, maxt=18016msec

read:
  read : io=131072KB, bw=537180KB/s, iops=134295, runt=   244msec
    slat (usec): min=1, max=41811, avg= 6.75, stdev=252.09
    clat (usec): min=1, max=41995, avg=463.46, stdev=1985.09
     lat (usec): min=3, max=41996, avg=470.26, stdev=2000.60
    clat percentiles (usec):
     |  1.00th=[  126],  5.00th=[  127], 10.00th=[  129], 20.00th=[  129],
     | 30.00th=[  131], 40.00th=[  131], 50.00th=[  133], 60.00th=[  135],
     | 70.00th=[  141], 80.00th=[  153], 90.00th=[  684], 95.00th=[ 2128],
     | 99.00th=[ 4320], 99.50th=[ 4576], 99.90th=[42240], 99.95th=[42240],
     | 99.99th=[42240]
    lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.04%
    lat (usec) : 100=0.07%, 250=84.49%, 500=4.10%, 750=1.41%, 1000=1.39%
    lat (msec) : 2=2.53%, 4=4.48%, 10=1.27%, 50=0.19%
  cpu          : usr=1.65%, sys=43.62%, ctx=121, majf=0, minf=71
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=537180KB/s, minb=537180KB/s, maxb=537180KB/s, mint=244msec, maxt=244msec
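
Note that direct=1 is commented out in the job file above, so the sequential
read numbers are most likely coming largely from the client page cache rather
than from the cluster. A variant of the same job with direct I/O enabled (the
only change being direct=1 uncommented) would be:

[test]
rw=read
size=128m
directory=/var/lib/nova/instances/tmp/fio
ioengine=libaio
direct=1
bs=4k
iodepth=64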

Hope this gives you some idea.

Br, T

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Mark Nelson
Sent: 1 July 2015 0:57
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Simple CephFS benchmark

Two popular benchmarks in the HPC space for testing distributed file systems
are IOR and mdtest.  Both use MPI to coordinate processes on different
clients.  Another option may be to use fio or iozone.  Netmist may also be
an option, but I haven't used it myself and I'm not sure that it's fully
open source.

If you are only interested in single-client tests vs a local disk, I'd
personally use fio.
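
As a rough single-client sketch (the paths here are placeholders, to be
pointed at your CephFS mount and a local filesystem respectively), the same
fio job can be run against both:

fio --name=cephfs-test --directory=/mnt/cephfs/fio --rw=randread --bs=4k \
    --size=128m --ioengine=libaio --iodepth=64 --direct=1
fio --name=local-test --directory=/tmp/fio --rw=randread --bs=4k \
    --size=128m --ioengine=libaio --iodepth=64 --direct=1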

Mark

On 06/30/2015 02:50 PM, Hadi Montakhabi wrote:
> I have set up a Ceph storage cluster and I'd like to use CephFS (I am
> assuming this is the only way one could use other code without going
> through the API).
> To do so, I have mounted CephFS on the client node.
> I'd like to know what would be a good benchmark for measuring write and
> read performance on CephFS compared to the local filesystem?
>
> Thanks,
> Hadi
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



