Re: Ceph 0.94.5 with accelio

Thanks a lot for the quick update, Greg. This leads me to ask whether there's anything out there to improve Ceph performance in an InfiniBand environment. In the cluster I mentioned earlier I've set up 4 OSD server nodes, each with 8 OSD daemons backed by 800GB Intel SSD DC S3710 disks (740.2G for the OSD and 5G for the journal), with IB FDR 56Gb/s for both the public (PUB) and cluster (CLUS) networks, and I'm getting the following fio numbers:


# fio --rw=randread --bs=1m --numjobs=4 --iodepth=32 --runtime=22 --time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --group_reporting --exitall --name dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec --filename=/mnt/rbd/test1
dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec: (g=0): rw=randread, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec: (g=0): rw=randread, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
fio-2.1.3
Starting 4 processes
dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 4 (f=4): [rrrr] [33.8% done] [1082MB/0KB/0KB /s] [1081/0/0 iops] [eta 00m:45s]
dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec: (groupid=0, jobs=4): err= 0: pid=63852: Mon Nov 23 10:48:07 2015
  read : io=21899MB, bw=988.23MB/s, iops=988, runt= 22160msec
    slat (usec): min=192, max=186274, avg=3990.48, stdev=7533.77
    clat (usec): min=10, max=808610, avg=125099.41, stdev=90717.56
     lat (msec): min=6, max=809, avg=129.09, stdev=91.14
    clat percentiles (msec):
     |  1.00th=[   27],  5.00th=[   38], 10.00th=[   45], 20.00th=[   61],
     | 30.00th=[   74], 40.00th=[   85], 50.00th=[  100], 60.00th=[  117],
     | 70.00th=[  141], 80.00th=[  174], 90.00th=[  235], 95.00th=[  297],
     | 99.00th=[  482], 99.50th=[  578], 99.90th=[  717], 99.95th=[  750],
     | 99.99th=[  775]
    bw (KB  /s): min=134691, max=335872, per=25.08%, avg=253748.08, stdev=40454.88
    lat (usec) : 20=0.01%
    lat (msec) : 10=0.02%, 20=0.27%, 50=12.90%, 100=36.93%, 250=41.39%
    lat (msec) : 500=7.59%, 750=0.84%, 1000=0.05%
  cpu          : usr=0.11%, sys=26.76%, ctx=39695, majf=0, minf=405
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=99.4%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=21899/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=21899MB, aggrb=988.23MB/s, minb=988.23MB/s, maxb=988.23MB/s, mint=22160msec, maxt=22160msec

Disk stats (read/write):
  rbd1: ios=43736/163, merge=0/5, ticks=3189484/15276, in_queue=3214988, util=99.78%
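
For context, a quick back-of-the-envelope on that 1M run (assuming the reads spread evenly across the 32 OSDs, and taking ~6.8 GB/s as the usable rate of a single FDR 4x link):

# Back-of-the-envelope check on the 1M randread numbers above.
# Assumes an even spread across all 32 OSDs and ~6.8 GB/s usable per
# FDR 4x link (56 Gb/s signalling, 64/66b encoding).
aggregate_mb_s = 988.23              # aggrb reported by fio above
osds = 4 * 8                         # 4 OSD nodes x 8 OSD daemons each
fdr_link_mb_s = 56e3 * 64 / 66 / 8   # ~6790 MB/s usable per FDR 4x link

per_osd_mb_s = aggregate_mb_s / osds              # ~31 MB/s per OSD
link_utilisation = aggregate_mb_s / fdr_link_mb_s # ~15% of one FDR link

print(f"~{per_osd_mb_s:.0f} MB/s per OSD, ~{link_utilisation:.0%} of one FDR link")

So roughly 31 MB/s per OSD and about 15% of one FDR link, which suggests the client-side link itself isn't the bottleneck here.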


############################################################################################################################################################


# fio --rw=randread --bs=4m --numjobs=4 --iodepth=32 --runtime=22 --time_based --size=16777216k --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --group_reporting --exitall --name dev-ceph-randread-4m-4thr-libaio-32iodepth-22sec --filename=/mnt/rbd/test2

fio-2.1.3
Starting 4 processes
dev-ceph-randread-4m-4thr-libaio-32iodepth-22sec: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 4 (f=4): [rrrr] [28.7% done] [894.3MB/0KB/0KB /s] [223/0/0 iops] [eta 00m:57s]
dev-ceph-randread-4m-4thr-libaio-32iodepth-22sec: (groupid=0, jobs=4): err= 0: pid=64654: Mon Nov 23 10:51:58 2015
  read : io=18952MB, bw=876868KB/s, iops=214, runt= 22132msec
    slat (usec): min=518, max=81398, avg=18576.88, stdev=14840.55
    clat (msec): min=90, max=1915, avg=570.37, stdev=166.51
     lat (msec): min=123, max=1936, avg=588.95, stdev=169.19
    clat percentiles (msec):
     |  1.00th=[  258],  5.00th=[  343], 10.00th=[  383], 20.00th=[  437],
     | 30.00th=[  482], 40.00th=[  519], 50.00th=[  553], 60.00th=[  594],
     | 70.00th=[  627], 80.00th=[  685], 90.00th=[  775], 95.00th=[  865],
     | 99.00th=[ 1057], 99.50th=[ 1156], 99.90th=[ 1680], 99.95th=[ 1860],
     | 99.99th=[ 1909]
    bw (KB  /s): min= 5665, max=383251, per=24.61%, avg=215755.74, stdev=61735.70
    lat (msec) : 100=0.02%, 250=0.80%, 500=33.88%, 750=53.31%, 1000=10.26%
    lat (msec) : 2000=1.73%
  cpu          : usr=0.07%, sys=12.52%, ctx=32466, majf=0, minf=372
  IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.4%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=4738/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=18952MB, aggrb=876868KB/s, minb=876868KB/s, maxb=876868KB/s, mint=22132msec, maxt=22132msec

Disk stats (read/write):
  rbd1: ios=37721/177, merge=0/5, ticks=3075924/11408, in_queue=3097448, util=99.77%
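
In case anyone wants to reproduce the runs, the two command lines above can be folded into a single job file (a sketch with the same options in fio 2.x job-file syntax; the stonewall makes the 4M job start only after the 1M job finishes):

; same options as the two command lines above
[global]
rw=randread
numjobs=4
iodepth=32
runtime=22
time_based
size=16777216k
loops=1
ioengine=libaio
direct=1
invalidate=1
fsync_on_close=1
randrepeat=1
norandommap
group_reporting
exitall

[dev-ceph-randread-1m-4thr-libaio-32iodepth-22sec]
bs=1m
filename=/mnt/rbd/test1

[dev-ceph-randread-4m-4thr-libaio-32iodepth-22sec]
stonewall
bs=4m
filename=/mnt/rbd/test2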


Can anyone share some results from a similar environment?
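
For reference, the PUB/CLUS split is just the standard ceph.conf network settings, roughly like this (the subnets below are placeholders, not my real ones):

[global]
; both the public (PUB) and cluster (CLUS) networks run over the IB FDR fabric
; placeholder subnets -- substitute your own
public network  = 192.168.10.0/24
cluster network = 192.168.20.0/24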

Thanks in advance,

Best,

German

2015-11-23 13:08 GMT-03:00 Gregory Farnum <gfarnum@xxxxxxxxxx>:
On Mon, Nov 23, 2015 at 10:05 AM, German Anders <ganders@xxxxxxxxxxxx> wrote:
> Hi all,
>
> I want to know if there's any improvement or update regarding Ceph 0.94.5
> with accelio. I have an already configured cluster (with no data on it) and I
> would like to know if there's a way to 'modify' the cluster in order to use
> accelio. Any info would be really appreciated.

The XioMessenger is still experimental. As far as I know it's not
expected to be stable any time soon and I can't imagine it will be
backported to Hammer even when done.
-Greg

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
