Re: benchmark Ceph

What is your Ceph version? From the test results you posted, your environment's performance is okay given your setup, but there are definitely things that can be tuned to get you better numbers.
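
If you are not sure, a quick way to check (a minimal sketch, assuming you have admin access on a mon node) is:

# ceph versions     # versions reported by all running mon/mgr/osd daemons
# ceph --version    # version of the locally installed ceph binaries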


I normally use top, iostat, pidstat, vmstat, dstat, iperf3, blktrace, netmon, and the Ceph admin socket to monitor system stats.
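For example, while fio is running I would watch something like the following on the OSD nodes (a rough sketch; osd.0 and the peer hostname are placeholders for your own OSD IDs and nodes, and iperf3 needs "iperf3 -s" running on the peer side):

# iostat -x 1                    # per-device utilization, queue size and await
# pidstat -t 1                   # per-thread CPU usage, including ceph-osd threads
# ceph daemon osd.0 perf dump    # internal OSD counters via the admin socket
# iperf3 -c <peer-node>          # raw network throughput between nodes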
------------------ Original ------------------
From: &nbsp;"Tony Liu";<tonyliu0592@xxxxxxxxxxx&gt;;
Date: &nbsp;Sep 15, 2020
To: &nbsp;"rainning"<tweetypie@xxxxxx&gt;; "ceph-users"<ceph-users@xxxxxxx&gt;; 

Subject: &nbsp; Re: benchmark Ceph



Here are the test results from inside the VM.
================
# fio --name=test --ioengine=libaio --numjobs=1 --runtime=30 \
      --direct=1 --size=2G --end_fsync=1 \
      --rw=read --bs=4K --iodepth=1
test: (groupid=0, jobs=1): err= 0: pid=14615: Mon Sep 14 21:50:55 2020
   read: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(376MiB/30001msec)
    slat (usec): min=3, max=162, avg= 6.91, stdev= 4.74
    clat (usec): min=85, max=17366, avg=303.17, stdev=639.42
     lat (usec): min=161, max=17373, avg=310.38, stdev=639.93
    clat percentiles (usec):
     |  1.00th=[  167],  5.00th=[  172], 10.00th=[  176], 20.00th=[  182],
     | 30.00th=[  188], 40.00th=[  194], 50.00th=[  204], 60.00th=[  221],
     | 70.00th=[  239], 80.00th=[  277], 90.00th=[  359], 95.00th=[  461],
     | 99.00th=[ 3130], 99.50th=[ 5735], 99.90th=[ 8094], 99.95th=[11338],
     | 99.99th=[14091]
   bw (  KiB/s): min= 9688, max=15120, per=99.87%, avg=12820.51, stdev=1001.88, samples=59
   iops        : min= 2422, max= 3780, avg=3205.12, stdev=250.47, samples=59
  lat (usec)   : 100=0.01%, 250=74.99%, 500=20.76%, 750=2.21%, 1000=0.50%
  lat (msec)   : 2=0.39%, 4=0.27%, 10=0.81%, 20=0.06%
  cpu          : usr=0.65%, sys=3.06%, ctx=96287, majf=0, minf=13
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=96287,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=376MiB (394MB), run=30001-30001msec

Disk stats (read/write):
  vda: ios=95957/2, merge=0/0, ticks=29225/12, in_queue=6027, util=82.52%
================
================
# fio --name=test --ioengine=libaio --numjobs=1 --runtime=30 \
      --direct=1 --size=2G --end_fsync=1 \
      --rw=write --bs=4K --iodepth=1
test: (groupid=0, jobs=1): err= 0: pid=14619: Mon Sep 14 21:52:04 2020
  write: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(1917MiB/30074msec)
    slat (usec): min=3, max=182, avg= 5.94, stdev= 1.30
    clat (usec): min=11, max=5234, avg=54.08, stdev=18.58
     lat (usec): min=35, max=5254, avg=60.26, stdev=18.80
    clat percentiles (usec):
     |  1.00th=[   36],  5.00th=[   38], 10.00th=[   40], 20.00th=[   46],
     | 30.00th=[   48], 40.00th=[   50], 50.00th=[   53], 60.00th=[   56],
     | 70.00th=[   59], 80.00th=[   63], 90.00th=[   67], 95.00th=[   71],
     | 99.00th=[   85], 99.50th=[  100], 99.90th=[  289], 99.95th=[  355],
     | 99.99th=[  412]
   bw (  KiB/s): min=59640, max=80982, per=100.00%, avg=65462.25, stdev=7166.81, samples=59
   iops        : min=14910, max=20245, avg=16365.54, stdev=1791.69, samples=59
  lat (usec)   : 20=0.01%, 50=39.85%, 100=59.65%, 250=0.36%, 500=0.14%
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=2.10%, sys=11.63%, ctx=490639, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,490635,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=1917MiB (2010MB), run=30074-30074msec

Disk stats (read/write):
  vda: ios=9/490639, merge=0/0, ticks=26/27102, in_queue=184, util=99.36%
================
Both networking and storage workloads are light.
Which system stats should I monitor?

Thanks!
Tony
> -----Original Message-----
> From: rainning <tweetypie@xxxxxx>
> Sent: Monday, September 14, 2020 8:39 PM
> To: Tony Liu <tonyliu0592@xxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
> Subject: Re: benchmark Ceph
> 
> Can you post the fio results with ioengine set to libaio? From what you
> posted, it seems to me that the read test hit the cache, and the write
> performance was not good: the latency was too high (~35.4ms) while numjobs
> and iodepth were both 1. Did you monitor system stats on both sides
> (VM/compute node and cluster)?
> 
> ------------------ Original ------------------
> From: "Tony Liu" <tonyliu0592@xxxxxxxxxxx>
> Date: Sep 15, 2020
> To: "ceph-users" <ceph-users@xxxxxxx>
> 
> Subject: benchmark Ceph
> 
> Hi,
> 
> I have a 3-OSD-node Ceph cluster, with 1 x 480GB SSD and 8 x 2TB 12Gbps SAS
> HDDs on each node, providing storage to an OpenStack cluster. Both public
> and cluster networks are 2x10G. The WAL and DB of each OSD are on the SSD,
> and they all share the same 60GB partition.
> 
> I run fio with different combinations of operation, block size and iodepth
> to collect IOPS, bandwidth and latency. I tried fio on the compute node with
> ioengine=rbd, and also fio within a VM (backed by Ceph) with ioengine=libaio.
> 
> The results don't seem good. Here are a couple of examples.
> ====================================
> fio --name=test --ioengine=rbd --clientname=admin \
>     --pool=benchmark --rbdname=test --numjobs=1 \
>     --runtime=30 --direct=1 --size=2G \
>     --rw=read --bs=4k --iodepth=1
> 
> test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
> fio-3.7
> Starting 1 process
> Jobs: 1 (f=0): [f(1)][100.0%][r=27.6MiB/s,w=0KiB/s][r=7075,w=0 IOPS][eta 00m:00s]
> test: (groupid=0, jobs=1): err= 0: pid=56310: Mon Sep 14 19:01:24 2020
>    read: IOPS=7610, BW=29.7MiB/s (31.2MB/s)(892MiB/30001msec)
>     slat (nsec): min=1550, max=57662, avg=3312.74, stdev=2981.42
>     clat (usec): min=77, max=4799, avg=127.39, stdev=39.88
>      lat (usec): min=78, max=4812, avg=130.70, stdev=40.67
>     clat percentiles (usec):
>      |  1.00th=[   82],  5.00th=[   86], 10.00th=[   95], 20.00th=[   98],
>      | 30.00th=[  100], 40.00th=[  104], 50.00th=[  116], 60.00th=[  129],
>      | 70.00th=[  141], 80.00th=[  157], 90.00th=[  182], 95.00th=[  198],
>      | 99.00th=[  233], 99.50th=[  245], 99.90th=[  359], 99.95th=[  515],
>      | 99.99th=[  709]
>    bw (  KiB/s): min=27160, max=40696, per=100.00%, avg=30474.29, stdev=2826.23, samples=59
>    iops        : min= 6790, max=10174, avg=7618.56, stdev=706.56, samples=59
>   lat (usec)   : 100=28.89%, 250=70.72%, 500=0.34%, 750=0.05%, 1000=0.01%
>   lat (msec)   : 2=0.01%, 10=0.01%
>   cpu          : usr=3.55%, sys=3.80%, ctx=228358, majf=0, minf=29
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued rwts: total=228333,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>      latency   : target=0, window=0, percentile=100.00%, depth=1
> 
> Run status group 0 (all jobs):
>    READ: bw=29.7MiB/s (31.2MB/s), 29.7MiB/s-29.7MiB/s (31.2MB/s-31.2MB/s), io=892MiB (935MB), run=30001-30001msec
> 
> Disk stats (read/write):
>     dm-0: ios=290/3, merge=0/0, ticks=2427/19, in_queue=2446, util=0.95%, aggrios=290/4, aggrmerge=0/0, aggrticks=2427/39, aggrin_queue=2332, aggrutil=0.95%
>   sda: ios=290/4, merge=0/0, ticks=2427/39, in_queue=2332, util=0.95%
> ====================================
> ====================================
> fio --name=test --ioengine=rbd --clientname=admin \
>     --pool=benchmark --rbdname=test --numjobs=1 \
>     --runtime=30 --direct=1 --size=2G \
>     --rw=write --bs=4k --iodepth=1
> 
> test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
> fio-3.7
> Starting 1 process
> Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=6352KiB/s][r=0,w=1588 IOPS][eta 00m:00s]
> test: (groupid=0, jobs=1): err= 0: pid=56544: Mon Sep 14 19:03:36 2020
>   write: IOPS=1604, BW=6417KiB/s (6571kB/s)(188MiB/30003msec)
>     slat (nsec): min=2240, max=45925, avg=6526.95, stdev=3486.19
>     clat (usec): min=399, max=35411, avg=615.88, stdev=231.41
>      lat (usec): min=402, max=35421, avg=622.40, stdev=232.08
>     clat percentiles (usec):
>      |  1.00th=[  420],  5.00th=[  449], 10.00th=[  469], 20.00th=[  498],
>      | 30.00th=[  529], 40.00th=[  562], 50.00th=[  611], 60.00th=[  652],
>      | 70.00th=[  685], 80.00th=[  709], 90.00th=[  766], 95.00th=[  799],
>      | 99.00th=[  881], 99.50th=[  955], 99.90th=[ 2671], 99.95th=[ 3097],
>      | 99.99th=[ 3785]
>    bw (  KiB/s): min= 5944, max= 6792, per=100.00%, avg=6415.95, stdev=178.72, samples=60
>    iops        : min= 1486, max= 1698, avg=1603.93, stdev=44.67, samples=60
>   lat (usec)   : 500=20.82%, 750=67.23%, 1000=11.55%
>   lat (msec)   : 2=0.25%, 4=0.14%, 10=0.01%, 20=0.01%, 50=0.01%
>   cpu          : usr=1.22%, sys=1.25%, ctx=48143, majf=0, minf=18
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued rwts: total=0,48129,0,0 short=0,0,0,0 dropped=0,0,0,0
>      latency   : target=0, window=0, percentile=100.00%, depth=1
> 
> Run status group 0 (all jobs):
>   WRITE: bw=6417KiB/s (6571kB/s), 6417KiB/s-6417KiB/s (6571kB/s-6571kB/s), io=188MiB (197MB), run=30003-30003msec
> 
> Disk stats (read/write):
>     dm-0: ios=31/2, merge=0/0, ticks=342/14, in_queue=356, util=0.12%, aggrios=33/3, aggrmerge=0/0, aggrticks=390/27, aggrin_queue=404, aggrutil=0.13%
>   sda: ios=33/3, merge=0/0, ticks=390/27, in_queue=404, util=0.13%
> ====================================
> 
> Does that make sense? How do you benchmark your Ceph cluster?
> I'd appreciate it if you could share your experiences here.
> 
> Thanks!
> Tony
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


