ceph block device IO seems slow? Did I get something wrong?

Hi,
        I attached a 500 GB block device to the VM and tested it inside the VM with "dd if=/dev/zero of=myfile bs=1M count=1024".
        The average write speed was about 31 MB/s. I expected roughly 100 MB/s,
        since the VM hypervisor has a 1 Gb NIC and the OSD host has a 10 Gb NIC.
        Did I get a wrong result? How can I make it faster?
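
        As an aside, a plain dd run like this can be skewed by the guest page cache, so the reported rate may not reflect what actually reaches the cluster. A variant of the same test that forces the data out to the image, assuming the same file path, would be something like:

        dd if=/dev/zero of=myfile bs=1M count=1024 oflag=direct       # bypass the guest page cache entirely
        dd if=/dev/zero of=myfile bs=1M count=1024 conv=fdatasync     # or flush everything before dd reports its rate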

Yours sincerely,

Michael.




[root@storage1 ~]# ceph -w
2014-03-16 17:24:44.596903 mon.0 [INF] pgmap v2245: 1127 pgs: 1127 active+clean; 8758 MB data, 100 GB used, 27749 GB / 29340 GB avail; 5059 kB/s wr, 1 op/s
2014-03-16 17:24:45.742589 mon.0 [INF] pgmap v2246: 1127 pgs: 1127 active+clean; 8826 MB data, 100 GB used, 27749 GB / 29340 GB avail; 21390 kB/s wr, 7 op/s
2014-03-16 17:24:46.864936 mon.0 [INF] pgmap v2247: 1127 pgs: 1127 active+clean; 8838 MB data, 100 GB used, 27749 GB / 29340 GB avail; 36789 kB/s wr, 13 op/s
2014-03-16 17:24:49.578711 mon.0 [INF] pgmap v2248: 1127 pgs: 1127 active+clean; 8869 MB data, 100 GB used, 27749 GB / 29340 GB avail; 11404 kB/s wr, 3 op/s
2014-03-16 17:24:50.824619 mon.0 [INF] pgmap v2249: 1127 pgs: 1127 active+clean; 8928 MB data, 100 GB used, 27749 GB / 29340 GB avail; 22972 kB/s wr, 7 op/s
2014-03-16 17:24:51.980126 mon.0 [INF] pgmap v2250: 1127 pgs: 1127 active+clean; 8933 MB data, 100 GB used, 27749 GB / 29340 GB avail; 28408 kB/s wr, 10 op/s
2014-03-16 17:24:54.603830 mon.0 [INF] pgmap v2251: 1127 pgs: 1127 active+clean; 8954 MB data, 100 GB used, 27749 GB / 29340 GB avail; 7090 kB/s wr, 2 op/s
2014-03-16 17:24:55.671644 mon.0 [INF] pgmap v2252: 1127 pgs: 1127 active+clean; 9034 MB data, 100 GB used, 27749 GB / 29340 GB avail; 27465 kB/s wr, 9 op/s
2014-03-16 17:24:57.057567 mon.0 [INF] pgmap v2253: 1127 pgs: 1127 active+clean; 9041 MB data, 100 GB used, 27749 GB / 29340 GB avail; 39638 kB/s wr, 13 op/s
2014-03-16 17:24:59.603449 mon.0 [INF] pgmap v2254: 1127 pgs: 1127 active+clean; 9057 MB data, 100 GB used, 27749 GB / 29340 GB avail; 6019 kB/s wr, 2 op/s
2014-03-16 17:25:00.671065 mon.0 [INF] pgmap v2255: 1127 pgs: 1127 active+clean; 9138 MB data, 100 GB used, 27749 GB / 29340 GB avail; 25646 kB/s wr, 9 op/s
2014-03-16 17:25:01.860269 mon.0 [INF] pgmap v2256: 1127 pgs: 1127 active+clean; 9146 MB data, 100 GB used, 27749 GB / 29340 GB avail; 40427 kB/s wr, 14 op/s
2014-03-16 17:25:04.561468 mon.0 [INF] pgmap v2257: 1127 pgs: 1127 active+clean; 9162 MB data, 100 GB used, 27749 GB / 29340 GB avail; 6298 kB/s wr, 2 op/s
2014-03-16 17:25:05.662565 mon.0 [INF] pgmap v2258: 1127 pgs: 1127 active+clean; 9274 MB data, 101 GB used, 27748 GB / 29340 GB avail; 34520 kB/s wr, 12 op/s
2014-03-16 17:25:06.851644 mon.0 [INF] pgmap v2259: 1127 pgs: 1127 active+clean; 9286 MB data, 101 GB used, 27748 GB / 29340 GB avail; 56598 kB/s wr, 19 op/s
2014-03-16 17:25:09.597428 mon.0 [INF] pgmap v2260: 1127 pgs: 1127 active+clean; 9322 MB data, 101 GB used, 27748 GB / 29340 GB avail; 12426 kB/s wr, 5 op/s
2014-03-16 17:25:10.765610 mon.0 [INF] pgmap v2261: 1127 pgs: 1127 active+clean; 9392 MB data, 101 GB used, 27748 GB / 29340 GB avail; 27569 kB/s wr, 13 op/s
2014-03-16 17:25:11.943055 mon.0 [INF] pgmap v2262: 1127 pgs: 1127 active+clean; 9392 MB data, 101 GB used, 27748 GB / 29340 GB avail; 31581 kB/s wr, 16 op/s
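
The write rates in the pgmap lines above hover around 5-55 MB/s, consistent with the ~31 MB/s average the guest reports. To see what the cluster can sustain on its own, independent of the VM and the 1 Gb hypervisor link, a rados bench run directly on the storage host is useful; a sketch, assuming the image lives in the default "rbd" pool:

[root@storage1 ~]# rados bench -p rbd 30 write --no-cleanup    # sustained write throughput straight into the pool
[root@storage1 ~]# rados bench -p rbd 30 seq                   # read the benchmark objects back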


[root@storage1 ~]# ceph -s
    cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
     health HEALTH_OK
     monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch 1, quorum 0 storage1
     osdmap e245: 16 osds: 16 up, 16 in
      pgmap v2273: 1127 pgs, 4 pools, 9393 MB data, 3607 objects
            101 GB used, 27748 GB / 29340 GB avail
                1127 active+clean
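
Note that every client write is replicated according to the pool's size, so the OSDs absorb a multiple of the 31 MB/s the guest sees. The replication factor can be confirmed with, again assuming the image is in the "rbd" pool:

[root@storage1 ~]# ceph osd pool get rbd size
[root@storage1 ~]# ceph osd dump | grep 'replicated size'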

[root@storage1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      16      root default
-2      16              host storage1
0       1                       osd.0   up      1
1       1                       osd.1   up      1
2       1                       osd.2   up      1
3       1                       osd.3   up      1
4       1                       osd.4   up      1
5       1                       osd.5   up      1
6       1                       osd.6   up      1
7       1                       osd.7   up      1
8       1                       osd.8   up      1
9       1                       osd.9   up      1
10      1                       osd.10  up      1
11      1                       osd.11  up      1
12      1                       osd.12  up      1
13      1                       osd.13  up      1
14      1                       osd.14  up      1
15      1                       osd.15  up      1
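
All 16 OSDs sit on the single host storage1, so replica traffic stays local and the interesting path is the one between the hypervisor and storage1. A 1 Gb NIC tops out around 110-120 MB/s on the wire, well above the 31 MB/s observed, so it is worth measuring the raw link; a sketch, assuming iperf is installed on both machines (the hypervisor prompt below is illustrative):

[root@storage1 ~]# iperf -s                        # listen on the storage host
[root@hypervisor ~]# iperf -c 193.168.1.100        # run from the hypervisor against the storage1 address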



