Re: ceph 9.2.0 SAMSUNG ssd performance issue?

"op_w_latency":      

     "avgcount": 42991,

      "sum": 402.804741329


402.0/42991

0.009350794352306296


~9ms latency, that means this ssd not suitable for journal device?



 "osd": {

        "op_wip": 0,

        "op": 58683,

        "op_in_bytes": 7309042294,

        "op_out_bytes": 507137488,

        "op_latency": {

            "avgcount": 58683,

            "sum": 484.302231121

        },

        "op_process_latency": {

            "avgcount": 58683,

            "sum": 323.332046552

        },

        "op_r": 902,

        "op_r_out_bytes": 507137488,

        "op_r_latency": {

            "avgcount": 902,

            "sum": 0.793759596

        },

        "op_r_process_latency": {

            "avgcount": 902,

            "sum": 0.619918138

        },

        "op_w": 42991,

        "op_w_in_bytes": 7092142080,

        "op_w_rlat": {

            "avgcount": 38738,

            "sum": 334.643717526

        },

        "op_w_latency": {

            "avgcount": 42991,

            "sum": 402.804741329

        },

        "op_w_process_latency": {

            "avgcount": 42991,

            "sum": 260.489972416

        },

...
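
In case anyone wants to reproduce the averages above, here is a quick sketch of how to pull them straight from the OSD admin socket. This assumes the admin socket is reachable on the OSD host; "osd.0" is just a placeholder for the actual OSD id:

# Sketch: compute average op latencies from perf counters (osd.0 is a placeholder)
ceph daemon osd.0 perf dump | python -c '
import json, sys
osd = json.load(sys.stdin)["osd"]
for key in ("op_latency", "op_r_latency", "op_w_latency"):
    c = osd[key]
    avg = c["sum"] / c["avgcount"] if c["avgcount"] else 0.0
    print("%s: %.1f ms avg" % (key, avg * 1000.0))
'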



2016-02-12 15:56 GMT+08:00 Irek Fasikhov <malmyzh@xxxxxxxxx>:

Best regards, Irek Fasikhov
Mobile: +79229045757

2016-02-12 10:41 GMT+03:00 Huan Zhang <huan.zhang.jn@xxxxxxxxx>:
Hi,

Ceph is VERY SLOW with 24 OSDs (SAMSUNG SSDs):
fio /dev/rbd0 iodepth=1 direct=1   IOPS only ~200
fio /dev/rbd0 iodepth=32 direct=1  IOPS only ~3000

But testing a single SSD device with fio:
fio iodepth=1 direct=1   IOPS ~15000
fio iodepth=32 direct=1  IOPS ~30000
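
For reference, the abbreviated lines above correspond to invocations along these lines (a sketch only; the block size, I/O engine, job names, and runtime are my assumptions, not confirmed from the original runs):

# Hypothetical reconstruction of the RBD test (4k random writes assumed)
fio --name=rbdtest --filename=/dev/rbd0 --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based

# Hypothetical reconstruction of the raw-SSD test (device name assumed)
fio --name=ssdtest --filename=/dev/sdb --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based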

Why is Ceph SO SLOW? Could you give me some help?
Appreciated!


My Environment:
[root@szcrh-controller ~]# ceph -s
    cluster eb26a8b9-e937-4e56-a273-7166ffaa832e
     health HEALTH_WARN
            1 mons down, quorum 0,1,2,3,4 ceph01,ceph02,ceph03,ceph04,ceph05
     monmap e1: 6 mons at {ceph01=
10.10.204.144:6789/0,ceph02=10.10.204.145:6789/0,ceph03=10.10.204.146:6789/0,ceph04=10.10.204.147:6789/0,ceph05=10.10.204.148:6789/0,ceph06=0.0.0.0:0/5
}
            election epoch 6, quorum 0,1,2,3,4
ceph01,ceph02,ceph03,ceph04,ceph05
     osdmap e114: 24 osds: 24 up, 24 in
            flags sortbitwise
      pgmap v2213: 1864 pgs, 3 pools, 49181 MB data, 4485 objects
            144 GB used, 42638 GB / 42782 GB avail
                1864 active+clean

[root@ceph03 ~]# lsscsi
[0:0:6:0]    disk    ATA      SAMSUNG MZ7KM1T9 003Q  /dev/sda
[0:0:7:0]    disk    ATA      SAMSUNG MZ7KM1T9 003Q  /dev/sdb
[0:0:8:0]    disk    ATA      SAMSUNG MZ7KM1T9 003Q  /dev/sdc
[0:0:9:0]    disk    ATA      SAMSUNG MZ7KM1T9 003Q  /dev/sdd


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
