Thanks Nick,
filestore->journal_latency: ~1.2ms
214.0/180611
0.0011848669239415096
Seems the SSD journal write is OK, so any other ideas are highly appreciated! (A quick sketch for pulling these averages from a live OSD follows the counter dump below.)
"filestore": {
"journal_queue_max_ops": 300,
"journal_queue_ops": 0,
"journal_ops": 180611,
"journal_queue_max_bytes": 33554432,
"journal_queue_bytes": 0,
"journal_bytes": 32637888155,
"journal_latency": {
"avgcount": 180611,
"sum": 214.095788552
},
"journal_wr": 176801,
"journal_wr_bytes": {
"avgcount": 176801,
"sum": 33122885632
},
"journal_full": 0,
"committing": 0,
"commitcycle": 14648,
"commitcycle_interval": {
"avgcount": 14648,
"sum": 73299.187956076
},
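
For what it's worth, a minimal sketch of how these averages can be pulled from a live OSD over the admin socket (osd.0 is just an example id; the counter paths match the dump above; each latency counter is simply "sum" seconds divided by "avgcount"):

    import json, subprocess

    # Grab the perf counters from the OSD admin socket; this returns the
    # same JSON structure as the dump pasted above.
    raw = subprocess.check_output(["ceph", "daemon", "osd.0", "perf", "dump"])
    perf = json.loads(raw)

    def avg_ms(counter):
        # A latency counter is {"avgcount": N, "sum": seconds};
        # the average is sum/avgcount, converted here to milliseconds.
        if counter["avgcount"] == 0:
            return 0.0
        return counter["sum"] / counter["avgcount"] * 1000.0

    print("journal_latency: %.2f ms" % avg_ms(perf["filestore"]["journal_latency"]))
    print("op_w_latency:    %.2f ms" % avg_ms(perf["osd"]["op_w_latency"]))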
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Huan Zhang
> Sent: 12 February 2016 10:00
> To: Irek Fasikhov <malmyzh@xxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxx>
> Subject: Re: ceph 9.2.0 SAMSUNG ssd performance issue?
>
> "op_w_latency":
> "avgcount": 42991,
> "sum": 402.804741329
>
> 402.0/42991
> 0.009350794352306296
>
> ~9ms latency; does that mean this SSD is not suitable as a journal device?
I believe that counter includes lots of other operations in the OSD, including the journal write. If you want pure journal stats, I would look under the Filestore->journal_latency counter.
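For scale, from the figures quoted here: 402.804741329s / 42991 ops ≈ 9.4ms average end-to-end write latency, versus 214.095788552s / 180611 ops ≈ 1.2ms for the journal write alone. So roughly 8ms per write is being spent elsewhere in the OSD (presumably queueing, PG processing and waiting on replicas) rather than on the SSD itself.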
>
>
> "osd": {
> "op_wip": 0,
> "op": 58683,
> "op_in_bytes": 7309042294,
> "op_out_bytes": 507137488,
> "op_latency": {
> "avgcount": 58683,
> "sum": 484.302231121
> },
> "op_process_latency": {
> "avgcount": 58683,
> "sum": 323.332046552
> },
> "op_r": 902,
> "op_r_out_bytes": 507137488,
> "op_r_latency": {
> "avgcount": 902,
> "sum": 0.793759596
> },
> "op_r_process_latency": {
> "avgcount": 902,
> "sum": 0.619918138
> },
> "op_w": 42991,
> "op_w_in_bytes": 7092142080,
> "op_w_rlat": {
> "avgcount": 38738,
> "sum": 334.643717526
> },
> "op_w_latency": {
> "avgcount": 42991,
> "sum": 402.804741329
> },
> "op_w_process_latency": {
> "avgcount": 42991,
> "sum": 260.489972416
> },
> ...
>
>
> 2016-02-12 15:56 GMT+08:00 Irek Fasikhov <malmyzh@xxxxxxxxx>:
> Hi.
> You need to read: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
>
>
> Best regards, Irek Fasikhov
> Mob.: +79229045757
>
> 2016-02-12 10:41 GMT+03:00 Huan Zhang <huan.zhang.jn@xxxxxxxxx>:
> Hi,
>
> Ceph is VERY SLOW with 24 OSDs (SAMSUNG SSDs):
> fio /dev/rbd0 iodepth=1 direct=1: only ~200 IOPS
> fio /dev/rbd0 iodepth=32 direct=1: only ~3000 IOPS
>
> But testing a single SSD device with fio:
> fio iodepth=1 direct=1: ~15000 IOPS
> fio iodepth=32 direct=1: ~30000 IOPS
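> (The exact fio command lines weren't posted; a typical invocation
> matching those parameters, with 4k random writes assumed, would be
> something like:
>
>     fio --filename=/dev/rbd0 --direct=1 --rw=randwrite --bs=4k \
>         --ioengine=libaio --iodepth=1 --runtime=60 --time_based \
>         --name=rbd-test
>
> with --iodepth=32 for the second run.)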
>
> Why is Ceph SO SLOW? Could you give me some help?
> Appreciated!
>
>
> My Environment:
> [root@szcrh-controller ~]# ceph -s
> cluster eb26a8b9-e937-4e56-a273-7166ffaa832e
> health HEALTH_WARN
> 1 mons down, quorum 0,1,2,3,4 ceph01,ceph02,ceph03,ceph04,ceph05
> monmap e1: 6 mons at {ceph01=10.10.204.144:6789/0,ceph02=10.10.204.145:6789/0,ceph03=10.10.204.146:6789/0,ceph04=10.10.204.147:6789/0,ceph05=10.10.204.148:6789/0,ceph06=0.0.0.0:0/5}
> election epoch 6, quorum 0,1,2,3,4 ceph01,ceph02,ceph03,ceph04,ceph05
> osdmap e114: 24 osds: 24 up, 24 in
> flags sortbitwise
> pgmap v2213: 1864 pgs, 3 pools, 49181 MB data, 4485 objects
> 144 GB used, 42638 GB / 42782 GB avail
> 1864 active+clean
>
> [root@ceph03 ~]# lsscsi
> [0:0:6:0] disk ATA SAMSUNG MZ7KM1T9 003Q /dev/sda
> [0:0:7:0] disk ATA SAMSUNG MZ7KM1T9 003Q /dev/sdb
> [0:0:8:0] disk ATA SAMSUNG MZ7KM1T9 003Q /dev/sdc
> [0:0:9:0] disk ATA SAMSUNG MZ7KM1T9 003Q /dev/sdd
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>