Hi cephers, I tried to use the following command to create an image, but unfortunately the command hung for a long time until I interrupted it with Ctrl-Z:
rbd -p hello create img-003 --size 512
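(For what it's worth, --size is in MB, so this should only be a 512 MB image: 512 x 2^20 = 536870912 bytes, which matches the size = 536870912 in the librbd debug output further down.)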
So I checked the cluster status, and it showed:
    cluster 0379cebd-b546-4954-b5d6-e13d08b7d2f1
     health HEALTH_WARN
            2 near full osd(s)
            too many PGs per OSD (320 > max 300)
     monmap e2: 1 mons at {vl=192.168.90.253:6789/0}
            election epoch 1, quorum 0 vl
     osdmap e37: 2 osds: 2 up, 2 in
      pgmap v19544: 320 pgs, 3 pools, 12054 MB data, 3588 objects
            657 GB used, 21867 MB / 714 GB avail
                 320 active+clean
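I also captured a client-side debug log of the hanging create. Roughly, the invocation was something like the line below (the exact debug flags are reconstructed from the log levels shown in the output, so treat it as a sketch):

rbd -p hello create img-003 --size 512 --debug-ms=1 --debug-rbd=20

This is what it logged before I stopped it: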
2015-11-12 22:52:44.687491 7f89eced9780 20 librbd: create 0x7fff8f7b7800 name = img-003 size = 536870912 old_format = 1 features = 0 order = 22 stripe_unit = 0 stripe_count = 0
2015-11-12 22:52:44.687653 7f89eced9780 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6800/5472 -- osd_op(client.34321.0:1 img-003.rbd [stat] 2.8a047315 ack+read+known_if_redirected e37) v5 -- ?+0 0x28513d0 con 0x2850000
2015-11-12 22:52:44.688928 7f89e066b700 1 -- 192.168.90.253:0/1006121 <== osd.1 192.168.90.253:6800/5472 1 ==== osd_op_reply(1 img-003.rbd [stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 178+0+0 (3550830125 0 0) 0x7f89c0000ae0 con 0x2850000
2015-11-12 22:52:44.689090 7f89eced9780 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- osd_op(client.34321.0:2 rbd_id.img-003 [stat] 2.638c75a8 ack+read+known_if_redirected e37) v5 -- ?+0 0x2858330 con 0x2856f50
2015-11-12 22:52:44.690425 7f89e0469700 1 -- 192.168.90.253:0/1006121 <== osd.0 192.168.90.253:6801/5344 1 ==== osd_op_reply(2 rbd_id.img-003 [stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 181+0+0 (1202435393 0 0) 0x7f89b8000ae0 con 0x2856f50
2015-11-12 22:52:44.690494 7f89eced9780 2 librbd: adding rbd image to directory...
2015-11-12 22:52:44.690544 7f89eced9780 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- osd_op(client.34321.0:3 rbd_directory [tmapup 0~0] 2.30a98c1c ondisk+write+known_if_redirected e37) v5 -- ?+0 0x2858920 con 0x2856f50
2015-11-12 22:52:59.687447 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6789/0 -- mon_subscribe({monmap=3+,osdmap=38}) v2 -- ?+0 0x7f89b0000ab0 con 0x2843b90
2015-11-12 22:52:59.687472 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000f40 con 0x2856f50
2015-11-12 22:52:59.687887 7f89e3873700 1 -- 192.168.90.253:0/1006121 <== mon.0 192.168.90.253:6789/0 11 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (2867606018 0 0) 0x7f89d8001160 con 0x2843b90
2015-11-12 22:53:04.687593 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:09.687731 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:14.687844 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:19.687978 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:24.688116 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:29.688253 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:34.688389 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:39.688512 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
2015-11-12 22:53:44.688636 7f89e4074700 1 -- 192.168.90.253:0/1006121 --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89b0000ab0 con 0x2856f50
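As far as I can read this, the two initial stat reads come back immediately (the ENOENT replies are expected for a brand-new image), but the first write, the tmapup on rbd_directory sent to osd.0 at 22:52:44, never gets a reply; the client just keeps pinging that OSD every five seconds. I suspect the near full OSDs are what is blocking the write, but I'm not sure. If it helps I can also post more space details, e.g. (assuming these commands are available on my release):

ceph health detail
ceph df
ceph osd df

Any idea what is blocking the create, or how to get the cluster writable again?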