Pascal,
Here is my latest installation:

    cluster 204986f6-f43c-4199-b093-8f5c7bc641bb
     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 20/40 objects degraded (50.000%)
     monmap e1: 2 mons at {ceph02=192.168.33.142:6789/0,ceph03=192.168.33.143:6789/0}, election epoch 4, quorum 0,1 ceph02,ceph03
     mdsmap e4: 1/1/1 up {0=ceph02=up:active}
     osdmap e8: 2 osds: 2 up, 2 in
      pgmap v14: 192 pgs, 3 pools, 1884 bytes data, 20 objects
            68796 kB used, 6054 MB / 6121 MB avail
            20/40 objects degraded (50.000%)
                 192 active+degraded

host ceph01 - admin
host ceph02 - mon.ceph02 + osd.1 (sdb, 8G) + mds
host ceph03 - mon.ceph03 + osd.0 (sdb, 8G)

$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
-2      0               host ceph03
0       0                       osd.0   up      1
-3      0               host ceph02
1       0                       osd.1   up      1

$ ceph osd dump
epoch 8
fsid 204986f6-f43c-4199-b093-8f5c7bc641bb
created 2014-10-15 13:39:05.986977
modified 2014-10-15 13:40:45.644870
flags
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
max_osd 2
osd.0 up   in  weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval [0,0) 192.168.33.143:6800/2284 192.168.33.143:6801/2284 192.168.33.143:6802/2284 192.168.33.143:6803/2284 exists,up dccd6b99-1885-4c62-864b-107bd9ba0d84
osd.1 up   in  weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.33.142:6800/2399 192.168.33.142:6801/2399 192.168.33.142:6802/2399 192.168.33.142:6803/2399 exists,up 4d4adf4b-ae8e-4e26-8667-c952c7fc4e45

Thanks,
Roman
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com