Hi Beanos:
So you have 3 OSD servers, and each of them has 2 disks.
I have a question: what is the result of "ceph osd tree"? It looks like the OSD status may be "down".
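For reference, "ceph osd tree" prints the CRUSH hierarchy with a weight and an up/down state for each OSD. On a 3-node, 6-OSD cluster a healthy tree usually looks roughly like the sketch below (the IDs, hostnames, and weights here are placeholders, not values from your cluster):

ceph@ceph-node1:~$ ceph osd tree
# id    weight  type name               up/down reweight
-1      6       root default
-2      2               host ceph-node1
0       1                       osd.0   up      1
1       1                       osd.1   up      1
-3      2               host ceph-node2
2       1                       osd.2   up      1
3       1                       osd.3   up      1
-4      2               host ceph-node3
4       1                       osd.4   up      1
5       1                       osd.5   up      1

If any OSD shows "down", or a weight of 0, that would be the first place to look.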
Best wishes,
Vickie
2015-02-10 19:00 GMT+08:00 B L <super.iterator@xxxxxxxxx>:
Here is the updated direct copy/paste dump:

ceph@ceph-node1:~$ ceph osd dump
epoch 25
fsid 17bea68b-1634-4cd1-8b2a-00a60ef4761d
created 2015-02-08 16:59:07.050875
modified 2015-02-09 22:35:33.191218
flags
pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 64 last_change 24 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
max_osd 6
osd.0 up in weight 1 up_from 4 up_thru 17 down_at 0 last_clean_interval [0,0) 172.31.0.84:6800/11739 172.31.0.84:6801/11739 172.31.0.84:6802/11739 172.31.0.84:6803/11739 exists,up 765f5066-d13e-4a9e-a446-8630ee06e596
osd.1 up in weight 1 up_from 7 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.0.84:6805/12279 172.31.0.84:6806/12279 172.31.0.84:6807/12279 172.31.0.84:6808/12279 exists,up e1d073e5-9397-4b63-8b7c-a4064e430f7a
osd.2 up in weight 1 up_from 10 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.3.57:6800/5517 172.31.3.57:6801/5517 172.31.3.57:6802/5517 172.31.3.57:6803/5517 exists,up 5af5deed-7a6d-4251-aa3c-819393901d1f
osd.3 up in weight 1 up_from 13 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.3.57:6805/6043 172.31.3.57:6806/6043 172.31.3.57:6807/6043 172.31.3.57:6808/6043 exists,up 958f37ab-b434-40bd-87ab-3acbd3118f92
osd.4 up in weight 1 up_from 16 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.3.56:6800/5106 172.31.3.56:6801/5106 172.31.3.56:6802/5106 172.31.3.56:6803/5106 exists,up ce5c0b86-96be-408a-8022-6397c78032be
osd.5 up in weight 1 up_from 22 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.3.56:6805/7019 172.31.3.56:6806/7019 172.31.3.56:6807/7019 172.31.3.56:6808/7019 exists,up da67b604-b32a-44a0-9920-df0774ad2ef3

On Feb 10, 2015, at 12:55 PM, B L <super.iterator@xxxxxxxxx> wrote:

On Feb 10, 2015, at 12:37 PM, B L <super.iterator@xxxxxxxxx> wrote:

Hi Vickie,

Thanks for your reply! You can find the dump in this link:

Thanks!
B.

On Feb 10, 2015, at 12:23 PM, Vickie ch <mika.leaf666@xxxxxxxxx> wrote:

Hi Beanos:
Would you post the result of "$ceph osd dump"?

Best wishes,
Vickie

2015-02-10 16:36 GMT+08:00 B L <super.iterator@xxxxxxxxx>:

I'm having a problem with my fresh, non-healthy cluster; the cluster status summary shows this:

ceph@ceph-node1:~$ ceph -s
    cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d
     health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_num 128 > pgp_num 64
     monmap e1: 1 mons at {ceph-node1=172.31.0.84:6789/0}, election epoch 2, quorum 0 ceph-node1
     osdmap e25: 6 osds: 6 up, 6 in
      pgmap v82: 256 pgs, 3 pools, 0 bytes data, 0 objects
            198 MB used, 18167 MB / 18365 MB avail
                 192 incomplete
                  64 creating+incomplete

Where shall I start troubleshooting this?

P.S. I'm new to Ceph.

Thanks!
Beanos
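Not a diagnosis, but a few commands that are a common starting point for stuck/incomplete PGs; the pool name and numbers below come from the output above, the rest is a generic sketch:

ceph health detail                    # lists the individual PGs behind the HEALTH_WARN summary
ceph osd tree                         # shows CRUSH weights and up/down state per OSD
ceph pg dump_stuck inactive           # dumps the PGs that are stuck inactive
ceph pg <pgid> query                  # detailed peering state of one problematic PG
ceph osd pool set data pgp_num 128    # clears the "pg_num 128 > pgp_num 64" warning on pool 'data'

The pgp_num change only addresses the mismatch warning; the incomplete PGs themselves still need the per-PG output above to see why they cannot peer.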
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com