Hello,
osdmap e10: 4 osds: 2 up, 2 in
What about the following commands:
# ceph osd tree
# ceph osd dump
You have 2 OSDs on 2 hosts, but 4 OSDs seem to be defined in your CRUSH map.
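If stale OSD entries really are left over in the CRUSH map from an earlier deployment attempt, a possible cleanup sketch is the following (the IDs osd.2 and osd.3 are an assumption, not confirmed by the output above; check ceph osd tree first):
# ceph osd out 2
# ceph osd crush remove osd.2
# ceph auth del osd.2
# ceph osd rm 2
Repeat for osd.3. If the phantom entries are what is holding the PGs back, they should then be able to map onto the two remaining OSDs and go active+clean.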
Regards,
Pascal

Hi ALL,
I've created 2 mons and 2 OSDs on CentOS 6.5 (x86_64).
I've tried 4 times (each on a clean CentOS installation), but I always end up with health HEALTH_WARN.
Never HEALTH_OK, always HEALTH_WARN! :(
# ceph -s
    cluster d073ed20-4c0e-445e-bfb0-7b7658954874
     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
     monmap e1: 2 mons at {ceph02=192.168.0.142:6789/0,ceph03=192.168.0.143:6789/0}, election epoch 4, quorum 0,1 ceph02,ceph03
     osdmap e10: 4 osds: 2 up, 2 in
      pgmap v15: 192 pgs, 3 pools, 0 bytes data, 0 objects
            68908 kB used, 6054 MB / 6121 MB avail
                 192 active+degraded
What am I doing wrong???
-----------
host: 192.168.0.141 - admin
host: 192.168.0.142 - mon.ceph02 + osd.0 (/dev/sdb, 8G)
host: 192.168.0.143 - mon.ceph03 + osd.1 (/dev/sdb, 8G)
ceph-deploy version 1.5.18
[global]
osd pool default size = 2
-----------
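A quick way to check whether that size setting actually applied to the default pools (assuming the data, metadata and rbd pools that would account for the 3 pools / 192 PGs above) is:
# ceph osd pool get data size
# ceph osd pool get rbd size
Both should report size: 2 if the option was in place when the pools were created.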
Thanks,
Roman.
--
Pascal Morillon
University of Rennes 1
IRISA, Rennes, France
SED
Offices : E206 (Grid5000), D050 (SED)
Phone : +33 2 99 84 22 10
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com