Actually, it is. We took the single-host getting started guide out, because nobody would really deploy a distributed system like Ceph in production on a single host. The problem is that the default CRUSH rule places replicas at the host level, not the OSD level, so a one-host cluster can never satisfy it.
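If you want to confirm that on your running cluster, you can dump the rule set (a quick check; the exact JSON layout can vary a bit between releases):

ceph osd crush rule dump
# look in the rule's steps for something like:
#   "op": "chooseleaf_firstn", ... "type": "host"
# "type": "host" means replicas must land on different hosts, which is
# impossible with a single node, hence the degraded/unclean PGs.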
Note: I think ceph-deploy mon create-initial will do the next two steps for you, so those may be redundant.
What you need to do, though, is add the following to your ceph.conf file after you run ceph-deploy new ceph-a1:
osd crush chooseleaf type = 0
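In context, that would look something like this (a minimal sketch; the fsid and mon values are placeholders for what ceph-deploy new generates for you):

[global]
fsid = <generated by ceph-deploy>
mon_initial_members = ceph-a1
mon_host = <ceph-a1's IP>
# let CRUSH place replicas on different OSDs of the same host,
# instead of requiring separate hosts
osd crush chooseleaf type = 0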
Then, follow the rest of the procedure.
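If you'd rather fix the cluster you already deployed instead of starting over, you can also edit the CRUSH map in place. Roughly (the file paths and rule name here are just examples):

ceph osd getcrushmap -o /tmp/crushmap            # grab the compiled map
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt  # decompile to plain text
# in the replicated rule (e.g. "replicated_ruleset" or "data"), change:
#   step chooseleaf firstn 0 type host
# to:
#   step chooseleaf firstn 0 type osd
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
ceph osd setcrushmap -i /tmp/crushmap.new        # inject the modified map

The PGs should then be able to settle into active+clean even with only one host.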
On Fri, Jan 31, 2014 at 2:41 PM, Cristian Falcas <cristi.falcas@xxxxxxxxx> wrote:
Hi list,
I'm trying to play with ceph, but I can't get the machine to reach a
clean state.
How I did the installation:
ceph-deploy new ceph-a1
ceph-deploy install ceph-a1
ceph-deploy mon create-initial
ceph-deploy mon create ceph-a1
ceph-deploy gatherkeys ceph-a1
ceph-deploy disk zap ceph-a1:vdb ceph-a1:vdc ceph-a1:vdd ceph-a1:vde
ceph-deploy osd prepare ceph-a1:vdb ceph-a1:vdc ceph-a1:vdd ceph-a1:vde
ceph-deploy osd activate ceph-a1:/dev/vdb ceph-a1:/dev/vdc ceph-a1:/dev/vdd ceph-a1:/dev/vde
What the status is:
[root@ceph-a1 ~]# ceph health
HEALTH_WARN 49 pgs degraded; 192 pgs stuck unclean
ceph -w:
2014-01-31 17:39:44.060937 mon.0 [INF] pgmap v25: 192 pgs: 102 active, 41 active+remapped, 49 active+degraded; 0 bytes data, 143 MB used, 243 GB / 243 GB avail
Even if I add more disks or play with the crush map settings, I can't
seem to manage to bring the PGs to a clean state.
Is this expected with one host only?
Best regards,
Cristian Falcas
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com