Hi,
I have a cluster with one monitor and eight OSDs. The OSDs run on four hosts (two OSDs per host). After setting everything up and starting Ceph, I got this:
esta@monitorOne:~$ sudo ceph -s
[sudo] password for esta:
    cluster 0b9b05db-98fe-49e6-b12b-1cce0645c015
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
            too few PGs per OSD (8 < min 30)
     monmap e1: 1 mons at {monitorOne=192.168.1.153:6789/0}
            election epoch 1, quorum 0 monitorOne
     osdmap e58: 8 osds: 8 up, 8 in
      pgmap v191: 64 pgs, 1 pools, 0 bytes data, 0 objects
            8460 MB used, 4162 GB / 4171 GB avail
                  64 creating
How should I deal with this HEALTH_WARN status?
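(For what it's worth, my understanding is that the warning is simple arithmetic: the pgmap shows 64 PGs spread over 8 OSDs, i.e. 64 / 8 = 8 PGs per OSD, below the minimum of 30. The usual sizing guideline is roughly (number of OSDs x 100) / replica size, rounded to a power of two; with 8 OSDs and size 3 that is about 267, so 256. I assume the fix would be something along these lines, assuming the single pool is the default "rbd" pool, but I would like to confirm before touching the cluster:

    ceph osd pool get rbd pg_num        # check the current value (I expect 64)
    ceph osd pool set rbd pg_num 256    # raise the placement group count
    ceph osd pool set rbd pgp_num 256   # raise pgp_num to match so data can rebalance

Is that the right approach, and would it also clear the "stuck inactive" / "stuck unclean" PGs?)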
This is my ceph.conf:
[global]
fsid = 0b9b05db-98fe-49e6-b12b-1cce0645c015
mon initial members = monitorOne
mon host = 192.168.1.153
filestore_xattr_use_omap = true
public network = 192.168.1.0/24
cluster network = 10.0.0.0/24
pid file = /var/run/ceph/$name.pid
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 512
osd pool default pgp num = 512
osd crush chooseleaf type = 1
osd journal size = 1024
[mon]
[mon.0]
host = monitorOne
mon addr = 192.168.1.153:6789
[osd]
[osd.0]
host = storageOne
[osd.1]
host = storageTwo
[osd.2]
host = storageFour
[osd.3]
host = storageLast
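(One thing that puzzles me: ceph.conf sets osd pool default pg num = 512, yet the one existing pool apparently has only 64 PGs. To double-check which pool those 64 PGs belong to and what its pg_num really is, I assume these commands would show it:

    ceph osd lspools               # list pools (I expect only the default "rbd" pool)
    ceph osd dump | grep pg_num    # show pg_num / pgp_num for each pool

Do the pool-default settings only apply to pools created after the config was in place?)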
Could anybody help me?
Best regards,
--
Zhen Wang