On 05/08/2013 10:57 PM, John Wilkins wrote:
James,

The output says:

  monmap e1: 3 mons at {4=192.168.200.197:6789/0,7=192.168.200.190:6789/0,8=192.168.200.191:6789/0}, election epoch 1104, quorum 0,1,2 4,7,8

It looks like you have six OSDs (0,1,2,4,7,8) with only 3 OSDs running. The cluster needs a majority, so you'd need 4 of 6 monitors running.
s/OSD/Monitor/ :-)
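For what it's worth, if you want to double-check how many monitors are actually in the map and which of them have formed quorum, rather than reading it off the 'ceph -s' summary line, something along these lines should do it:

  # ceph mon stat
  # ceph quorum_status

'ceph mon stat' prints the monmap epoch plus each monitor's name and address, and 'ceph quorum_status' lists the quorum members by name, which makes it easier to tell monitor ranks apart from monitor names in that summary line.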
On Wed, May 8, 2013 at 4:32 AM, James Harper <james.harper@xxxxxxxxxxxxxxxx> wrote:
> On 05/08/2013 08:44 AM, David Zafman wrote:
> >
> > According to "osdmap e504: 4 osds: 2 up, 2 in" you have 2 of 4 osds that are
> > down and out. That may be the issue.
> >
> > Also, running 'ceph health detail' will give you specifics on what is
> > causing the HEALTH_WARN.
>
> # ceph health detail
> HEALTH_WARN mon.4 addr 192.168.200.197:6789/0 has 26% avail disk space -- low disk space!
>
> I guess that's the problem.
>
> Thanks
>
> James

--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
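On the disk space warning quoted above: the monitor complains when the filesystem holding its data directory runs low on free space. If memory serves, the warning trips once free space drops below the 'mon data avail warn' threshold (30% by default), which would explain why 26% available shows up as HEALTH_WARN. Something like the following should show where the space is going (the path below assumes the default mon data location for mon.4; adjust to your setup):

  # df -h /var/lib/ceph/mon/ceph-4
  # du -sh /var/lib/ceph/mon/ceph-4

Freeing up space on that partition, or moving the mon data directory to a larger one, should clear the warning.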
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com