On 05/08/2013 08:44 AM, David Zafman wrote:
> According to "osdmap e504: 4 osds: 2 up, 2 in", you have 2 of 4 osds down and out. That may be the issue.
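
If that is the cause, something along these lines should show which OSDs are down and let you bring them back (a rough sketch only; the init-script invocation and the osd id 2 are assumptions, substitute whatever 'ceph osd tree' reports as down):

  # ceph osd tree                  # shows up/down and in/out state per osd
  # /etc/init.d/ceph start osd.2   # run on the host carrying the down osd
  # ceph osd in 2                  # mark it back in if it is still out
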
Also, running 'ceph health detail' will give you specifics on what is
causing the HEALTH_WARN.
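
For example:

  # ceph health detail

That should expand the one-line summary into the individual checks behind it (down osds, degraded or stuck pgs, and so on).
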
-Joao
> David Zafman
> Senior Developer
> http://www.inktank.com
>
> On May 8, 2013, at 12:05 AM, James Harper <james.harper@xxxxxxxxxxxxxxxx> wrote:
>> I've just upgraded my Ceph install to Cuttlefish (was 0.60) using the Debian packages.
>>
>> My mons don't regularly die anymore, or at least haven't so far, but health is always HEALTH_WARN even though I can't see any indication of why:
>> # ceph status
>>    health HEALTH_WARN
>>    monmap e1: 3 mons at {4=192.168.200.197:6789/0,7=192.168.200.190:6789/0,8=192.168.200.191:6789/0}, election epoch 1104, quorum 0,1,2 4,7,8
>>    osdmap e504: 4 osds: 2 up, 2 in
>>    pgmap v210142: 832 pgs: 832 active+clean; 318 GB data, 638 GB used, 1223 GB / 1862 GB avail; 4970B/s rd, 7456B/s wr, 2op/s
>>    mdsmap e577: 1/1/1 up {0=7=up:active}
>>
>> Anyone have any idea what might be wrong, or where I can look to find out more?
>>
>> Thanks
>>
>> James
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com