Re: HEALTH_WARN after upgrade to cuttlefish

According to "osdmap e504: 4 osds: 2 up, 2 in" you have 2 of 4 osds that are down and out.  That may be the issue.

David Zafman
Senior Developer
http://www.inktank.com

On May 8, 2013, at 12:05 AM, James Harper <james.harper@xxxxxxxxxxxxxxxx> wrote:

> I've just upgraded my ceph install to cuttlefish (was 0.60) using the Debian packages.
> 
> My mons no longer die regularly, or at least haven't so far, but health is always HEALTH_WARN and I can't see any indication of why:
> 
> # ceph status
>   health HEALTH_WARN
>   monmap e1: 3 mons at {4=192.168.200.197:6789/0,7=192.168.200.190:6789/0,8=192.168.200.191:6789/0}, election epoch 1104, quorum 0,1,2 4,7,8
>   osdmap e504: 4 osds: 2 up, 2 in
>    pgmap v210142: 832 pgs: 832 active+clean; 318 GB data, 638 GB used, 1223 GB / 1862 GB avail; 4970B/s rd, 7456B/s wr, 2op/s
>   mdsmap e577: 1/1/1 up {0=7=up:active}
> 
> Anyone have any idea what might be wrong, or where I can look to find out more?
> 
> Thanks
> 
> James
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



