HEALTH_WARN active+degraded on fresh CentOS 6.5 install

What's the output of "ceph osd tree"?
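A couple of other bits are worth grabbing while you're at it (the
output file names here are arbitrary):

    ceph osd dump | grep 'replicated size'   # replica count per pool
    ceph osd getcrushmap -o crush.bin        # grab the compiled CRUSH map
    crushtool -d crush.bin -o crush.txt      # decompile it so it's readable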

Your CRUSH map probably can't segregate replicas properly across only 2
hosts with 4 OSDs each: the default pools want 3 replicas with a
host-level failure domain, so the third copy has nowhere to land and
those PGs stay degraded/remapped.
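If so, there are two easy ways out, sketched below; the pool names are
the stock data/metadata/rbd ones, so adjust if yours differ. Either
drop the pools to 2 replicas:

    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2

or, in the decompiled CRUSH map from above, change "step chooseleaf
firstn 0 type host" to "type osd", then recompile and inject it:

    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

(Also: in the ceph.conf you pasted, osd_pool_default_size sits under a
[default] section, which ceph.conf doesn't recognize, so it's being
ignored. It belongs in [global], and it only affects pools created
after the change.)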
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Tue, Jul 1, 2014 at 11:22 AM, Brian Lovett
<brian.lovett at prosperent.com> wrote:
> I'm pulling my hair out with Ceph. I am testing things with a 5-server
> cluster. I have 3 monitors, and two storage machines each with 4 OSDs. I
> have started from scratch 4 times now, and can't seem to figure out how to
> get a clean status. Ceph health reports:
>
> HEALTH_WARN 34 pgs degraded; 192 pgs stuck unclean; recovery 40/60 objects
> degraded (66.667%)
>
> ceph status reports:
>
> cluster 99567882-2e01-4dec-8ca5-692e439a5a47
>      health HEALTH_WARN 34 pgs degraded; 192 pgs stuck unclean; recovery
> 40/60 objects degraded (66.667%)
>      monmap e2: 3 mons at
> {monitor01=192.168.1.200:6789/0,monitor02=192.168.1.201:6789/0,
> monitor03=192.168.1.202:6789/0}, election epoch 8, quorum 0,1,2
> monitor01,monitor02,monitor03
>      mdsmap e4: 1/1/1 up {0=monitor01.mydomain.com=up:active}
>      osdmap e49: 8 osds: 8 up, 8 in
>       pgmap v85: 192 pgs, 3 pools, 1884 bytes data, 20 objects
>             297 MB used, 14856 GB / 14856 GB avail
>             40/60 objects degraded (66.667%)
>                    1 active
>                   34 active+degraded
>                  157 active+remapped
>
>
> My ceph.conf contains the following:
>
> [default]
> osd_pool_default_size = 2
>
> [global]
> auth_service_required = cephx
> filestore_xattr_use_omap = true
> auth_client_required = cephx
> auth_cluster_required = cephx
> mon_host = 192.168.1.200,192.168.1.201,192.168.1.202
> mon_initial_members = monitor01, monitor02, monitor03
> fsid = 99567882-2e01-4dec-8ca5-692e439a5a47
>
>
>
> Any suggestions are welcome at this point.
>

