can't figure out why I have HEALTH_WARN in luminous

I have a few Ceph clusters running. I built a new cluster on luminous, and I also upgraded a cluster from hammer to luminous. In both cases I have a HEALTH_WARN that I can't figure out: the clusters appear healthy except for the HEALTH_WARN reported in "overall_status". For now I'm monitoring health from "status" instead of "overall_status" until I can find out what the issue is.

 

Any ideas?  Thanks!

 

# ceph health detail
HEALTH_OK

# ceph -s
  cluster:
    id:     11d436c2-1ae3-4ea4-9f11-97343e5c673b
    health: HEALTH_OK

# ceph -s --format json-pretty
{
    "fsid": "11d436c2-1ae3-4ea4-9f11-97343e5c673b",
    "health": {
        "checks": {},
        "status": "HEALTH_OK",
        "overall_status": "HEALTH_WARN"

<snip>
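
In case it helps show what I mean by keying off "status", here is a rough sketch of the check I'm running for now (it assumes python3 and the ceph CLI with a working keyring on the monitoring host; the exit-code convention is just how I happen to alert on it):

#!/usr/bin/env python3
# Rough sketch: alert on the "status" field rather than the
# "overall_status" field of `ceph -s --format json`.
import json
import subprocess
import sys

def cluster_status():
    # Requires the ceph CLI and a valid keyring on this host.
    out = subprocess.check_output(["ceph", "-s", "--format", "json"])
    return json.loads(out)["health"]["status"]

if __name__ == "__main__":
    status = cluster_status()
    print(status)
    # Exit non-zero unless HEALTH_OK, so cron / the monitoring agent can alert.
    sys.exit(0 if status == "HEALTH_OK" else 1)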

 

 

 

Mike Kuriger

 

