On Fri, May 4, 2018 at 1:59 AM John Spray <jspray@xxxxxxxxxx> wrote:
On Fri, May 4, 2018 at 7:21 AM, Tracy Reed <treed@xxxxxxxxxxxxxxx> wrote:
> My ceph status says:
>
>   cluster:
>     id:     b2b00aae-f00d-41b4-a29b-58859aa41375
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum ceph01,ceph03,ceph07
>     mgr: ceph01(active), standbys: ceph-ceph07, ceph03
>     osd: 78 osds: 78 up, 78 in
>
>   data:
>     pools:   4 pools, 3240 pgs
>     objects: 4384k objects, 17533 GB
>     usage:   53141 GB used, 27311 GB / 80452 GB avail
>     pgs:     3240 active+clean
>
>   io:
>     client:   4108 kB/s rd, 10071 kB/s wr, 27 op/s rd, 331 op/s wr
>
> but my mgr dashboard web interface says:
>
>
> Health
> Overall status: HEALTH_WARN
>
> PG_AVAILABILITY: Reduced data availability: 2563 pgs inactive
>
>
> Anyone know why the discrepancy? Hopefully the dashboard is very
> mistaken! Everything seems to be operating normally. If I had 2/3 of my
> pgs inactive, I'm sure all of the RBDs backing my VMs would be blocked, etc.
A situation like this probably indicates that something is going wrong
with the mon->mgr synchronisation of health state (it's all calculated
in one place and the mon updates the mgr every few seconds).
1. Look for errors in your monitor logs
2. You'll probably find that everything gets back in sync if you
restart a mgr daemon (a rough sketch of both checks follows)
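Something along these lines, as a sketch only; it assumes the default
/var/log/ceph/ceph-mon.<host>.log path, that it runs on a mon host, and the
usual ceph-mgr@<name> systemd units, so adjust names to your deployment:

#!/usr/bin/env python
# Rough sketch, not an official tool: tail the local monitor log for errors,
# then show which mgr is currently active so you know which ceph-mgr@<name>
# systemd unit to restart.  Log path and unit names are the usual defaults.
import json
import socket
import subprocess

host = socket.gethostname().split('.')[0]
mon_log = '/var/log/ceph/ceph-mon.%s.log' % host   # default mon log location

# 1. Look for errors in the monitor log (last ~1000 lines is usually plenty).
try:
    with open(mon_log) as f:
        for line in f.readlines()[-1000:]:
            if ' ERR' in line or 'error' in line.lower():
                print(line.rstrip())
except IOError:
    print('no mon log at %s (not a mon host?)' % mon_log)

# 2. Find the active mgr; restarting it is then just
#      systemctl restart ceph-mgr@<active_name>    (run on that host)
mgr = json.loads(subprocess.check_output(['ceph', 'mgr', 'dump', '-f', 'json']))
print('active mgr: %s' % mgr.get('active_name'))
print('standbys:   %s' % ', '.join(s.get('name', '?') for s in mgr.get('standbys', [])))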
John
Isn't that the wrong direction for sync issues, though? I mean, the manager is where the PG reports actually go. So if the cluster's still running, the monitor says it's active+clean, and the *dashboard* says the PGs are inactive, it sounds like the monitor has the correct view and something has gone wrong between the rest of the manager guts and the dashboard display.
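One quick way to see which side is confused is to ask the mon directly for
its PG state breakdown and compare that with what the dashboard claims. A
rough sketch, assuming the luminous-era JSON layout of "ceph status -f json"
(pgmap.pgs_by_state / pgmap.num_pgs):

#!/usr/bin/env python
# Rough sketch: print the mon's own PG state breakdown so it can be compared
# with what the dashboard claims.
import json
import subprocess

status = json.loads(subprocess.check_output(['ceph', 'status', '-f', 'json']))
pgmap = status.get('pgmap', {})

print('total pgs: %d' % pgmap.get('num_pgs', 0))
for entry in pgmap.get('pgs_by_state', []):
    print('%8d  %s' % (entry['count'], entry['state_name']))

inactive = sum(e['count'] for e in pgmap.get('pgs_by_state', [])
               if 'active' not in e['state_name'])
print('pgs without "active" in their state: %d' % inactive)

If that shows all 3240 PGs active+clean while the dashboard still reports
PG_AVAILABILITY, the stale data is on the mgr/dashboard side rather than in
the cluster itself.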
-Greg
> I'm running ceph-12.2.4-0.el7.x86_64 on CentOS 7. Almost all of the OSDs
> are filestore, except for one that recently had to be replaced, which I
> made bluestore. I plan to slowly migrate everything over to bluestore
> over the course of the next month.
>
> Thanks!
>
> --
> Tracy Reed
> http://tracyreed.org
> Digital signature attached for your safety.
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com