Prometheus monitoring


 



I am gathering prometheus metrics from my (unhealthy) Octopus (15.2.4) cluster and notice a discrepancy (or misunderstanding) with the ceph dashboard.

In the dashboard, and with ceph -s, it reports 807 million objects:

    pgs:     169747/807333195 objects degraded (0.021%)
             78570293/807333195 objects misplaced (9.732%)
             24/101158245 objects unfound (0.000%)

But in the prometheus metrics (and in ceph df), it reports almost a factor of 10 fewer objects (dominated by pool 7):

# HELP ceph_pool_objects DF pool objects
# TYPE ceph_pool_objects gauge
ceph_pool_objects{pool_id="4"} 3920.0
ceph_pool_objects{pool_id="5"} 372743.0
ceph_pool_objects{pool_id="7"} 86972464.0
ceph_pool_objects{pool_id="8"} 9287431.0
ceph_pool_objects{pool_id="13"} 8961.0
ceph_pool_objects{pool_id="15"} 0.0
ceph_pool_objects{pool_id="17"} 4.0
ceph_pool_objects{pool_id="18"} 206.0
ceph_pool_objects{pool_id="19"} 8.0
ceph_pool_objects{pool_id="20"} 7.0
ceph_pool_objects{pool_id="21"} 22.0
ceph_pool_objects{pool_id="22"} 203.0
ceph_pool_objects{pool_id="23"} 4415522.0
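For reference, here is a quick sanity check (a Python sketch using the per-pool figures copied from the metrics above) showing that they sum to about 101 million, which is close to the 101158245 denominator on the "unfound" line rather than the 807 million total:

```python
# Per-pool object counts copied from the ceph_pool_objects metrics above.
pool_objects = {
    "4": 3920,
    "5": 372743,
    "7": 86972464,
    "8": 9287431,
    "13": 8961,
    "15": 0,
    "17": 4,
    "18": 206,
    "19": 8,
    "20": 7,
    "21": 22,
    "22": 203,
    "23": 4415522,
}

# Sum across pools and compare against the total reported by `ceph -s`.
total = sum(pool_objects.values())
print(total)  # 101061491 -- roughly a factor of 8 below 807333195
```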

Why are these two values different? How can I get the total number of objects (807 million) from the prometheus metrics?

--Mike
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


