Hi,
I recently upgraded my cluster from 12.2 (Luminous) to 14.2 (Nautilus), and I'm having some trouble getting the mgr dashboards for Grafana working.
I set up Prometheus and Grafana per https://docs.ceph.com/docs/nautilus/mgr/prometheus/#mgr-prometheus
However, for the OSD Disk Performance Statistics graphs on the Host Details dashboard, I'm getting the following error:
"found duplicate series for the match group {device="dm-5", instance=":9100"} on the right hand-side of the operation: [{name="ceph_disk_occupation", ceph_daemon="osd.13", db_device="/dev/dm-8", device="dm-5", instance=":9100", job="ceph"}, {name="ceph_disk_occupation", ceph_daemon="osd.15", db_device="/dev/dm-10", device="dm-5", instance=":9100", job="ceph"}];many-to-many matching not allowed: matching labels must be unique on one side"
This also happens on the following graphs:
Host Overview/AVG Disk Utilization
Host Details/OSD Disk Performance Statistics/*
Also the following graphs show no data points:
OSD Details/Physical Device Performance/*
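As far as I understand it, these panels join node_exporter's per-device metrics against ceph_disk_occupation on (instance, device), along these lines (my rough paraphrase for illustration, not the exact dashboard query; the actual metric names and label rewriting may differ):

rate(node_disk_io_time_ms[5m])
  * on (instance, device) group_left (ceph_daemon)
  ceph_disk_occupation

Since osd.13 and osd.15 both report device dm-5 on the same instance, the right-hand side isn't unique per match group, which I assume is exactly what the error above is complaining about.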
Prometheus version: 2.12.0
node_exporter version: 0.15.2
Grafana version: 6.3.3
Note that my OSDs all have separate data and RocksDB devices. I have also upgraded all the OSDs to Nautilus via ceph-bluestore-tool repair.
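In case it helps narrow things down, a query along these lines (just a sketch of what I mean by duplicates, not taken from the dashboards) should list the (instance, device) pairs claimed by more than one OSD:

count by (instance, device) (ceph_disk_occupation) > 1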
Any idea what's needed to fix this?
Thanks
Below are the Prometheus config files.
prometheus.yml:
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
  - job_name: 'node'
    file_sd_configs:
      - files:
          - node_targets.yml
  - job_name: 'ceph'
    honor_labels: true
    file_sd_configs:
      - files:
          - ceph_targets.yml
----
node_targets.yml:
[
  {
    "targets": [ "nas-osd-01:9100" ],
    "labels": {
      "instance": "nas-osd-01"
    }
  },
  {
    "targets": [ "nas-osd-02:9100" ],
    "labels": {
      "instance": "nas-osd-02"
    }
  },
  {
    "targets": [ "nas-osd-02:9100" ],
    "labels": {
      "instance": "nas-osd-03"
    }
  }
]
---
ceph_targets.yml:
[
  {
    "targets": [ "nas-osd-01:9283" ],
    "labels": {}
  },
  {
    "targets": [ "nas-osd-02:9283" ],
    "labels": {}
  },
  {
    "targets": [ "nas-osd-03:9283" ],
    "labels": {}
  }
]