Dear Ceph users,
on one of my nodes I see that /var/log/messages is being spammed with
messages like these:
Oct 16 12:51:11 bofur bash[2473311]: ::ffff:172.16.253.2 - -
[16/Oct/2022:10:51:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
Oct 16 12:51:12 bofur bash[2487821]: ts=2022-10-16T10:51:12.324Z
caller=manager.go:609 level=warn component="rule manager" group=pools
msg="Evaluating rule failed" rule="alert: CephPoolGrowthWarning\nexpr:
(predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id)
group_right()\n ceph_pool_metadata) >= 95\nlabels:\n oid:
1.3.6.1.4.1.50495.1.2.1.9.2\n severity: warning\n type:
ceph_default\nannotations:\n description: |\n Pool '{{ $labels.name
}}' will be full in less than 5 days assuming the average fill-up rate
of the past 48 hours.\n summary: Pool growth rate may soon exceed it's
capacity\n" err="found duplicate series for the match group
{pool_id=\"1\"} on the left hand-side of the operation:
[{instance=\"bofur.localdomain:9283\", job=\"ceph\", pool_id=\"1\"},
{instance=\"172.16.253.3:9283\", job=\"ceph\",
pool_id=\"1\"}];many-to-many matching not allowed: matching labels must
be unique on one side"
(Sorry for the ugly formatting, but this is the original format.) The
other nodes do not show the same behavior. I don't clearly understand
the reason; the only thing I noticed is that ceph_pool_metadata is
mentioned in the message: I had one such pool when experimenting with
Ceph, before deleting that fs and creating the production one.
Currently I have only these pools:
# ceph osd lspools
1 .mgr
2 wizard_metadata
3 wizard_data
so I don't understand why ceph_pool_metadata is appearing in the logs.
Maybe the log spamming is due to some leftover in the configuration? I
tried stopping and restarting Prometheus: while the service is down the
spamming stops, but it resumes as soon as Prometheus comes back up.
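For reference, I suppose a query like the following could be run in the
Prometheus web UI to check whether the same pool is being exported by
more than one scrape target (just a sketch on my part, based only on the
metric and label names in the error message above):

  # count ceph_pool_metadata series per pool_id; any result means the
  # same pool_id is coming from more than one target/instance
  count by (pool_id) (ceph_pool_metadata) > 1

If that returns anything, it would seem to match the "duplicate series
for the match group {pool_id="1"}" part of the error, i.e. two endpoints
(bofur.localdomain:9283 and 172.16.253.3:9283) both exporting metrics
for the same pool, but I may well be misreading it.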
Thanks in advance for any help,
Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx