Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________

Hi,
My cluster has been showing me this message for the last two weeks.
Ceph version (ceph -v):
root@heku1 ~ # ceph -v
ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)
All pgs are active+clean:
root@heku1 ~ # ceph -s
  cluster:
    id:     0839c91a-f3ca-4119-853b-eb10904cf322
    health: HEALTH_WARN
            514 pgs not deep-scrubbed in time

  services:
    mon: 5 daemons, quorum heku1,heku2,heku3,heku4,heku5
    mgr: heku2(active), standbys: heku1, heku5, heku4, heku3
    mds: cephfs_fs-1/1/1 up {0=heku2=up:active}, 3 up:standby
    osd: 10 osds: 10 up, 10 in

  data:
    pools:   4 pools, 514 pgs
    objects: 1.17 M objects, 1.3 TiB
    usage:   2.5 TiB used, 2.8 TiB / 5.3 TiB avail
    pgs:     514 active+clean

  io:
    client: 2.6 KiB/s rd, 1.3 MiB/s wr, 0 op/s rd, 133 op/s wr
I've run the deep scrubbing process manually, but the message hasn't changed:

ceph pg dump | grep -i active+clean | awk '{print $1}' | while read i; do ceph pg deep-scrub ${i}; done
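
To double-check which PGs are actually overdue, I listed them with their last deep-scrub timestamps using something like this (rough sketch only; it needs jq, and depending on the release the pg dump JSON is either a plain array or wrapped in a pg_stats key, which the jq expression tries to handle):

# rough sketch: list every PG with its last deep-scrub timestamp, oldest first,
# using the JSON output so we don't depend on column positions;
# (.pg_stats? // .) covers both the wrapped and the plain-array layout
ceph pg dump pgs --format json 2>/dev/null \
  | jq -r '(.pg_stats? // .) | .[] | [.pgid, .last_deep_scrub_stamp] | @tsv' \
  | sort -k2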

Also, I've changed these options and restarted all OSDs:

root@heku1 ~# ceph daemon osd.0 config get osd_deep_scrub_interval
{
    "osd_deep_scrub_interval": "604800.000000"    <-- 7 days
}
root@heku1 ~# ceph daemon osd.0 config get mon_warn_not_deep_scrubbed
{
    "mon_warn_not_deep_scrubbed": "691200"    <-- 8 days
}
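
To be sure every OSD actually picked up these values after the restart (the output above is only from osd.0), I also checked them across the whole cluster with something like this (a rough sketch; "ceph daemon" only answers on the OSD's own host, so this relies on the mimic "ceph config show" command, which as far as I know lists the settings a daemon reports as changed from default):

# rough sketch: show the scrub-related settings as reported by every OSD,
# so a single OSD with a stale value doesn't go unnoticed
for id in $(ceph osd ls); do
  echo "== osd.$id =="
  ceph config show osd.$id | grep -E 'osd_deep_scrub_interval|mon_warn_not_deep_scrubbed'
done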
Can anyone help me?
Best Regards
Alex
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com