Hello Eugen,
Thank you for your answer.
I restarted all the kube-ceph nodes one after the other. Nothing has
changed.
OK, I have deactivated the snapshot schedule: ceph fs snap-schedule deactivate /
Is there a way to see how many snapshots will be deleted per hour?
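One rough way to estimate that (just a sketch on my part, using the
snap_trimq_len field and the example PG 3.12 from the mail below) would
be to sample a PG's queue length and compare the values an hour apart:

# Current snapshot-trim queue length for one PG
ceph pg 3.12 query | grep snap_trimq_len

# Running the same command again an hour later and taking the difference
# gives a rough per-hour trim rate for that PG (assuming no new snapshots
# are deleted in between).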
Regards,
Gio
On 17.08.2024 at 10:12, Eugen Block wrote:
Hi,
have you tried to fail the mgr? Sometimes the PG stats are not
correct. You could also temporarily disable snapshots to see if things
settle down.
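For reference, those two steps as commands (a minimal sketch; the
snap-schedule path "/" is taken from the status output quoted further
down):

# Fail over to a standby mgr so the PG statistics are recomputed
ceph mgr fail

# Temporarily stop the scheduled CephFS snapshots on the root path
ceph fs snap-schedule deactivate /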
Quoting Giovanna Ratini <giovanna.ratini@xxxxxxxxxxxxxxx>:
Hello all,
We use Ceph (v18.2.2) and Rook (1.14.3) as the CSI for a Kubernetes
environment. Last week, we had a problem with the MDS falling behind
on trimming every 4-5 days (GitHub issue link). We resolved the issue
using the steps outlined in the GitHub issue.
We have 3 hosts (I know, I need to increase this as soon as possible,
and I will!) and 6 OSDs. Since running the following commands, the
snaptrim queue for our PGs has stopped decreasing:

ceph config set mds mds_dir_max_commit_size 80
ceph fs fail <fs_name>
ceph fs set <fs_name> joinable true
All PGs of our CephFS are in either the active+clean+snaptrim_wait or
the active+clean+snaptrim state. For example, PG 3.12 is in the
active+clean+snaptrim state; its snap_trimq_len was 4077 yesterday and
has increased to 4538 today.
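In case it is useful, this is how the affected PGs can be listed (a
sketch; both commands should only show PGs that are trimming or waiting
to trim):

# All PGs currently in a snaptrim-related state
ceph pg ls snaptrim snaptrim_wait

# Alternative: brief per-PG dump filtered on the state
ceph pg dump pgs_brief | grep snaptrim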
I increased the osd_snap_trim_priority to 10 (ceph config set osd
osd_snap_trim_priority 10), but it didn't help. Only the PGs of our
CephFS have this problem.
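For completeness, these are the other snaptrim-related OSD options that
are sometimes tuned in this situation (only a sketch; the values below
are illustrative, not recommendations for this cluster):

# How many PGs a single OSD will trim in parallel (default 2)
ceph config set osd osd_max_trimming_pgs 4

# How many snap trim operations one PG runs concurrently (default 2)
ceph config set osd osd_pg_max_concurrent_snap_trims 4

# Sleep (seconds) between trim operations; lower trims faster but adds
# load on client I/O
ceph config set osd osd_snap_trim_sleep 0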
Do you have any ideas on how we can resolve this issue?
Thanks in advance,
Giovanna
P.S. I'm not a Ceph expert :-).
Faulkener asked me for more information, so here it is:
MDS Memory: 11GB
mds_cache_memory_limit: 11,811,160,064 bytes
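(For completeness, the limit can be read back directly with a one-line
check:)

# Show the currently configured MDS cache memory limit
ceph config get mds mds_cache_memory_limit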
root@kube-master02:~# ceph fs snap-schedule status /
{
"fs": "rook-cephfs",
"subvol": null,
"path": "/",
"rel_path": "/",
"schedule": "3h",
"retention": {"h": 24, "w": 4},
"start": "2024-05-05T00:00:00",
"created": "2024-05-05T17:28:18",
"first": "2024-05-05T18:00:00",
"last": "2024-08-15T18:00:00",
"last_pruned": "2024-08-15T18:00:00",
"created_count": 817,
"pruned_count": 817,
"active": true
}
I do not understand whether the snapshots in the PGs are correlated
with the snapshots on CephFS. Until we encountered the issue with the
"MDS falling behind on trimming every 4-5 days", we didn't have any
problems with snapshots.
Could someone explain this to me or point me to the documentation?
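One thing that might help to see the correlation (my own guess, not
authoritative): deleting a CephFS snapshot should put the corresponding
snap IDs into the removed-snaps queue of the CephFS data and metadata
pools, which the OSDs then work off as snaptrim. The pool-level view
can be checked with:

# Pool details; a non-empty removed_snaps_queue on the CephFS pools is
# what the OSDs still have to trim
ceph osd pool ls detail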
Thank you
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx