Hello ceph-users,

I am running Proxmox 7 with Ceph 16.2.6 and 46 OSDs. I enabled snap_schedule about a month ago, and it seemed to be working fine, at least at the beginning. I've noticed, however, that snapshots have stopped being created, as shown below:

root@vis-mgmt:/ceph/backups/nassie/NAS/.snap# ls
scheduled-2021-09-12-23_00_00  scheduled-2021-09-24-23_00_00  scheduled-2021-09-27-18_00_00
scheduled-2021-09-19-23_00_00  scheduled-2021-09-25-23_00_00  scheduled-2021-09-27-19_00_00
scheduled-2021-09-20-23_00_00  scheduled-2021-09-26-23_00_00  scheduled-2021-09-27-20_00_00
scheduled-2021-09-21-23_00_00  scheduled-2021-09-27-15_00_00  scheduled-2021-09-27-21_00_00
scheduled-2021-09-22-23_00_00  scheduled-2021-09-27-16_00_00
scheduled-2021-09-23-23_00_00  scheduled-2021-09-27-17_00_00
root@vis-mgmt:/ceph/backups/nassie/NAS/.snap#

The snap-schedule list is below:

root@vis-mgmt:/ceph/backups/nassie/NAS/.snap# ceph fs snap-schedule list /backups/nassie/NAS
/backups/nassie/NAS 1h
/backups/nassie/NAS 24h
/backups/nassie/NAS 7d
/backups/nassie/NAS 4w
root@vis-mgmt:/ceph/backups/nassie/NAS/.snap#

And snap-schedule status:

root@vis-mgmt:/ceph/backups/nassie/NAS/.snap# ceph fs snap-schedule status /backups/nassie/NAS
{"fs": "cephfs", "subvol": null, "path": "/backups/nassie/NAS", "rel_path": "/backups/nassie/NAS", "schedule": "1h", "retention": {}, "start": "2021-09-08T02:00:00", "created": "2021-09-09T22:57:14", "first": "2021-09-09T23:00:00", "last": "2021-09-09T23:00:00", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}
===
{"fs": "cephfs", "subvol": null, "path": "/backups/nassie/NAS", "rel_path": "/backups/nassie/NAS", "schedule": "24h", "retention": {}, "start": "2021-09-08T02:00:00", "created": "2021-09-09T23:01:17", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}
===
{"fs": "cephfs", "subvol": null, "path": "/backups/nassie/NAS", "rel_path": "/backups/nassie/NAS", "schedule": "7d", "retention": {}, "start": "2021-09-08T02:00:00", "created": "2021-09-09T23:01:26", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}
===
{"fs": "cephfs", "subvol": null, "path": "/backups/nassie/NAS", "rel_path": "/backups/nassie/NAS", "schedule": "4w", "retention": {}, "start": "2021-09-08T02:00:00", "created": "2021-09-09T23:01:36", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}
root@vis-mgmt:/ceph/backups/nassie/NAS/.snap#

So the snap scheduler looks like it is active, but no snapshots are being taken. /var/log/ceph/ceph-mgr.<hostname>.*.log on the node running ceph-mgr has only the following lines regarding snapshot schedules:

2021-10-13T22:44:11.339-0500 7f3acac28700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2021-10-13T22:44:11.339-0500 7f3acac28700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2021-10-13T22:44:41.336-0500 7f3acac28700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2021-10-13T22:44:41.336-0500 7f3acac28700  0 [snap_schedule INFO mgr_util] cleaning up connections: []

Is there any reason why the snapshot schedules would stop working? Why would they get stuck?

Thank you!
George
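
For what it's worth, unless someone has a better idea, my next step would be to reload the module and fail over to a standby mgr to see whether the scheduler wakes up again. This is just a rough plan, and I'm assuming the stored schedules survive a module reload and that a mgr failover is harmless on this cluster, so please correct me if that's wrong:

    # confirm snap_schedule is still listed as enabled on the active mgr
    ceph mgr module ls

    # turn the module off and on again
    ceph mgr module disable snap_schedule
    ceph mgr module enable snap_schedule

    # or fail over to a standby mgr so all modules restart
    ceph mgr fail

    # bump mgr debug logging while waiting for the next full hour
    ceph config set mgr debug_mgr 10

    # then re-check whether "last" and "created_count" start advancing
    ceph fs snap-schedule status /backups/nassie/NAS

Is that a reasonable way to recover, or is there something else I should look at first?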