problems with snap-schedule on 16.2.7

Hello Ceph users,

I have a problem with scheduled snapshots on Ceph 16.2.7 (in a Proxmox install).

While trying to understand how snap schedules work, I created more schedules than I needed to:

root@vis-mgmt:~# ceph fs  snap-schedule list /backups/nassie/NAS
/backups/nassie/NAS 1h 24h7d8w12m
/backups/nassie/NAS 7d 24h7d8w12m
/backups/nassie/NAS 4w 24h7d8w12m
/backups/nassie/NAS 6h 24h7d8w12m
root@vis-mgmt:~# 

I then went ahead and deleted the ones that I didn’t need:

root@vis-mgmt:~# ceph fs snap-schedule remove /backups/nassie/NAS 1h
Schedule removed for path /backups/nassie/NAS
root@vis-mgmt:~# ceph fs snap-schedule remove /backups/nassie/NAS 7d
Schedule removed for path /backups/nassie/NAS
root@vis-mgmt:~# ceph fs snap-schedule remove /backups/nassie/NAS 4w
Schedule removed for path /backups/nassie/NAS
root@vis-mgmt:~# ceph fs  snap-schedule list /backups/nassie/NAS
/backups/nassie/NAS 6h 24h7d8w12m
root@vis-mgmt:~# 

No problems there.  However, if I restart the Ceph manager, the deleted snapshot schedules come back.  Not only that, but after the mgr restart the snap-schedule status no longer seems to reflect reality:

root@vis-mgmt:/ceph/backups/nassie/NAS/.snap# ceph fs snap-schedule status /backups/nassie/NAS
{"fs": "cephfs", "subvol": null, "path": "/backups/nassie/NAS", "rel_path": "/backups/nassie/NAS", "schedule": "6h", "retention": {"h": 24, "d": 7, "w": 8, "m": 12}, "start": "2022-01-14T00:00:00", "created": "2022-01-14T22:18:38", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}
root@vis-mgmt:/ceph/backups/nassie/NAS/.snap# ls
scheduled-2021-10-17-18_00_00  scheduled-2022-01-19-12_00_00  scheduled-2022-01-22-12_00_00
scheduled-2021-10-24-18_00_00  scheduled-2022-01-19-18_00_00  scheduled-2022-01-22-18_00_00
scheduled-2021-10-31-18_00_00  scheduled-2022-01-20-00_00_00  scheduled-2022-01-23-00_00_00
scheduled-2021-11-07-18_00_00  scheduled-2022-01-20-06_00_00  scheduled-2022-01-23-06_00_00
scheduled-2021-11-08-18_00_00  scheduled-2022-01-20-12_00_00  scheduled-2022-01-23-12_00_00
scheduled-2021-11-09-00_00_00  scheduled-2022-01-20-18_00_00  scheduled-2022-01-23-18_00_00
scheduled-2022-01-15-18_00_00  scheduled-2022-01-21-00_00_00  scheduled-2022-01-24-00_00_00
scheduled-2022-01-16-18_00_00  scheduled-2022-01-21-06_00_00  scheduled-2022-01-24-06_00_00
scheduled-2022-01-17-18_00_00  scheduled-2022-01-21-12_00_00  scheduled-2022-01-24-12_00_00
scheduled-2022-01-18-18_00_00  scheduled-2022-01-21-18_00_00  scheduled-2022-01-24-18_00_00
scheduled-2022-01-19-00_00_00  scheduled-2022-01-22-00_00_00
scheduled-2022-01-19-06_00_00  scheduled-2022-01-22-06_00_00
root@vis-mgmt:/ceph/backups/nassie/NAS/.snap# 

Note that today (Jan 26) is after the last snapshot (Jan 24), yet “ceph fs snap-schedule status” reports that no snapshots were ever taken (“first” and “last” are null), which is obviously not true.  Moreover, no further snapshots have been taken since the mgr restart.
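For what it’s worth, the mismatch is easy to check mechanically.  Below is a minimal Python sketch — the JSON is the status output quoted above, and the snapshot names are an abbreviated sample from the `.snap` listing — showing the module’s counters disagreeing with what is actually on disk:

```python
import json

# Status JSON exactly as printed by `ceph fs snap-schedule status` above
status_json = (
    '{"fs": "cephfs", "subvol": null, "path": "/backups/nassie/NAS", '
    '"rel_path": "/backups/nassie/NAS", "schedule": "6h", '
    '"retention": {"h": 24, "d": 7, "w": 8, "m": 12}, '
    '"start": "2022-01-14T00:00:00", "created": "2022-01-14T22:18:38", '
    '"first": null, "last": null, "last_pruned": null, '
    '"created_count": 0, "pruned_count": 0, "active": true}'
)
status = json.loads(status_json)

# Two of the snapshot names actually present in .snap (list abbreviated)
snaps = ["scheduled-2022-01-24-12_00_00", "scheduled-2022-01-24-18_00_00"]

# The module claims it never created a snapshot...
print(status["created_count"], status["first"], status["last"])  # 0 None None
# ...while the directory listing clearly contains scheduled snapshots:
print(len(snaps))  # 2
```

So the on-disk state and the module’s bookkeeping went out of sync across the mgr restart.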

Any thoughts on what’s going on and how to fix it?

Thank you!

George

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



