Re: snap_schedule works after 1 hour of scheduling

On Wed, Oct 4, 2023 at 7:19 PM Kushagr Gupta
<kushagrguptasps.mun@xxxxxxxxx> wrote:
>
> Hi Milind,
>
> Thank you for your swift response.
>
> >>How many hours did you wait after the "start time" and decide to restart mgr?
> We waited for ~3 days before restarting the mgr-service.

The only thing I can think of is a stale mgr that wasn't restarted
after an upgrade.
Was an upgrade performed lately?

Did the dir exist at the time the snapshot was scheduled to take place?
If it didn't, then the schedule gets disabled until it is explicitly enabled again.
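The two behaviors at play here (the first snapshot landing one full interval after the start time, and a schedule being deactivated if its path was missing when it fired) can be sketched as follows. This is an illustrative sketch, not part of the snap_schedule mgr module: `first_snapshot_time` and `is_active` are hypothetical helpers, and the sketch assumes the JSON shape of `ceph fs snap-schedule status` shown later in this thread, handling only h/d/w schedule units.

```python
import json
from datetime import datetime, timedelta

# Maps the single-letter units of a snap-schedule spec like "1h" to
# timedelta keyword arguments (sketch: only h/d/w are handled).
_UNITS = {"h": "hours", "d": "days", "w": "weeks"}

def first_snapshot_time(status_json: str) -> datetime:
    """From `ceph fs snap-schedule status <path>` JSON, compute when the
    first snapshot is expected: one full interval AFTER the start time,
    not at the start time itself."""
    s = json.loads(status_json)
    start = datetime.fromisoformat(s["start"])
    count, unit = int(s["schedule"][:-1]), s["schedule"][-1]
    return start + timedelta(**{_UNITS[unit]: count})

def is_active(status_json: str) -> bool:
    """A schedule whose directory was missing at fire time gets
    deactivated; "active": false means it must be re-activated
    explicitly (e.g. via `ceph fs snap-schedule activate <path>`)."""
    return json.loads(status_json)["active"]

# Status as reported later in this thread, trimmed to relevant fields:
status = ('{"path": "/volumes/subvolgrp/test3", "schedule": "1h", '
          '"start": "2023-10-04T07:20:00", "active": true}')
print(first_snapshot_time(status))  # 2023-10-04 08:20:00
print(is_active(status))            # True
```

Under this model, a start time of 07:20:00 with a 1h schedule puts the first snapshot at 08:20:00, matching the status output quoted further down; a snapshot exactly at the start time would be the surprising case.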

>
> There was one more instance where we waited for 2 hours, then restarted, and in the third hour the schedule started working.
>
> Could you please guide us if we are doing anything wrong?
> Kindly let us know if any logs are required.
>
> Thanks and Regards,
> Kushagra Gupta
>
> On Wed, Oct 4, 2023 at 5:39 PM Milind Changire <mchangir@xxxxxxxxxx> wrote:
>>
>> On Wed, Oct 4, 2023 at 3:40 PM Kushagr Gupta
>> <kushagrguptasps.mun@xxxxxxxxx> wrote:
>> >
>> > Hi Team, Milind,
>> >
>> > Ceph-version: Quincy, Reef
>> > OS: Almalinux 8
>> >
>> > Issue: snap_schedule works after 1 hour of schedule
>> >
>> > Description:
>> >
>> > We are currently working in a 3-node ceph cluster.
>> > We are currently exploring the scheduled snapshot capability of the ceph-mgr module.
>> > To enable/configure scheduled snapshots, we followed the following link:
>> >
>> > https://docs.ceph.com/en/quincy/cephfs/snap-schedule/
>> >
>> > We were able to create snap schedules for the subvolumes as suggested.
>> > But we have observed two very strange behaviours:
>> > 1. The snap_schedules only work after we restart the ceph-mgr service on the mgr node:
>> > We restarted the mgr-service on the active mgr node, and after 1 hour the snapshots started getting created. I am attaching the log file from after the restart. The behaviour looks abnormal.
>>
>> A mgr restart is not required for the schedule to get triggered.
>> How many hours did you wait after the "start time" and decide to restart mgr?
>>
>> >
>> > So, for example, consider the below output:
>> > ```
>> > [root@storagenode-1 ~]# ceph fs snap-schedule status /volumes/subvolgrp/test3
>> > {"fs": "cephfs", "subvol": null, "path": "/volumes/subvolgrp/test3", "rel_path": "/volumes/subvolgrp/test3", "schedule": "1h", "retention": {}, "start": "2023-10-04T07:20:00", "created": "2023-10-04T07:18:41", "first": "2023-10-04T08:20:00", "last": "2023-10-04T09:20:00", "last_pruned": null, "created_count": 2, "pruned_count": 0, "active": true}
>> > [root@storagenode-1 ~]#
>> > ```
>> > As we can see in the above output, we created the schedule at 2023-10-04T07:18:41. The schedule was supposed to start at 2023-10-04T07:20:00, but the first snapshot was only created at 2023-10-04T08:20:00.
>>
>> Seems like normal behavior to me:
>> the schedule starts a 1h countdown from 2023-10-04T07:20:00 and
>> created the first snapshot at 2023-10-04T08:20:00.
>>
>> >
>> > Any input w.r.t the same will be of great help.
>> >
>> > Thanks and Regards
>> > Kushagra Gupta
>>
>>
>>
>> --
>> Milind
>>


-- 
Milind
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



