Re: cephfs - max snapshot limit?

Hi Tobias,

On Thu, Apr 27, 2023 at 2:42 PM Tobias Hachmer <t.hachmer@xxxxxx> wrote:
>
> Hi sur5r,
>
> Am 4/27/23 um 10:33 schrieb Jakob Haufe:
>  > On Thu, 27 Apr 2023 09:07:10 +0200
>  > Tobias Hachmer <t.hachmer@xxxxxx> wrote:
>  >
>  >> But we observed that a maximum of 50 snapshots are preserved. If a new
>  >> snapshot is created, the oldest one (the 51st) is deleted.
>  >>
>  >> Is there a limit on the maximum number of cephfs snapshots, or is this
>  >> maybe a bug?
>  >
>  > I've been wondering the same thing for about 6 months now and found the
>  > reason just yesterday.
>  >
>  > The snap-schedule mgr module has a hard limit on how many snapshots it
>  > preserves, see [1]. It's even documented at [2] in section
>  > "Limitations" near the end of the page.
>  >
>  > The commit[3] implementing this not only fails to explain the reason
>  > for the number, it doesn't even mention that it introduces the limit.
>
> Thanks. I've read the documentation, but it's not clear enough. I thought
> "the retention list will be shortened to the newest 50 snapshots" meant the
> list would merely be truncated, not that the snapshots themselves would
> effectively be deleted.
>
> So as you stated the max. number of snapshots is currently a hard limit.
>
> Can anyone clarify the reasons for this? If there is a good reason for a
> hard limit, it would be great to be able to schedule snapshots with more
> granularity, e.g. Mon-Fri every two hours between 8am and 6pm.

This was done so that a particular directory does not eat up all the
snapshots: there is a per-directory limit on the number of snapshots,
controlled by mds_max_snaps_per_dir, which defaults to 100, so
MAX_SNAPS_PER_PATH was chosen to be well below that. Also, at one point
the kernel client (kclient) could not handle more than 400 snapshots per
file system, but we have come a long way since then and that is no
longer a constraint.
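
If the per-directory cap itself is the concern, the MDS option should be
adjustable with the usual config commands; a rough sketch (the value 150
is just an example):

    # check the current per-directory snapshot cap (default 100)
    ceph config get mds mds_max_snaps_per_dir

    # raise it, e.g. to 150; this counts all snapshots under a directory,
    # scheduled or manual
    ceph config set mds mds_max_snaps_per_dir 150

Note that MAX_SNAPS_PER_PATH is a hardcoded constant in the snap_schedule
mgr module, so raising the MDS option does not by itself lift the
scheduler's 50-snapshot retention cap.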

>
>  > Given the limitation is per directory, I'm currently trying this:
>  >
>  > / 1d 30d
>  > /foo 1h 48h
>  > /bar 1h 48h
>  >
>  > I forgot to activate the new schedules yesterday so I can't say whether
>  > it works as expected yet.
>
> Please let me know if this works.
>
> Thanks,
> Tobias
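
For reference, the per-directory layout quoted above would translate
roughly into snap-schedule commands like the following, assuming the
first column is the schedule and the second the retention (paths and
retention specs are illustrative, not tested here):

    # daily snapshots on the root, keep 30 dailies
    ceph fs snap-schedule add / 1d
    ceph fs snap-schedule retention add / 30d

    # hourly snapshots on /foo and /bar, keep 48 hourlies each
    ceph fs snap-schedule add /foo 1h
    ceph fs snap-schedule retention add /foo 48h
    ceph fs snap-schedule add /bar 1h
    ceph fs snap-schedule retention add /bar 48h

    # schedules need to be activated before they fire
    ceph fs snap-schedule activate /
    ceph fs snap-schedule activate /foo
    ceph fs snap-schedule activate /bar

    # verify what is scheduled
    ceph fs snap-schedule status /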



-- 
Cheers,
Venky
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



