On Thu, Nov 14, 2019 at 11:48 AM Sage Weil <sage@xxxxxxxxxxxx> wrote:
On Thu, 14 Nov 2019, Patrick Donnelly wrote:
> On Wed, Nov 13, 2019 at 6:36 PM Jerry Lee <leisurelysw24@xxxxxxxxx> wrote:
> >
> > On Thu, 14 Nov 2019 at 07:07, Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Nov 13, 2019 at 2:30 AM Jerry Lee <leisurelysw24@xxxxxxxxx> wrote:
> > > > Recently, I've been evaluating the snapshot feature of CephFS from the
> > > > kernel client and everything works like a charm. But it seems that
> > > > reverting to a snapshot is not currently available. Is there some
> > > > reason or technical limitation why this feature is not provided? Any
> > > > insights or ideas are appreciated.
> > >
> > > Please provide more information about what you tried to do (commands
> > > run) and how it surprised you.
> >
> > What I would like to do is roll back a snapshotted directory to a
> > previous snapshot. It looks like the operation can be done by
> > overwriting the current versions of the files/directories with those
> > from a previous snapshot via cp, but cp may take a lot of time when
> > there are many files and directories in the target directory. Is there
> > any way to achieve this much faster from within CephFS, via a command
> > like "ceph fs <cephfs_name> <dir> snap rollback <snapname>" (just an
> > example)? Thank you!
>
> RADOS doesn't support rollback of snapshots, so it needs to be done
> manually. The best tool for this would probably be rsync from the
> .snap directory with appropriate options, including deletion of files
> that do not exist in the source (the snapshot).
rsync is the best bet now, yeah.
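Something like the following should do it (all names here are just
placeholders: assume the filesystem is mounted at /mnt/cephfs, the
directory is mydir, and the snapshot is mysnap):

    rsync -a --delete /mnt/cephfs/mydir/.snap/mysnap/ /mnt/cephfs/mydir/

The trailing slashes tell rsync to sync the contents of the snapshot into
the live directory, and --delete removes files that do not exist in the
snapshot.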
RADOS does have a rollback operation that uses clone where it can, but
it's a per-object operation, so something still needs to walk the
hierarchy and roll back each file's content. The MDS could do this more
efficiently than rsync given what it knows about the snapped inodes
(skipping untouched inodes or, eventually, entire subtrees) but it's a
non-trivial amount of work to implement.
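(To illustrate the per-object nature: for pool snapshots the rados CLI
exposes this roughly as

    rados -p <pool> rollback <object-name> <snap-name>

but CephFS snapshots are self-managed snaps rather than pool snapshots,
and RADOS has no idea which objects make up which files, so that per-file
walk still has to come from the MDS or a client.)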
Would it make sense to extend CephFS to leverage reflinks for cases like this? That could be faster than rsync and more space-efficient, though it would require some development time.
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1