Have you tried "ceph daemon mds.NAME dump snaps" (available since mimic)?

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Lars Täuber <taeuber@xxxxxxx>
Sent: 17 December 2019 12:32:34
To: Stephan Mueller
Cc: ceph-users@xxxxxxx
Subject: Re: list CephFS snapshots

Hi Michael,

thanks for your gist. This is at least a way to do it.
But there are many directories in our cluster. The "find $1 -type d" alone takes about 90 minutes to find all 2.6 million directories.

Is there another (faster) way, e.g. via the mds?

Cheers,
Lars

Mon, 16 Dec 2019 17:03:41 +0000 Stephan Mueller <smueller@xxxxxxxx> ==> "taeuber@xxxxxxx" <taeuber@xxxxxxx>, "ceph-users@xxxxxxx" <ceph-users@xxxxxxx> :
> Hi Lars,
>
> > Is there a way to list all snapshots existing in a (subdir of)
> > CephFS?
> > I can't use the find command to look for the ".snap" dirs.
>
> You can, but you can't search for the '.snap' directories; you have to
> append them to the directory, like `find $cephFsDir/.snap`, but it's
> better to use `ls` instead to list all snapshots.
>
> > I'd like to remove certain (or all) snapshots within a CephFS. But
> > how do I find them?
>
> I just created a gist for you that can do that:
> https://gist.github.com/Devp00l/2473f5953d578f440fc71b3d602a9c23
>
> As you can see in the script, snapshots starting with an underscore are
> filtered out, as these directories belong to snapshots that were created
> in upper directories, and these underscore snapshots can't be used for
> deletion.
>
> The deletion of a snapshot is done by calling `rmdir` on it.
>
> But if you really want to manage CephFS snapshots easily, take a look at
> the dashboard, as we have integrated snapshot and quota management
> by now :)
>
> You can delete multiple snapshots of a directory or just create new
> snapshots on a directory basis easily through the UI.
>
> Stephan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
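
For reference, a minimal sketch of the admin-socket query Frank suggests, assuming the MDS daemon on the local host is named "a" (substitute your own daemon name) and a Mimic or later release:

    # run on the host where the MDS daemon lives; needs access to its admin socket
    ceph daemon mds.a dump snaps

    # the output should be JSON, so it can be pretty-printed, e.g. with jq if installed
    ceph daemon mds.a dump snaps | jq .

Because this asks the MDS directly, it avoids walking the directory tree from a client mount, which is the slow part Lars describes.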
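And a minimal sketch of the client-side approach along the lines of the gist (the script name and paths here are made up, not taken from the gist): a bash loop over "find -type d" that lists each directory's .snap entries, skips the "_"-prefixed ones Stephan mentions, and shows the rmdir call used for deletion:

    #!/bin/bash
    # usage: ./list-snaps.sh /mnt/cephfs/some/subdir
    # walks every directory below $1 (slow on large trees, as Lars notes)
    find "$1" -type d | while read -r dir; do
        for snap in "$dir"/.snap/*/; do
            [ -d "$snap" ] || continue            # directory has no snapshots
            name=$(basename "$snap")
            case "$name" in _*) continue ;; esac  # skip snapshots inherited from parent dirs
            echo "$dir/.snap/$name"
        done
    done

    # a listed snapshot is then removed with rmdir, e.g.:
    # rmdir "/mnt/cephfs/some/subdir/.snap/mysnapshot"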