Re: list CephFS snapshots

Hi Frank,

The command takes the metadata pool, not the data pool, as its argument!

rados -p <CephFS metadata pool> getxattr $(printf "%x\n" <CephFS Inode Number>).00000000 parent | ceph-dencoder type inode_backtrace_t import - decode dump_json

The result is JSON output that is not very human-readable:
{
    "ino": 1099511755110,
    "ancestors": [
        {
            "dirino": 1099511946098,
            "dname": "CCC",
            "version": 1850022
        },
        {
            "dirino": 1099511627776,
            "dname": "BBB",
            "version": 1847528
        },
        {
            "dirino": 1,
            "dname": "AAA",
            "version": 222830
        }
    ],
    "pool": 2,
    "old_pools": []
}

The directory containing the .snap dir is /AAA/BBB/CCC.
But the snapshot is in /AAA/BBB/CCC/.snap/$SNAPDIR.
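The path can be read off the backtrace mechanically. A sketch (not part of the original mail): the "ancestors" array lists the leaf first, so it has to be reversed before joining; python3 is used here only for the JSON handling.

```shell
# Sketch: turn the decoded backtrace JSON into a path.
# "ancestors" is ordered leaf-first (CCC, BBB, AAA), so reverse it.
backtrace_to_path() {
  python3 -c '
import json, sys
bt = json.load(sys.stdin)
print("/" + "/".join(a["dname"] for a in reversed(bt["ancestors"])))
'
}
# Usage (pool name and inode are placeholders):
#   rados -p <CephFS metadata pool> getxattr <hex-ino>.00000000 parent \
#     | ceph-dencoder type inode_backtrace_t import - decode dump_json \
#     | backtrace_to_path
```

Fed the JSON above, this prints /AAA/BBB/CCC.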

Depending on which subdirectory is mounted, a client may only see part of this path.
In our case the client mounts with the option name=AAA, so the snapshot is visible as
/mnt/point/BBB/CCC/.snap/$SNAPDIR

The name ".snap" depends on a mount option, which the metadata pool cannot know. The name of the snapshot directory itself is not given either.
So you can only find the directories that contain snapshots, not the actual snapshot directories.
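For illustration (not from the original mail): the snapshot directory name is a client-side choice, made at mount time. The kernel CephFS client accepts a snapdirname mount option (see mount.ceph(8)); the monitor address and paths below are placeholders.

```shell
# Illustration only: the snapshot directory name is chosen by the client,
# so the metadata pool cannot know it. Placeholder mon address and paths.
# mount -t ceph mon1:6789:/ /mnt/point -o name=AAA,snapdirname=.snapshots
# Snapshots under /mnt/point/BBB/CCC would then appear in
# /mnt/point/BBB/CCC/.snapshots/ instead of .../.snap/
```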

So you get the same information from `find -inum`, and it's simpler to use:

$ find /mnt/point/ -inum <CephFS Inode Number> -print -quit
/mnt/point/BBB/CCC

and then:
ls -1 /mnt/point/BBB/CCC/.snap
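The two steps can be combined into a small helper (a sketch, not from the original mail; the function name is made up): take the decimal inode number from "dump snaps", locate the directory, and list its snapshots. The hex form needed for the rados command is just a printf away.

```shell
# Sketch: given a mount point and a decimal inode number from
# "dump snaps", find the directory and list its snapshots.
list_snaps_for_inum() {  # usage: list_snaps_for_inum <mountpoint> <decimal inode>
  dir=$(find "$1" -inum "$2" -print -quit)
  [ -n "$dir" ] && ls -1 "$dir/.snap"
}

# The hex form needed for the rados command:
printf '%x\n' 1099511755110   # -> 1000001f166
```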


Best regards,
Lars

Wed, 18 Dec 2019 09:42:39 +0000
Frank Schilder <frans@xxxxxx> ==> Lars Täuber <taeuber@xxxxxxx> :
> I found it, should have taken a note:
> 
> Command: rados -p <CephFS data pool> getxattr <CephFS Inode Number>.00000000 parent | ceph-dencoder type inode_backtrace_t import - decode dump_json
> 
> Note: <CephFS Inode Number> is hex encoded, use 'printf "%x\n" INUM' to convert from the decimal numbers obtained with dump snaps.
> 
> Explanation: <http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034958.html>
> 
> I don't have snapshots enabled and cannot test. Could you confirm to the mailing list that the above procedure will provide the path information all the way down to the actual snap dir, that is, contains entries ".snap" and "<DirName under .snap>" at the top?
> 
> Best regards,
> 
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> 
> ________________________________________
> From: Lars Täuber <taeuber@xxxxxxx>
> Sent: 18 December 2019 09:24:02
> To: Frank Schilder
> Cc: Marc Roos; ceph-users
> Subject: Re:  Re: list CephFS snapshots
> 
> Hi Frank,
> 
> thanks for your hint. The find for the inode is really fast. At least fast enough for me:
> $ time find /mnt/point -inum 1093514215110 -print -quit
> real    0m3,009s
> user    0m0,037s
> sys     0m0,032s
> 
> Cheers,
> Lars
> 
> 
> Tue, 17 Dec 2019 15:08:06 +0000
> Frank Schilder <frans@xxxxxx> ==> Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>, taeuber <taeuber@xxxxxxx> :
> > I think you can do a find for the inode (-inum n). At least I hope you can.
> >
> > However, I vaguely remember that there was a thread where someone gave a really nice MDS command for finding the path to an inode in no time.
> >
> > Best regards,
> >
> > =================
> > Frank Schilder
> > AIT Risø Campus
> > Bygning 109, rum S14
> >
> > ________________________________________
> > From: Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
> > Sent: 17 December 2019 14:19:54
> > To: Frank Schilder; taeuber
> > Cc: ceph-users
> > Subject: RE:  Re: list CephFS snapshots
> >
> > Thanks, Good tip! If I do not know where I created these, is there a way
> > to get their location in the filesystem? Or maybe a command that deletes
> > by snapid?
> >
> >
> >         {
> >             "snapid": 54,
> >             "ino": 1099519875627,
> >             "stamp": "2017-09-13 21:21:35.769863",
> >             "name": "snap-20170913"
> >         },
> >         {
> >             "snapid": 153485,
> >             "ino": 1099519910289,
> >             "stamp": "2019-10-06 03:18:03.933510",
> >             "name": "snap-6"
> >         },
> >         {
> >             "snapid": 153489,
> >             "ino": 1099519910289,
> >             "stamp": "2019-10-07 03:21:03.218324",
> >             "name": "snap-7"
> >         },
> >
> >
> >
> > -----Original Message-----
> > Cc: ceph-users@xxxxxxx
> > Subject:  Re: list CephFS snapshots
> >
> > Have you tried "ceph daemon mds.NAME dump snaps" (available since
> > mimic)?
> >
> > =================
> > Frank Schilder
> > AIT Risø Campus
> > Bygning 109, rum S14
> >
> > ________________________________________
> > From: Lars Täuber <taeuber@xxxxxxx>
> > Sent: 17 December 2019 12:32:34
> > To: Stephan Mueller
> > Cc: ceph-users@xxxxxxx
> > Subject:  Re: list CephFS snapshots
> >
> > Hi Michael,
> >
> > thanks for your gist.
> > This is at least a way to do it. But there are many directories in our
> > cluster.
> > The "find $1 -type d" lasts for about 90 minutes to find all 2.6 million
> > directories.
> >
> > Is there another (faster) way e.g. via mds?
> >
> > Cheers,
> > Lars
> >
> >
> > Mon, 16 Dec 2019 17:03:41 +0000
> > Stephan Mueller <smueller@xxxxxxxx> ==> "taeuber@xxxxxxx"
> > <taeuber@xxxxxxx>, "ceph-users@xxxxxxx" <ceph-users@xxxxxxx> :  
> > > Hi Lars,
> > >  
> > > > Is there a means to list all snapshots existing in a (subdir of)
> > > > CephFS?
> > > > I can't use the find command to look for the ".snap" dirs.  
> > >
> > > You can, but you can't search for the '.snap' directories; you have to
> > > append ".snap" to the directory, like `find $cephFsDir/.snap`, but it's
> > > better to use `ls` instead to list all snapshots.
> > >  
> > > >
> > > > I'd like to remove certain (or all) snapshots within a CephFS. But
> > > > how do I find them?
> > > >  
> > >
> > > I just created a gist for you that can do that:
> > > https://gist.github.com/Devp00l/2473f5953d578f440fc71b3d602a9c23
> > >
> > > As you can see in the script, snapshots starting with an underscore
> > > are filtered out as these directories belong to snapshots that were
> > > created in upper directories and these underscore snapshots can't be
> > > used for deletion.
> > >
> > > The deletion of a snapshot can be done by calling `rmdir`.
> > >
> > > But if you really want to manage CephFS snapshots easily take a look
> > > at the dashboard, as we have integrated the snapshot and quota
> > > management by now :)
> > >
> > > You can delete multiple snapshots of a directory or just create new
> > > snapshots on a directory basis easily through the UI.
> > >
> > >
> > > Stephan  


-- 
                            Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstraße 22-23                      10117 Berlin
Tel.: +49 30 20370-352           http://www.bbaw.de



