Re: CephFS - large omap object

On Mon, Mar 18, 2019 at 7:28 PM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>
> On Mon, Mar 18, 2019 at 9:50 PM Dylan McCulloch <dmc@xxxxxxxxxxxxxx> wrote:
> >
> >
> > >please run the following commands. They will show where 4.00000000 is:
> > >
> > >rados -p hpcfs_metadata getxattr 4.00000000 parent >/tmp/parent
> > >ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
> > >
> >
> > $ ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
> > {
> >     "ino": 4,
> >     "ancestors": [
> >         {
> >             "dirino": 1,
> >             "dname": "lost+found",
> >             "version": 1
> >         }
> >     ],
> >     "pool": 20,
> >     "old_pools": []
> > }
> >
> > I guess it may have a very large number of files from previous recovery operations?
> >
>
> Yes, these files are created by cephfs-data-scan. If you don't want
> them, you can delete "lost+found"
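
For context on why this one object trips the large omap warning:
4.00000000 is the dirfrag object of that lost+found directory, and each
omap key on it is one recovered dentry. A rough sketch of counting the
entries and then removing the directory from a client mount (assuming
the hpcfs_metadata pool name from above and a hypothetical mount point
/mnt/cephfs):

$ rados -p hpcfs_metadata listomapkeys 4.00000000 | wc -l
$ rm -rf /mnt/cephfs/lost+found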

This certainly makes sense, but even with that pointer I can't find
how it's picking inode 4. That should probably be documented? :)
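
If someone does chase that down, grepping the source tree for the
lost+found reserved-inode constant is probably the quickest breadcrumb
(a sketch, assuming a local checkout under ./ceph and the usual
LOST_AND_FOUND naming for that constant):

$ grep -rn "LOST_AND_FOUND" ceph/src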
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


