Re: cephfs: [ERR] loaded dup inode

Hi Dan and Patrick,

this problem seems to be developing into a nightmare. I executed a find on the file system and had some initial success: the number of stray files dropped by about 8%. Unfortunately, that is about it. I'm now running a find on the snap dirs as well, but I don't have much hope. There must be a way to find out what is accumulating in the stray buckets. As I wrote in another reply to this thread, I can't dump the trees:

> I seem to have a problem. I cannot dump the mds tree:
>
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mdsdir/stray0'
> root inode is not in cache
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mds0/stray0'
> root inode is not in cache
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mds0' 0
> root inode is not in cache
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mdsdir' 0
> root inode is not in cache
>
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 get subtrees | grep path
>             "path": "",
>             "path": "~mds0",
>
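
While the finds are running, I'm watching the stray counters in the MDS perf dump to see whether anything moves at all. A sketch, assuming the mimic counter names num_strays and strays_* in the mds_cache section (please correct me if these were renamed):

# watch -n 30 "ceph daemon mds.ceph-08 perf dump | grep -E 'num_strays|strays_'"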

Even if the MDS can't dump the trees, this information must live somewhere in rados objects, so it should be possible to extract it with something along the lines of

# rados getxattr --pool=con-fs2-meta1 <OBJ_ID> parent | ceph-dencoder type inode_backtrace_t import - decode dump_json
# rados listomapkeys --pool=con-fs2-meta1 <OBJ_ID>

Which OBJ_IDs am I looking for? Where and how can I start traversing the structure? The version is the latest stable mimic.
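
My best guess so far (an assumption based on the CephFS on-disk layout, not verified on this cluster) is that the stray directories of rank 0 are the inodes 0x600 through 0x609, so their dirfrag objects in the metadata pool should be named 600.00000000 through 609.00000000. If that is right, something like this should count the entries in each stray bucket:

# for i in 0 1 2 3 4 5 6 7 8 9; do echo -n "stray$i: "; rados listomapkeys --pool=con-fs2-meta1 60$i.00000000 | wc -l; done

Please correct me if the object naming is off.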

Thanks for your help,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
Sent: 17 January 2022 09:35:02
To: Patrick Donnelly
Cc: Frank Schilder; ceph-users
Subject: Re: Re: cephfs: [ERR] loaded dup inode

On Sun, Jan 16, 2022 at 3:54 PM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
>
> Hi Dan,
>
> On Fri, Jan 14, 2022 at 6:32 AM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> > We had this long ago, related to a user generating lots of hard links.
> > Snapshots will have a similar effect.
> > (in these cases, if a user deletes the original file, the file goes
> > into stray until it is "reintegrated").
> >
> > If you can find the dir where they're working, `ls -lR` will force
> > those to reintegrate (you will see it because the stray count will
> > drop back down).
> > You might have to ls -lR in a snap directory, or in the current tree
> > -- you have to browse around and experiment.
> >
> > Pacific does this reintegration automatically.
>
> This reintegration is still not automatic (i.e., the MDS does not yet
> have a mechanism for hunting down the dentry to do the reintegration).
> The next version (planned) of Pacific will have reintegration
> triggered by recursive scrub:
>
> https://github.com/ceph/ceph/pull/44514
>
> which is significantly less disruptive than `ls -lR` or `find`.

Oops, sorry, my bad.
I was thinking about https://github.com/ceph/ceph/pull/33479

Cheers, Dan


>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Principal Software Engineer
> Red Hat, Inc.
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



