Re: loaded dup inode (but no mds crash)

On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
> >
> > On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> > >
> > > Hi all,
> > >
> > > Last night we had 60 ERRs like this:
> > >
> > > 2019-07-26 00:56:44.479240 7efc6cca1700  0 mds.2.cache.dir(0x617)
> > > _fetched  badness: got (but i already had) [inode 0x
> > > [...2,head] ~mds2/stray1/10006289992 auth v14438219972 dirtyparent
> > > s=116637332 nl=8 n(v0 rc2019-07-26 00:56:17.199090 b116637332 1=1+0)
> > > (iversion lock) | request=0 lock=0 caps=0 remoteparent=0 dirtyparent=1
> > > dirty=1 authpin=0 0x5561321eee00] mode 33188 mtime 2017-07-11
> > > 16:20:50.000000
> > > 2019-07-26 00:56:44.479333 7efc6cca1700 -1 log_channel(cluster) log
> > > [ERR] : loaded dup inode 0x10006289992 [2,head] v14437387948 at
> > > ~mds2/stray3/10006289992, but inode 0x10006289992.head v14438219972
> > > already exists at ~mds2/stray1/10006289992
> > >
> > > Looking through this ML, errors like this often correspond to crashing
> > > MDSs and a disaster recovery procedure to follow.
> > > We haven't had any crash....
> > >
> > > Is there something we should do *now* to fix these before any assert
> > > is triggered?
> >
> > You can use 'rados rmomapkey' to delete the inode with the smaller
> > version. For the above case:
> >
> > rados -p cephfs_metadata rmomapkey 617.00000000 10006289992_head
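> >
> > For example, you can verify the key before and after removal (a sketch
> > reusing the pool and object names from the log above; 617.00000000 is
> > the dirfrag object for ~mds2/stray3):
> >
> > # should print the dup entry before removal, and nothing after
> > rados -p cephfs_metadata listomapkeys 617.00000000 | grep 10006289992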
>
> I just checked and all of those inodes are no longer stray.
>
> # rados -p cephfs_metadata listomapkeys 617.00000000 | grep 10006289992
> #
>
> They were originally from hardlink deletion, and another link has been
> stat'ed in the meantime.
> I also double checked the parent xattr on the inodes in cephfs_data
> and they refer to a real parent dir, not stray.
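> (For reference, one way to dump that backtrace, assuming the default
> object naming of <ino>.00000000 in the data pool:
>
> # fetch the raw backtrace xattr, then decode it
> rados -p cephfs_data getxattr 10006289992.00000000 parent > backtrace.bin
> ceph-dencoder type inode_backtrace_t import backtrace.bin decode dump_json
>
> The decoded "ancestors" list should point at the real directory.)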
>
> So it looks like all those dup inodes have been reintegrated. Am I safe?
>

Check if 10006289992_head is in 615.00000000 (~mds2/stray1). If it is, delete it.
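
For example (using the names from your log; verify before deleting):

# is a stale dentry still present in ~mds2/stray1?
rados -p cephfs_metadata listomapkeys 615.00000000 | grep 10006289992
# if so, remove it
rados -p cephfs_metadata rmomapkey 615.00000000 10006289992_head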

> -- dan
>
> > I suggest running 'cephfs-data-scan scan_links' after taking down cephfs
> > (either use 'mds set <fs_name> down true' or flush all journals and
> > kill all MDSs).
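> >
> > A sketch of that sequence (exact commands vary by release; <fs_name>
> > and mds.<id> below are placeholders):
> >
> > # flush each active rank's journal via its admin socket
> > ceph daemon mds.<id> flush journal
> > # take the filesystem down so no MDS is active
> > ceph fs set <fs_name> down true
> > # with all MDSs stopped, repair link counts and stray entries
> > cephfs-data-scan scan_links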
> >
> >
> > Regards
> > Yan, Zheng
> >
> >
> >
> > >
> > > Thanks!
> > >
> > > Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


