Re: mds crash loop

On Thu, Nov 7, 2019 at 5:50 AM Karsten Nielsen <karsten@xxxxxxxxxx> wrote:
>
> -----Original message-----
> From:   Yan, Zheng <ukernel@xxxxxxxxx>
> Sent:   Wed 06-11-2019 14:16
> Subject:        Re:  mds crash loop
> To:     Karsten Nielsen <karsten@xxxxxxxxxx>;
> CC:     ceph-users@xxxxxxx;
> > On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen <karsten@xxxxxxxxxx> wrote:
> > >
> > > -----Original message-----
> > > From:   Yan, Zheng <ukernel@xxxxxxxxx>
> > > Sent:   Wed 06-11-2019 08:15
> > > Subject:        Re:  mds crash loop
> > > To:     Karsten Nielsen <karsten@xxxxxxxxxx>;
> > > CC:     ceph-users@xxxxxxx;
> > > > On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen <karsten@xxxxxxxxxx> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > Last week I upgraded my ceph cluster from luminous to mimic 13.2.6.
> > > > > It was running fine for a while, but yesterday my mds went into a
> > > > > crash loop.
> > > > >
> > > > > I have 1 active and 1 standby mds for my cephfs, both of which are
> > > > > running the same crash loop.
> > > > > I am running ceph based on https://hub.docker.com/r/ceph/daemon
> > > > > version v3.2.7-stable-3.2-mimic-centos-7-x86_64 with an etcd kv store.
> > > > >
> > > > > Log details are: https://paste.debian.net/1113943/
> > > > >
> > > >
> > > > Please try again with debug_mds=20. Thanks.
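> > > >
> > > > For example, one way to raise the log level on the fly (mds.<name>
> > > > is a placeholder for your daemon's name):
> > > >
> > > >   ceph daemon mds.<name> config set debug_mds 20
> > > >
> > > > or, from any admin node:
> > > >
> > > >   ceph tell mds.* injectargs '--debug_mds 20'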
> > > >
> > > > Yan, Zheng
> > >
> > > Yes, I have set that, and had to move to pastebin.com as
> > > paste.debian.net apparently only supports 150k.
> > >
> > >
> > > https://pastebin.com/Gv7c5h54
> > >
> >
> > Looks like the on-disk root inode is corrupted. Have you encountered
> > anything unusual during the upgrade?
> >
> > Please run 'rados -p <cephfs metadata pool> stat 1.00000000.inode' and
> > check whether the object was modified before or after the 'luminous ->
> > 13.2.6' upgrade.
> > To fix the corrupted object, run 'cephfs-data-scan init --force-init',
> > then restart the mds. After the mds becomes active, run 'ceph daemon
> > mds.x scrub_path / force repair'.
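> >
> > Putting the repair steps together (assuming the metadata pool is
> > 'cephfs_metadata' and the active daemon is 'mds.a'; substitute your
> > own names):
> >
> >   cephfs-data-scan init --force-init
> >   # restart the mds and wait for it to become active, then:
> >   ceph daemon mds.a scrub_path / force repair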
> >
>
> I followed the steps and got the mds started, but now a lot of files are in lost+found (24283), and I have these errors in the mds log:
>
'cephfs-data-scan init --force-init' does not move files into
lost+found. Have you ever run any other 'cephfs-data-scan foo' or
'cephfs-journal-tool foo' commands?

> 2019-11-06 20:20:18.215 7f0bd9090700  1 mds.0.32011 cluster recovered.
> 2019-11-06 20:20:19.019 7f0bd2dfa700  0 mds.0.cache.dir(0x100013acfcb) _fetched missing object for [dir 0x100013acfcb /nextcloud/custom_apps/carnet/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dc4f5100]
> 2019-11-06 20:20:19.019 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013acfcb object missing on disk; some files may be lost (/nextcloud/custom_apps/carnet)
> 2019-11-06 20:20:19.275 7f0bd2dfa700  0 mds.0.cache.dir(0x100013a3156) _fetched missing object for [dir 0x100013a3156 /nextcloud/custom_apps/mail/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40000]
> 2019-11-06 20:20:19.275 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a3156 object missing on disk; some files may be lost (/nextcloud/custom_apps/mail)
> 2019-11-06 20:20:19.371 7f0bd2dfa700  0 mds.0.cache.dir(0x100013abb3c) _fetched missing object for [dir 0x100013abb3c /nextcloud/custom_apps/passwords/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40700]
> 2019-11-06 20:20:19.371 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013abb3c object missing on disk; some files may be lost (/nextcloud/custom_apps/passwords)
> 2019-11-06 20:20:19.383 7f0bd2dfa700  0 mds.0.cache.dir(0x100013a9b9b) _fetched missing object for [dir 0x100013a9b9b /nextcloud/custom_apps/phonetrack/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc40e00]
> 2019-11-06 20:20:19.383 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a9b9b object missing on disk; some files may be lost (/nextcloud/custom_apps/phonetrack)
> 2019-11-06 20:20:19.431 7f0bd2dfa700  0 mds.0.cache.dir(0x100013a2659) _fetched missing object for [dir 0x100013a2659 /nextcloud/custom_apps/richdocuments/ [2,head] auth v=0 cv=0/0 ap=1+0+0 state=1073741888|fetching f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x55d4dcc41500]
> 2019-11-06 20:20:19.431 7f0bd2dfa700 -1 log_channel(cluster) log [ERR] : dir 0x100013a2659 object missing on disk; some files may be lost (/nextcloud/custom_apps/richdocuments)
> 2019-11-06 20:20:22.360 7f0bd9090700  1 mds.k8s-node-01 Updating MDS map to version 32015 from mon.1
>
>
> >
> > > - Karsten
> > >
> > > >
> > > > > Thanks for any hints
> > > > > - Karsten
> > > > > _______________________________________________
> > > > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> > > >
> > > >
> >
> >
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


