On Tue, Jan 22, 2019 at 9:08 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> Hi Zheng,
>
> We also just saw this today and got a bit worried.
> Should we change to:

What is the error message (on stray dir or other dir)? Does the cluster
ever enable multi-active MDS?

> diff --git a/src/mds/CInode.cc b/src/mds/CInode.cc
> index e8c1bc8bc1..e2539390fb 100644
> --- a/src/mds/CInode.cc
> +++ b/src/mds/CInode.cc
> @@ -2040,7 +2040,7 @@ void CInode::finish_scatter_gather_update(int type)
>
>      if (pf->fragstat.nfiles < 0 ||
>          pf->fragstat.nsubdirs < 0) {
> -      clog->error() << "bad/negative dir size on "
> +      clog->warn() << "bad/negative dir size on "
>                     << dir->dirfrag() << " " << pf->fragstat;
>        assert(!"bad/negative fragstat" == g_conf->mds_verify_scatter);
>
> @@ -2077,7 +2077,7 @@ void CInode::finish_scatter_gather_update(int type)
>        if (state_test(CInode::STATE_REPAIRSTATS)) {
>          dout(20) << " dirstat mismatch, fixing" << dendl;
>        } else {
> -        clog->error() << "unmatched fragstat on " << ino() << ", inode has "
> +        clog->warn() << "unmatched fragstat on " << ino() << ", inode has "
>                       << pi->dirstat << ", dirfrags have " << dirstat;
>          assert(!"unmatched fragstat" == g_conf->mds_verify_scatter);
>        }
>
>
> Cheers, Dan
>
>
> On Sat, Oct 20, 2018 at 2:33 AM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>
>> No action is required. The MDS fixes this type of error automatically.
>>
>> On Fri, Oct 19, 2018 at 6:59 PM Burkhard Linke
>> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> >
>> > Hi,
>> >
>> > Upon failover or restart, our MDS complains that something is wrong with
>> > one of the stray directories:
>> >
>> > 2018-10-19 12:56:06.442151 7fc908e2d700 -1 log_channel(cluster) log [ERR] : bad/negative dir size on 0x607 f(v133 m2018-10-19 12:51:12.016360 -4=-5+1)
>> > 2018-10-19 12:56:06.442182 7fc908e2d700 -1 log_channel(cluster) log [ERR] : unmatched fragstat on 0x607, inode has f(v134 m2018-10-19 12:51:12.016360 -4=-5+1), dirfrags have f(v0 m2018-10-19 12:51:12.016360 1=0+1)
>> >
>> > How do we handle this problem?
>> >
>> > Regards,
>> >
>> > Burkhard
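
For anyone puzzled by the asserts in the patch above: the expression !"bad/negative fragstat" is always false, so assert(!"..." == g_conf->mds_verify_scatter) only aborts the MDS when the debug option mds_verify_scatter is enabled; with the default configuration the daemon just logs the message and repairs the counters, consistent with Zheng's note that no action is required. Below is a minimal standalone sketch of that pattern, not Ceph code; the check_fragstat() helper and the plain bool standing in for g_conf->mds_verify_scatter are illustrative assumptions.

    #include <cassert>
    #include <iostream>

    // Stand-in for g_conf->mds_verify_scatter (a debug option, off by default).
    static bool mds_verify_scatter = false;

    // Hypothetical helper mirroring the shape of the check in
    // CInode::finish_scatter_gather_update().
    void check_fragstat(long nfiles, long nsubdirs)
    {
      if (nfiles < 0 || nsubdirs < 0) {
        std::cerr << "bad/negative dir size" << std::endl;
        // !"bad/negative fragstat" is always false, so this assertion only
        // fails (and aborts) when mds_verify_scatter is true; otherwise the
        // code simply continues after logging.
        assert(!"bad/negative fragstat" == mds_verify_scatter);
      }
    }

    int main()
    {
      check_fragstat(-4, 1);  // logs, but does not abort with the default config
      return 0;
    }

Built with assertions enabled, this prints the log line and exits normally; flipping mds_verify_scatter to true makes the assert fire, mirroring how the real option turns these conditions into hard failures for debugging.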