Re: very long log recovery at mount

On Fri, Oct 23, 2015 at 09:22:04AM +0200, Arkadiusz Miśkiewicz wrote:
> On Friday 23 of October 2015, Dave Chinner wrote:
> > On Wed, Oct 21, 2015 at 11:27:52AM +0200, Arkadiusz Miśkiewicz wrote:
> > > Hi.
> > > 
> > > I ran into the following situation: fresh boot, 4.1.10 kernel, init
> > > scripts start mounting filesystems. One fs wasn't very lucky:
> > > 
> > > [   15.979538] XFS (md3): Mounting V4 Filesystem
> > > [   16.256316] XFS (md3): Ending clean mount
> > > [   28.343346] XFS (md4): Mounting V4 Filesystem
> > > [   28.629918] XFS (md4): Ending clean mount
> > > [   28.662125] XFS (md5): Mounting V4 Filesystem
> > > [   28.980142] XFS (md5): Ending clean mount
> > > [   29.049421] XFS (md6): Mounting V4 Filesystem
> > > [   29.447725] XFS (md6): Starting recovery (logdev: internal)
> > > [ 4517.327332] XFS (md6): Ending recovery (logdev: internal)
> > > 
> > > It took over 1h to mount md6 filesystem.
> > > 
> > > Questions:
> > > - is it possible to log how much data needs to be recovered
> > > from the log?
> > 
> > Yes.
> > 
> > > Some data that would give a hint at how big the job is (and thus a
> > > rough estimate of how long it will take). Not sure if that's known
> > > at the time this message is being printed.
> > 
It's not known at that point, and can't be known until recovery has
parsed the log and read all the objects from disk it needs to recover.
> 
> So I assume it's not available early enough to be usable.

No, it's not.

> > > - now such a long mount time is almost insane, so I wonder what
> > > the reason could be. Is the process multithreaded or single
> > > threaded? The CPUs were idle.
> > 
> > What kernel? 
> 
 > I got such">
"> > I ran into the following situation: fresh boot, 4.1.10 kernel"

Sorry, my fault, I missed that.

> > We now have readahead which minimises the IO latency of
> > pulling objects into the kernel for recovery, but if you are
> > recovering a couple of million individual inode changes (e.g. from a
> > 'chproj -R /path/with/millions/of/files') then it takes a long time
> > to read in all the inodes and write them all back out.
> 
> It was something like 10x rsnapshot runs there (so tons of files
> copied/hardlinked, then rsynced, etc.).

Ok, so lots of hardlinks, which will result in lots of individual
inode cores being logged due to the link count changes.
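
For illustration, here's a minimal userspace sketch (not rsnapshot
itself, just the access pattern) of the kind of hardlink pass that
produces this: every successful link() bumps the source inode's link
count, and each bump dirties an inode core that has to be logged.

#include <stdio.h>
#include <unistd.h>

/*
 * Hardlink "src dst" pairs read from stdin, one pair per line.
 * Each successful link() bumps the source inode's link count, so
 * the filesystem logs that inode's core - one small log entry per
 * file, which is where millions of logged inodes come from.
 */
int
main(void)
{
	char src[4096], dst[4096];

	while (scanf("%4095s %4095s", src, dst) == 2) {
		if (link(src, dst) < 0)
			perror(dst);
	}
	return 0;
}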

> > A single
> > inode in the log like this only consumes about 200 bytes of log
> > space, so there can easily be 5000 inodes to recover per megabyte
> > of log space you have. And if you have a 2GB log, then that could
> > contain 10 million inode core changes that need to be recovered....
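
(Spelling that arithmetic out: at ~200 bytes per logged inode core, a
megabyte of log holds about 1048576 / 200 ~= 5000 inode cores, so a
2GB log can hold roughly 2048 * 5000 ~= 10 million of them.)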
> 
> Ok, so what I'm looking for is any kind of indication (in dmesg probably) that 
> it will take long time (thus was asking about log recovery size). Because 
> right now it's hard to estimate how long downtime will be and ability to 
> estimate such things is important.

Well, we don't count such things at present. We make multiple passes
over the log during recovery - the first pass records all the
cancelled buffers (i.e. freed metadata buffers) so that we can avoid
replaying into them. xlog_recover_commit_pass1() doesn't count or
look at anything else, but I suspect we could add some kind of "this
many objects to recover" accounting output at the end of that phase.
We can't do any time estimates, though, because those are entirely
dependent on the underlying storage, which we know nothing about.
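
To make that concrete, here's a rough, untested sketch of what such
pass-1 accounting might look like. The stats structure and both
helpers are hypothetical, invented purely for illustration; only
xlog_recover_commit_pass1(), the XFS_LI_* item types and xfs_notice()
are real.

/* Hypothetical per-recovery counters, filled in during pass 1. */
struct xlog_recover_stats {
	unsigned long	buf_items;	/* buffer log items seen */
	unsigned long	inode_items;	/* inode core changes seen */
	unsigned long	other_items;	/* everything else */
};

/* Would be called once per log item as pass 1 walks the log. */
static void
xlog_recover_count_item(struct xlog_recover_stats *stats, int item_type)
{
	switch (item_type) {
	case XFS_LI_BUF:
		stats->buf_items++;
		break;
	case XFS_LI_INODE:
		stats->inode_items++;
		break;
	default:
		stats->other_items++;
		break;
	}
}

/* Emitted once at the end of pass 1, before pass 2 starts replay. */
static void
xlog_recover_report(struct xfs_mount *mp, struct xlog_recover_stats *stats)
{
	xfs_notice(mp,
		"log recovery: %lu inodes, %lu buffers, %lu other items to replay",
		stats->inode_items, stats->buf_items, stats->other_items);
}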

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
