Re: quotacheck speed

On Sunday 12 of February 2012, Dave Chinner wrote:
> On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > Hi,
> > 
> > When mounting an 800GB filesystem (after repair, for example), quotacheck
> > here takes 10 minutes. That is quite a long time added to the total
> > filesystem downtime (repair + quotacheck).
> 
> How long does a repair vs quotacheck of that same filesystem take?
> repair has to iterate the inodes 2-3 times, so if that is faster
> than quotacheck, then that is really important to know....

I don't have exact times, but judging from nagios and dmesg it took roughly:
repair ~20 minutes, quotacheck ~10 minutes (it's 800GB of maildirs).

> 
> > I wonder if quotacheck could somehow be improved or done differently,
> > e.g. by running it in parallel with normal fs usage (so there would be
> > no downtime)?
> 
> quotacheck makes the assumption that it is run on an otherwise idle
> filesystem that nobody is accessing. Well, what it requires is that
> nobody is modifying it. What we could do is bring the filesystem up
> in a frozen state so that read-only access could be made but
> modifications are blocked until the quotacheck is completed.

Read-only is better than no access at all. I was hoping there is a way to have
quotacheck recalculated on the fly, taking into account all write accesses that
happen in the meantime.
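
If I understand the "frozen state" idea correctly, the filesystem would behave
like one frozen via fsfreeze(8): reads still go through, but any modification
blocks until the thaw. Just to illustrate those semantics, a minimal sketch
using the generic FIFREEZE/FITHAW ioctls (this only shows the existing
userspace freeze interface and needs root; it is not what a mount-time
implementation would look like):

/*
 * Freeze and thaw a mounted filesystem with the generic FIFREEZE/FITHAW
 * ioctls (the same mechanism fsfreeze(8) uses).  While frozen, reads are
 * allowed but any modification blocks until the filesystem is thawed.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                   /* FIFREEZE, FITHAW */

int main(int argc, char **argv)
{
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, FIFREEZE, 0) < 0) {       /* block all modifications */
                perror("FIFREEZE");
                return 1;
        }
        /* read-only work (e.g. a quotacheck-style scan) would happen here */
        if (ioctl(fd, FITHAW, 0) < 0) {         /* allow writes again */
                perror("FITHAW");
                return 1;
        }
        close(fd);
        return 0;
}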

> Also, quotacheck uses the bulkstat code to iterate all the inodes
> quickly. Improvements in bulkstat speed will translate directly
> into faster quotachecks. quotacheck could probably drive bulkstat in
> a parallel manner to do the quotacheck faster, but that assumes that
> the underlying storage is not already seek bound. What is the
> utilisation of the underlying storage and CPU while quotacheck is
> running?

Will try to gather more information then.
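
In the meantime, to make sure I understand the bulkstat iteration you
describe, here is a minimal userspace sketch that walks every inode via the
XFS_IOC_FSBULKSTAT ioctl (assuming the xfsprogs development headers are
installed). It is only an illustration of the walk itself; the real
quotacheck runs in the kernel, and the per-dquot accounting is only hinted at
in a comment:

/*
 * Walk all inodes of an XFS filesystem with XFS_IOC_FSBULKSTAT and print
 * the fields a quotacheck-style scan would accumulate per uid/gid.
 * Error handling is kept minimal for brevity.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>            /* XFS_IOC_FSBULKSTAT, struct xfs_bstat */

#define NBSTAT  1024            /* inodes fetched per ioctl call */

int main(int argc, char **argv)
{
        static struct xfs_bstat buf[NBSTAT];
        struct xfs_fsop_bulkreq breq;
        __u64 lastino = 0;      /* resume cookie, updated by the kernel */
        __s32 count;
        int fd, i;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <xfs mountpoint>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        breq.lastip = &lastino;
        breq.icount = NBSTAT;
        breq.ubuffer = buf;
        breq.ocount = &count;

        while (ioctl(fd, XFS_IOC_FSBULKSTAT, &breq) == 0 && count > 0) {
                for (i = 0; i < count; i++) {
                        /*
                         * This is where quotacheck conceptually adds one
                         * inode and bs_blocks worth of space to the dquots
                         * for bs_uid and bs_gid.
                         */
                        printf("ino %llu uid %u blocks %lld\n",
                               (unsigned long long)buf[i].bs_ino,
                               (unsigned)buf[i].bs_uid,
                               (long long)buf[i].bs_blocks);
                }
        }
        close(fd);
        return 0;
}

Presumably "driving bulkstat in a parallel manner" would mean running several
such walks at once, each starting in a different allocation group?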

> 
> Otherwise, bulkstat inode prefetching could be improved like
> xfs_repair was to look at inode chunk density and change IO patterns
> and to slice and dice large IO buffers into smaller inode buffers.
> We can actually do that efficiently now that we don't use the page
> cache for metadata caching. If repair is iterating inodes faster
> than bulkstat, then this optimisation will be the reason and having
> that data point is very important....
> 
> Cheers,
> 
> Dave.
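
Just to check I understand the "slice and dice" idea: instead of issuing one
small read per inode cluster, do one large sequential read across a dense run
of inode chunks and then carve that single buffer into individual cluster
buffers? A purely conceptual sketch of what I mean, with made-up sizes and a
stub helper (not xfs_repair or kernel code):

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

#define BIG_IO_SIZE     (4 * 1024 * 1024)   /* one large sequential read */
#define CLUSTER_SIZE    (8 * 1024)          /* hypothetical inode cluster size */

/* stand-in for whatever per-cluster processing the scanner does */
static void process_inode_cluster(const char *cluster, size_t len)
{
        (void)cluster;
        (void)len;
}

/* read one dense region in a single IO, then slice it into cluster buffers */
static void scan_dense_region(int fd, off_t offset)
{
        static char big_buf[BIG_IO_SIZE];
        ssize_t nread = pread(fd, big_buf, sizeof(big_buf), offset);
        ssize_t pos;

        if (nread <= 0)
                return;
        for (pos = 0; pos + CLUSTER_SIZE <= nread; pos += CLUSTER_SIZE)
                process_inode_cluster(big_buf + pos, CLUSTER_SIZE);
}

int main(int argc, char **argv)
{
        int fd;

        if (argc != 2)
                return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
                return 1;
        scan_dense_region(fd, 0);       /* e.g. scan the region at offset 0 */
        close(fd);
        return 0;
}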


-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


