Re: quotacheck speed

On Mon, Feb 13, 2012 at 07:16:51PM +0100, Arkadiusz Miśkiewicz wrote:
> On Sunday 12 of February 2012, Dave Chinner wrote:
> > On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > > Hi,
> > > 
> > > When mounting 800GB filesystem (after repair for example) here quotacheck
> > > takes 10 minutes. Quite long time that adds to whole time of filesystem
> > > downtime (repair + quotacheck).
> > 
> > How long does a repair vs quotacheck of that same filesystem take?
> > repair has to iterate the inodes 2-3 times, so if that is faster
> > than quotacheck, then that is really important to know....
> 
> Don't have exact times but looking at nagios and dmesg it took about:
> repair ~20 minutes, quotacheck ~10 minutes (it's 800GB of maildirs).

Ok. So repair does 2-3 inode iterations in ~20 minutes versus quotacheck's
single pass in ~10 minutes - per pass, repair is a little faster than
quotacheck, then.

> > > I wonder if quotacheck can be somehow improved or done differently like
> > > doing it in parallel with normal fs usage (so there will be no downtime)
> > > ?
> > 
> > quotacheck makes the assumption that it is run on an otherwise idle
> > filesystem that nobody is accessing. Well, what it requires is that
> > nobody is modifying it. What we could do is bring the filesystem up
> > in a frozen state so that read-only access could be made but
> > modifications are blocked until the quotacheck is completed.
> 
> Read-only is better than no access at all. I was hoping that there is a way to 
> make quotacheck being recalculated on the fly with taking all write accesses 
> that happen in meantime into account.

The problem is that we'd need to keep two sets of dquots in memory
for each quota user while the quota check is being done - one to
track modifications being made, and the other to track quotacheck
progress. It gets complex quite rapidly then - where do we account
changes to an inode that hasn't been quota-checked yet? Or vice
versa? How do we even know if an inode has been quota checked?

These are probably all things that can be solved, but I get lost in
the complexity just thinking about it....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
