Re: Performance decrease over time

On Fri, Aug 02, 2013 at 03:14:04AM -0500, Stan Hoeppner wrote:
> On 8/1/2013 9:25 PM, Dave Chinner wrote:
> ...
> 
> > So really, the numbers only reflect a difference in layout of the
> > files being tested. And using small direct IO means that the
> > filesystem will tend to fill small free spaces close to the
> > inode first, and so will fragment the file based on the locality of
> > fragmented free space to the owner inode. In the case of the new
> > filesystem, there is only large, contiguous free space near the
> > inode....
> ...
> >> What can be
> >> done (as a user) to mitigate this effect? 
> > 
> > Buy faster disks ;)
> > 
> > Seriously, all filesystems age and get significantly slower as they
> > get used. XFS is not really designed for single spindles - its
> > algorithms are designed to spread data out over the entire device
> > and so be able to make use of many, many spindles that make up the
> > device. The behaviour it has works extremely well for this sort of
> > large scale scenario, but it's close to the worst case aging
> > behaviour for a single, very slow spindle like you are using.  Hence
> > once the filesystem is over the "we have pristine, contiguous
> > freespace" hump on your hardware, it's all downhill and there's not
> > much you can do about it....
> 
> Wouldn't the inode32 allocator yield somewhat better results with this
> direct IO workload?

What direct IO workload? Oh, you mean the IOZone test? 

What's the point of trying to optimise IOzone throughput? It doesn't
matter to Markus - he's just using it to demonstrate that free space
is not as contiguous as it once was...
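If you want to see that free-space fragmentation directly rather than infer it from IOzone numbers, xfs_db can print a free-space histogram. A minimal sketch - the device name is a placeholder, and -r opens it read-only:

```shell
# Print a summarised histogram of free space extent sizes.
# -r = read-only; replace /dev/sdb1 with your XFS device.
xfs_db -r -c "freesp -s" /dev/sdb1
```

On a pristine filesystem most free space sits in a few very large extents; on an aged one the histogram shifts toward many small extents.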

As it is, inode32 will do nothing to speed up performance on a
single spindle - it spreads all files out across the entire disk, so
locality between the inode and the data is guaranteed to be worse
than on an aged inode64 filesystem. inode32 intentionally spreads data
across the disk without caring about access locality so the average
seek from inode read to data read is half the spindle. That's why
inode64 is so much faster than inode32 on general workloads - the
seek between inode and data is closer to the track-to-track seek
time than the average seek time.
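For reference (not as a tuning recommendation here), the allocator is selected at mount time. A sketch, assuming a kernel of this vintage where both options are recognised; device and mount point are placeholders:

```shell
# inode64: inodes are allocated near their data across the whole device.
mount -o inode64 /dev/sdb1 /mnt/scratch

# Switching back to the 32-bit inode allocator requires a remount.
umount /mnt/scratch
mount -o inode32 /dev/sdb1 /mnt/scratch
```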

> With Markus' single slow spindle?  It shouldn't
> fragment free space quite as badly in the first place, nor suffer from
> trying to use many small fragments surrounding the inode as in the case
> above.

inode32 fragments free space just as badly as inode64, if not worse,
because it is guaranteed to intermingle data of different temporal
stability in the same localities, rather than clustering different
datasets around individual directory inodes...

> Whether or not inode32 would be beneficial to his real workload(s) I
> don't know.  I tend to think it might make at least a small positive
> difference.  However, given that XFS is trying to get away from inode32
> altogether I can see why you wouldn't mention it, even if it might yield
> some improvement in this case.

I didn't mention it because as a baseline for the data Markus is
storing (source trees, doing compilations, etc) inode32 starts off
much slower than inode64 and degrades just as much or more over
time....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



