Re: Performance decrease over time

On Aug 2, 2013, at 3:30 PM, Dave Chinner wrote:

> On Fri, Aug 02, 2013 at 03:14:04AM -0500, Stan Hoeppner wrote:
>> On 8/1/2013 9:25 PM, Dave Chinner wrote:
>> ...
>> 
>>> So really, the numbers only reflect a difference in layout of the
>>> files being tested. And using small direct IO means that the
>>> filesystem will tend to fill small free spaces close to the
>>> inode first, and so will fragment the file based on the locality of
>>> fragmented free space to the owner inode. In the case of the new
>>> filesystem, there is only large, contiguous free space near the
>>> inode....
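
Side note from me: if anyone wants to watch this happening on their own
files, xfs_bmap -v will print the extent list, and the generic FIEMAP
ioctl reports the same extent count from inside a program. Here's a
minimal sketch; it's untested, the name is my own, and it isn't anything
shipped with XFS or posted in this thread:

/* extcount.c - count the extents of a file with the generic FIEMAP
 * ioctl, as a rough "how fragmented is this file" check.
 * Build: cc -o extcount extcount.c    Run: ./extcount /path/to/file
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
	struct fiemap fm;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* With fm_extent_count == 0 the kernel fills in only
	 * fm_mapped_extents, i.e. how many extents the file has. */
	memset(&fm, 0, sizeof(fm));
	fm.fm_start = 0;
	fm.fm_length = ~0ULL;		/* map the whole file */
	fm.fm_flags = FIEMAP_FLAG_SYNC;	/* flush so the count is current */
	fm.fm_extent_count = 0;

	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
		perror("FS_IOC_FIEMAP");
		close(fd);
		return 1;
	}

	printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);
	close(fd);
	return 0;
}

A freshly written file usually comes back as one extent or a handful; a
file written with lots of small direct IO into aged, chopped-up free
space can come back as hundreds.
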
>> ...
>>>> What can be
>>>> done (as a user) to mitigate this effect? 
>>> 
>>> Buy faster disks ;)
>>> 
>>> Seriously, all filesystems age and get significantly slower as they
>>> get used. XFS is not really designed for single spindles - its
>>> algorithms are designed to spread data out over the entire device
>>> and so be able to make use of many, many spindles that make up the
>>> device. The behaviour it has works extremely well for this sort of
>>> large scale scenario, but it's close to the worst case aging
>>> behaviour for a single, very slow spindle like you are using.  Hence
>>> once the filesystem is over the "we have pristine, contiguous
>>> freespace" hump on your hardware, it's all downhill and there's not
>>> much you can do about it....
>> 
>> Wouldn't the inode32 allocator yield somewhat better results with this
>> direct IO workload?
> 
> What direct IO workload? Oh, you mean the IOzone test?
> 
> What's the point of trying to optimise IOzone throughput? It doesn't
> matter to Marcus - he's just using it to demonstrate that free space
> is not as contiguous as it once was...
> 
> As it is, inode32 will do nothing to speed up performance on a
> single spindle - it spreads all files out across the entire disk, so
> locality between the inode and the data is guaranteed to be worse
> than an aged inode64 filesystem. inode32 intentionally spreads data
> across the disk without caring about access locality so the average
> seek from inode read to data read is half the spindle. That's why
> inode64 is so much faster than inode32 on general workloads - the
> seek between inode and data is closer to the track-to-track seek
> time than the average seek time.

Concur 100%.

In fact I've either retired our 32 bit apps or moved them to local storage, since we run almost everything off a NAS type setup backed by XFS.

The benefits of inode64 in our environment were just too great to pass up, especially with the older SATA 2 disks we have in use.

For slower disks, I'd say inode64 is a must.  But I'm talking about several disks in a RAID config, as I don't run XFS on a single disk.
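
For anyone wondering why the 32 bit apps matter at all: with inode64,
inode numbers can end up above 2^32, and a 32 bit binary built without
large file support gets EOVERFLOW back from stat() on those files (the
same thing bites over NFS). Here's a rough sketch of the kind of check
I mean; the program and its name are my own, not anything official:

/* ino64check.c - walk a tree and flag inode numbers that don't fit in
 * 32 bits, i.e. the files a non-LFS 32 bit binary would fail to stat().
 * Build: cc -D_FILE_OFFSET_BITS=64 -o ino64check ino64check.c
 */
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdint.h>
#include <sys/stat.h>
#include <ftw.h>

static unsigned long long total, big;

static int visit(const char *path, const struct stat *sb,
		 int type, struct FTW *ftwbuf)
{
	(void)type; (void)ftwbuf;
	total++;
	if ((uint64_t)sb->st_ino > UINT32_MAX) {
		big++;
		printf("needs 64 bits: ino %llu  %s\n",
		       (unsigned long long)sb->st_ino, path);
	}
	return 0;	/* keep walking */
}

int main(int argc, char **argv)
{
	const char *root = (argc > 1) ? argv[1] : ".";

	if (nftw(root, visit, 64, FTW_PHYS) != 0) {
		perror("nftw");
		return 1;
	}
	printf("%llu of %llu inodes checked need 64 bits\n", big, total);
	return 0;
}

Rebuilding the 32 bit apps with -D_FILE_OFFSET_BITS=64, or keeping them
on local storage like we did, sidesteps it.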

- aurf
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



