Re: Consistent throughput challenge -- fragmentation?

On Mon, Feb 25, 2013 at 3:39 PM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
On 2/25/2013 10:01 AM, Brian Cain wrote:
> All,
>
> I have been observing some odd behavior regarding write throughput to an
> XFS partition (the baseline kernel version is 2.6.32.27).  I see
> consistently high write throughput (close to the performance of the raw
> block device) to the filesystem immediately after a mkfs, but after a few
> test cycles, there is sporadic poor performance.
>
> The test mechanism is like so:
>
> [mkfs.xfs <blockdev>] (no flags/options, xfsprogs ver 3.1.1-0.1.36)
> ...
> 1. remove a previous test cycle's directory
> 2. create a new directory
> 3. open/write/close a small file (4kb) in this directory
> 4. open/read/close this same small file (by the local NFS server)
> 5. open[O_DIRECT]/write/write/write/.../close a large file (anywhere from
> ~100MB to 200GB)
>

 
It looks like omitting step #2 and putting all of the files in the root directory (and modifying step #1 to remove only the files associated with the previous test cycle) also avoids the poor-performance problem. At least, that's been the case in testing so far.
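
For anyone trying to reproduce this, here's a rough sketch of what one cycle looks like with that change, in C. The paths, file sizes, and chunk size are placeholders, not the actual harness:

/* Minimal sketch of the modified test cycle: all files live in the
 * filesystem root, names and sizes are made up.
 * Build with: gcc -o cycle cycle.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK   (1 << 20)   /* 1 MiB aligned chunks for O_DIRECT writes */
#define NBLOCKS 100         /* ~100 MiB large file for this example     */

int main(void)
{
    const char *small = "/mnt/xfs/small.dat";   /* hypothetical paths */
    const char *large = "/mnt/xfs/large.dat";
    char page[4096];
    void *buf;
    int fd, i;

    /* 1. remove the previous cycle's files (no per-cycle directory) */
    unlink(small);
    unlink(large);

    /* 3. open/write/close a small 4 KiB file */
    fd = open(small, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open small"); return 1; }
    memset(page, 'a', sizeof(page));
    if (write(fd, page, sizeof(page)) != (ssize_t)sizeof(page))
        perror("write small");
    close(fd);

    /* 4. open/read/close it again (stand-in for the NFS server's read) */
    fd = open(small, O_RDONLY);
    if (fd < 0) { perror("reopen small"); return 1; }
    if (read(fd, page, sizeof(page)) < 0)
        perror("read small");
    close(fd);

    /* 5. large file written with O_DIRECT; buffer address and I/O size
     *    must be aligned for direct I/O to succeed */
    if (posix_memalign(&buf, 4096, BLOCK) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 'b', BLOCK);
    fd = open(large, O_CREAT | O_WRONLY | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0) { perror("open large"); free(buf); return 1; }
    for (i = 0; i < NBLOCKS; i++)
        if (write(fd, buf, BLOCK) != BLOCK) { perror("write large"); break; }
    close(fd);
    free(buf);
    return 0;
}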

--
-Brian
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
