Hi Dave,
Thank you for your suggestion; I really appreciate your reply!
2016-04-13 5:31 GMT+08:00 Dave Chinner <david@xxxxxxxxxxxxx>:
On Tue, Apr 12, 2016 at 10:07:45PM +0800, Songbo Wang wrote:
> Hi Dave,
>
> Thank you for your reply. I did some tests today, which are described as
> follows:
>
> I deleted the existing test file and redid the test: fio -ioengine=libaio
> -bs=4k -direct=1 -thread -rw=randwrite -size=50G -filename=/mnt/test
> -name="EBS 4KB randwrite test" -iodepth=64 -runtime=60
> The IOPS result was 19k (per second); I continued running fio on this
> test file until it was filled completely. Then I did another test using
> the same test case, and the result was 210k (per second). (The results mentioned
Yup, that's what happens when the workload goes from being allocation
bound to being an overwrite workload where no allocation occurs.
Perhaps you should preallocate the file using the fallocate=posix
option. This will move the initial overhead to IO completion, so it
won't block submission, and the file will not end up a fragmented
mess as the written areas will merge back into large single extents
as more of the file is written.
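Something like the following (untested, just the command you posted with
fio's fallocate option added) should preallocate the file before the
random writes start:

fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=50G \
    -filename=/mnt/test -name="EBS 4KB randwrite test" -iodepth=64 \
    -runtime=60 -fallocate=posix

Alternatively, you could preallocate the file once up front with xfs_io,
e.g. "xfs_io -f -c 'falloc 0 50g' /mnt/test", and then run your original
fio command against it.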
> yesterday were partial. I used the same test file several times, and the
> results degraded because the test file was not filled completely.)
>
> I tried to remake the filesystem using the following command to increase the
> internal log size, inode size and agcount:
> mkfs.xfs /dev/hioa2 -f -n size=64k -i size=2048,align=1 -d agcount=2045 -l
> size=512m
> but it did not help the result.
Of course it won't. Turning random knobs without knowing what they
do will not solve the problem. Indeed, if your workload is
performance limited because it is running out of log space, then
*reducing the log size* will not solve the issue.
Users tweaking knobs without understanding what they do or how
they affect the application are the leading cause of filesystem
performance and reliability issues on XFS. Just don't do it - all
you'll do is cause something to go wrong when you can least afford
it to happen.
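If you do want to experiment, start from the defaults and look at what
they actually are before changing anything. A minimal sketch (assuming
/dev/hioa2 and /mnt are still the device and mount point from your
tests):

mkfs.xfs -f /dev/hioa2
mount /dev/hioa2 /mnt
xfs_info /mnt

xfs_info prints the geometry mkfs chose (agcount, log size, inode size,
etc.), so you can compare the defaults against the values you tweaked.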