Performance degradation over time

Hi,

We are running an XFS filesystem on one of our machines, which is a big
store (~3 TB) of assorted data files (mostly images). Quite recently we
experienced some performance problems: the machine wasn't able to keep
up with updates. After some investigation it turned out that open()
syscalls (opening for writing) were taking significantly more time than
they should, e.g. 15-20 ms vs. 100-150 us.
Some more information about our workload, as I think it's important
here: the XFS filesystem is used exclusively as a data store, so we
only read and write our data (we mostly write). When a new update comes
in, it's written to a temporary file, e.g.

/mountpoint/some/path/.tmp/file

When the file is completely stored, we move it to its final location, e.g.

/mountpoint/some/path/different/subdir/newname

That means we create lots of files in /mountpoint/some/path/.tmp, but
the directory stays empty, because each file is moved (via the rename()
syscall) shortly after creation to a different directory on the same
filesystem.
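The staging pattern described above can be sketched as follows (a minimal Python illustration, not our actual code; the function name and paths are hypothetical):

```python
import os
import tempfile

def store_file(tmpdir, final_path, data):
    # Stage the update in the temporary directory first
    # (mirrors writing to /mountpoint/some/path/.tmp/file).
    fd, tmppath = tempfile.mkstemp(dir=tmpdir)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    # rename() is atomic within one filesystem, so readers never
    # observe a half-written file at the final location.
    os.rename(tmppath, final_path)
```

Because the temporary file and the final location are on the same filesystem, the rename is just a directory-entry move, with no data copy.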
The workaround I have found so far is to remove that directory
(/mountpoint/some/path/.tmp in our case) together with its contents and
re-create it. After this operation, open() times go back down to
100-150 us.
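For reference, the workaround amounts to the following (a trivial sketch; the path is the one from our setup and the function name is made up):

```python
import os
import shutil

def reset_staging_dir(path="/mountpoint/some/path/.tmp"):
    # Remove the staging directory and everything in it, then
    # re-create it empty, discarding whatever on-disk directory
    # state has accumulated from many create+rename cycles.
    shutil.rmtree(path)
    os.mkdir(path)
```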
Is this a known problem?
Information regarding our system:
CentOS 5.8 / kernel 2.6.18-308.el5 / kmod-xfs-0.4-2
Let me know if you need to know anything more.
Cheers,

Marcin

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
