Re: XFS: performance

> If only I had a dollar for every time I forgot to CC the mailing list from within Gmail:


On Mon, Nov 29, 2010 at 1:59 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Mon, Nov 29, 2010 at 01:21:11AM +0000, Yclept Nemo wrote:
>> On Mon, Nov 29, 2010 at 12:11 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> > On Sun, Nov 28, 2010 at 10:51:04PM +0000, Yclept Nemo wrote:
>> >> After 3-4 years of using one XFS partition for every mount point
>> >> (/,/usr,/etc,/home,/tmp...) I started noticing a rapid performance
>> >> degradation. Subjectively I now feel my XFS partition is 5-10x slower
>> >> ... while other partitions (ntfs,ext3) remain the same.
>> >
>> > Can you run some benchmarks to show this non-subjectively? Aged
>> > filesystems will be slower than new filesystems, and it should be
>> > measurable. Also, knowing what your filesystem contains (number of
>> > files, used capacity, whether you have run it near ENOSPC for
>> > extended periods of time, etc) would help us understand the way the
>> > filesystem has aged as well.
>>
>> Certainly, if you are interested I can run either dbench or bonnie++
>> tests comparing an XFS partition (with default values from xfsprogs
>> 3.1.3) on the new hard-drive to the existing partition on the old. As
>> I'm not sure what you're looking for, what command parameters should I
>> profile against?
>>
>> The XFS partition in question is 39.61GB in size, of which 30.71GB are
>> in use (8.90GB free). It contains a typical Arch Linux installation
>> with many programs and many personal files. Usage pattern as follows:
>> . equal runtime split between (near ENOSPC) and (approximately 10.0GB free)
>
> There's your problem - it's a well known fact that running XFS at
> more than 85-90% capacity for extended periods of time causes free
> space fragmentation and that results in performance degradation.
>
>> . mostly small files, one or two exceptions
>> . often reach ENOSPC through carelessness
>> . run xfs_fsr very often
>
> And xfs_fsr is also known to cause free space fragmentation when run
> on filesystems with not much space available...

Phew... I'm relieved to learn that my performance degradation should be
alleviated by this hard-drive upgrade. It also means I need no longer be
so obsessive-compulsive when tweaking the second incarnation of my XFS
file-system.
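
(Out of curiosity, before retiring the old partition I may check how
fragmented its free space really is. If I'm reading the xfs_db man page
correctly, something like the following should print a summary of the
free-space extent sizes; the device name is just a stand-in for my
actual partition:

    # open the old filesystem read-only and summarize free-space extents
    xfs_db -r -c "freesp -s" /dev/sdXN

A histogram dominated by tiny extents would presumably confirm the
free-space fragmentation you describe.)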

>> >> Similarly a larger agcount should always give better performance,
>> >> right?
>> >
>> > No.
>> >
>> >> Some resources claim that agcount should never fall below
>> >> eight.
>> >
>> > If those resources are right, then why would we default to 4 AGs for
>> > filesystems on single spindles?
>>
>> Obviously you are against modifying the agcount - I won't touch it :)
>
> No, what I'm pointing out is that <some random web reference> is not
> a good guide for tuning an XFS filesystem. You need to _understand_
> what changing that knob does before you change it. If you don't
> understand what it does, then don't change it...

As I understand it, an XFS filesystem is divided into several allocation
groups, each containing its own superblock as well as private inode and
free-space btrees, so increasing the AG count increases allocation
parallelism. I simply assumed the workload was CPU-bound, not
disk-bound. By mentioning spindles, though, you made me realize it
makes sense to limit the amount of parallel access to a single hard
drive; I've often noticed XFS slow to a crawl when I run several
IO-intensive operations simultaneously.
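
(Assuming the AG count is indeed fixed at mkfs time, my plan on the new
drive would be to set it explicitly and verify the result afterwards.
A rough sketch, with /dev/sdb1 standing in for whatever the new
partition turns out to be:

    # show the geometry (including agcount) of the existing, mounted filesystem
    xfs_info /

    # create the new filesystem with an explicit allocation group count
    mkfs.xfs -d agcount=4 /dev/sdb1

Please correct me if the xfsprogs 3.1.3 defaults already do the right
thing here.)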

>> Not actually sure what I intended. My knowledge of file-systems
>> depends on Google and that statement was only a shot in the dark.
>> However, you've convinced me not to change the blocksize (keep in mind
>> I'm running an entire Linux installation from this one XFS partition,
>> small files included).
>
> Sure, I do that too. My workstation has a 220GB root partition that
> contains all my kernel trees, build areas, etc. It has agcount=16
> because I'm running on a 8c machine and do 8-way parallel builds, a
> log of 105MB and I'm using delaylog....

You mention an eight-core machine (8c?). Since I operate a dual-core
system, would it make sense to increase my AG count slightly, to five
or six?

Hm... I was going to wait until 2.6.39, but I think I'll enable delayed
logging right now!
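
(If I understand correctly, on 2.6.35 and later delayed logging is only
a mount option rather than an on-disk format change, so something like
the following should do it; I'm not certain a remount picks the option
up, in which case a clean unmount and mount would be needed:

    # enable delayed logging on an XFS filesystem (optional since 2.6.35,
    # planned as the default in 2.6.39)
    mount -o remount,delaylog /

    # or persistently, via /etc/fstab:
    # /dev/sda2  /  xfs  defaults,delaylog  0  1

Please shout if I have the option name wrong.)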

>> If the blocksize option is so
>> performance-independent, why does it even exist?
>
> Because there are situations where it makes sense to change the
> block size. That isn't really a general use root filesystem,
> though...

Now, on to transferring the data to the new XFS partition/hard-drive.
I was originally going to use "rsync -avxAHX" until I stumbled across
this list's thread, "ENOSPC at 90% with plenty of inodes", which
mentioned xfsdump and xfsrestore. I now have three questions (my rough
planned sequence is sketched below, after them):

. since xfsdump and xfsrestore access the base file-system structure,
will these tools be able to copy everything, i.e.:
 - files
 - special files (sockets/fifos)
 - permissions
 - attributes
 - acls (it is in the man page, but I list it here for completion)
 - symlinks
 - hard links
 - extended attributes
 - character/block devices
 - modification times
 - etc... everything: anything rsync could copy and more

. since xfsdump and xfsrestore access the base file-system structure,
will they be able to:
 - update creation-time XFS parameters (from the original
file-system) to adapt to the new XFS file-system in order to benefit
from performance and capability improvements. For example:
  . adapt the log section from version 1 to version 2
  . modify the agcount
  . update the metadata attributes to version 2
  . enable lazy-count
  . etc
 - reduce fragmentation and free-space fragmentation upon
xfsrestore (or does xfsrestore simply copy the old XFS structure
bit-for-bit into the new file-system?)

. if not, are there any reasons to nonetheless prefer
xfsdump/xfsrestore over rsync?
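
For reference, here is the rough sequence I mentioned above, on the
assumption that any new creation-time parameters (log version,
lazy-count, agcount, ...) have to come from mkfs.xfs on the new
partition rather than from xfsrestore itself; /dev/sdb1 and /mnt/new
are stand-ins for the real device and mount point:

    # create the new filesystem with the desired parameters up front
    mkfs.xfs -l version=2,lazy-count=1 /dev/sdb1
    mount /dev/sdb1 /mnt/new

    # level-0 dump of the old root filesystem, piped straight into a
    # restore; -L and -M just provide labels to avoid interactive prompts
    xfsdump -l 0 -L rootdump -M drive0 - / | xfsrestore - /mnt/new

If the answer turns out to be "just use rsync", the fallback would be
the "rsync -avxAHX / /mnt/new/" invocation mentioned earlier.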

If any of this is already mentioned in the xfsdump/xfsrestore man pages,
I apologize; I simply don't want to wait until tomorrow to begin the
backup/restore process.

Sincerely,
orbisvicis

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


