Re: ext4 performance regression 2.6.27-stable versus 2.6.32 and later

On 02.08.2010 22:21, Ted Ts'o wrote:
> On Mon, Aug 02, 2010 at 05:30:03PM +0200, Kay Diederichs wrote:
>> we pared down the benchmark to the last step (called "run xds_par in nfs
>> directory (reads 600M, and writes 50M)") because this captures most of
>> the problem. Here we report kernel messages with stacktrace, and the
>> blktrace output that you requested.
>
> Thanks, I'll take a look at it.
>
> Is NFS required to reproduce the problem?  If you simply copy the 100
> files using rsync, or cp -r while logged onto the server, do you
> notice the performance regression?
>
> Thanks, regards,
>
> 						- Ted

Ted,

We have run the benchmarks directly on the file server itself; it turns out that NFS is not required to reproduce the problem.
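To answer the rsync / cp -r question directly: the copies were done on the server itself, with no NFS mount involved. A minimal local check along those lines (directory names below are only placeholders for our test tree) looks like:

  # copy the ~600M of frames locally on the server, once with rsync, once with cp -r
  cd /mnt/md5
  time rsync -a frames/ frames-rsync-copy/
  time cp -r frames frames-cp-copy

The full benchmark reported below was run the same way, locally on the server.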

We also took the opportunity to try 2.6.32.17, which has just come out. 2.6.32.17 behaves similarly to 2.6.32.16-patched (i.e. with "ext4: Avoid group preallocation for closed files" reverted); 2.6.32.17 carries quite a few ext4 patches, so one or more of them seems to have an effect similar to reverting that commit.
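For completeness, "2.6.32.16-patched" above means 2.6.32.16 with that commit reverted; on a git checkout of the 2.6.32.y stable tree the revert can be redone roughly like this (commit id looked up by subject, not spelled out here):

  # find the commit by its subject line and revert it before building
  git log --oneline --grep='Avoid group preallocation for closed files'
  git revert <commit-id-from-the-previous-command>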

These are the times for the second (and later) benchmark runs; the first run is always slower. The last step ("run xds_par") is slower than in the NFS case because it is CPU-heavy (total CPU time is more than 200 seconds); the NFS client is an 8-core (+HT) Nehalem-type machine, whereas the NFS server is just a 2-core Pentium D @ 3.40GHz.
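The "NN seconds" figures below come from timing each step of the benchmark script; schematically each step is wrapped roughly like this (the real script differs in detail, and the rsync step is just an example):

  # time one step and report it in the same format as the lines below
  t0=$(date +%s)
  rsync -a /mnt/md5/frames/ /mnt/md5/work/frames/
  t1=$(date +%s)
  echo "$((t1 - t0)) seconds to rsync 100 frames"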

Local machine: turn5 2.6.27.48 i686
Raid5: /dev/md5 /mnt/md5 ext4dev rw,noatime,barrier=1,stripe=512,data=writeback 0 0
 32 seconds for preparations
 19 seconds to rsync 100 frames with 597M from raid5,ext4 directory
 17 seconds to rsync 100 frames with 595M to raid5,ext4 directory
 36 seconds to untar 24353 kernel files with 323M to raid5,ext4 directory
 31 seconds to rsync 24353 kernel files with 323M from raid5,ext4 directory
267 seconds to run xds_par in raid5,ext4 directory
427 seconds to run the script

Local machine: turn5 2.6.32.16 i686  (vanilla, i.e. not patched)
Raid5: /dev/md5 /mnt/md5 ext4 rw,seclabel,noatime,barrier=0,stripe=512,data=writeback 0 0
 36 seconds for preparations
 18 seconds to rsync 100 frames with 597M from raid5,ext4 directory
 33 seconds to rsync 100 frames with 595M to raid5,ext4 directory
 68 seconds to untar 24353 kernel files with 323M to raid5,ext4 directory
 40 seconds to rsync 24353 kernel files with 323M from raid5,ext4 directory
489 seconds to run xds_par in raid5,ext4 directory
714 seconds to run the script

Local machine: turn5 2.6.32.17 i686
Raid5: /dev/md5 /mnt/md5 ext4 rw,seclabel,noatime,barrier=0,stripe=512,data=writeback 0 0
 38 seconds for preparations
 18 seconds to rsync 100 frames with 597M from raid5,ext4 directory
 33 seconds to rsync 100 frames with 595M to raid5,ext4 directory
 67 seconds to untar 24353 kernel files with 323M to raid5,ext4 directory
 41 seconds to rsync 24353 kernel files with 323M from raid5,ext4 directory
266 seconds to run xds_par in raid5,ext4 directory
492 seconds to run the script
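For reference, the Raid5 mount lines in the three blocks above correspond to a mount command roughly like the following (options as shown for the 2.6.32 runs; the 2.6.27.48 run used ext4dev and barrier=1 instead, and seclabel in those lines comes from SELinux rather than from an explicit mount option):

  # mount the md5 RAID5 array with the options used in the 2.6.32 benchmarks
  mount -t ext4 -o noatime,barrier=0,stripe=512,data=writeback /dev/md5 /mnt/md5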

So even though the patches that went into 2.6.32.17 seem to fix the worst stalls, untarring and rsyncing the kernel files is still significantly slower on 2.6.32.17 than on 2.6.27.48.

HTH,

Kay


