Re: raid6 + caviar black + mpt2sas horrific performance

On 03/30/2011 11:20 AM, Louis-David Mitterrand wrote:
On Wed, Mar 30, 2011 at 09:46:29AM -0400, Joe Landman wrote:

[...]

Try a similar test on your two units, without the "v" option.  Then

- T610:

	tar -xjf linux-2.6.37.tar.bz2  24.09s user 4.36s system 2% cpu 20:30.95 total

- PE2900:

	tar -xjf linux-2.6.37.tar.bz2  17.81s user 3.37s system 64% cpu 33.062 total

Still a huge difference.

The wallclock is where the huge difference shows up; the user and system times are quite similar.
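As a rough sanity check from the numbers you posted, the cpu column is just (user + system) / wallclock:

	T610:   (24.09 + 4.36) / 1230.95 s  ~  2%   (20:30.95 = 1230.95 s)
	PE2900: (17.81 + 3.37) /   33.06 s  ~ 64%

So both boxes do about the same ~28 s / ~21 s of actual CPU work; the T610 just spends over 20 minutes waiting, almost certainly on IO.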

[...]

- T610:

/dev/mapper/cmd1 on / type xfs (rw,inode64,delaylog,logbsize=262144)

- PE2900:

/dev/mapper/cmd1 on / type xfs (rw,inode64,delaylog,logbsize=262144)

Hmmm. You are layering an LVM atop the raid? Your raids are /dev/md1. How is /dev/mapper/cmd1 related to /dev/md1?
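If cmd1 is an LVM logical volume, something along these lines should show how it maps onto md1 (a sketch; assumes the stock LVM tools and a util-linux recent enough to have lsblk):

	# list the block device stack sitting on the raid
	lsblk /dev/md1
	# show the device-mapper table behind /dev/mapper/cmd1
	dmsetup table cmd1
	# show which physical volumes back each logical volume
	pvs
	lvs -o lv_name,vg_name,devices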

[...]

[root@vault t]# dd if=/dev/md2 of=/dev/null bs=32k count=32000

- T610:

32000+0 records in
32000+0 records out
1048576000 bytes (1.0 GB) copied, 1.70421 s, 615 MB/s

- PE2900:

32000+0 records in
32000+0 records out
1048576000 bytes (1.0 GB) copied, 2.02322 s, 518 MB/s

Raw reads from the MD device.  For completeness, you should also do

	dd if=/dev/mapper/cmd1 of=/dev/null bs=32k count=32000

and

	dd if=/backup/t/big.file  of=/dev/null bs=32k count=32000

to see if there is a sudden loss of performance at some level.

[root@vault t]# dd if=/dev/zero of=/backup/t/big.file bs=32k count=32000

- T610:

32000+0 records in
32000+0 records out
1048576000 bytes (1.0 GB) copied, 0.870001 s, 1.2 GB/s

- PE2900:

32000+0 records in
32000+0 records out
1048576000 bytes (1.0 GB) copied, 9.11934 s, 115 MB/s

Ahhh ... look at that. Cached write performance is very different between the two, by an order of magnitude. You could also try a direct (noncached) write by adding oflag=direct at the end of the line. This could be useful, though direct IO isn't terribly fast on MD raids.
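That is, the same command with oflag=direct appended (assuming the same file and sizes as above):

	dd if=/dev/zero of=/backup/t/big.file bs=32k count=32000 oflag=direct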

If we can get the other dd runs indicated above, we might have a better sense of which layer is causing the issue. It might not be MD.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@xxxxxxxxxxxxxxxxxxxxxxx
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

