Re: cmirror performance

Clean-up of the previously posted script (plus colorized diff output for easier reading).

 brassow

Attachment: perf_matrix.pl
Description: Binary data



On Mar 9, 2007, at 11:49 AM, Jonathan E Brassow wrote:

Nope, the first version is just slow. The next version, which should be faster, is coming with RHEL5.X (and should be going upstream).

I just wrote up a Perl script (which I haven't had a chance to really clean up yet) that will give performance numbers for various request/transfer sizes. I'm including it at the end.

You must have the lmbench package installed (for 'lmdd').  Then run:
# to give you read performance numbers
'perf_matrix.pl if=<block device>'

# to give you write performance numbers
'perf_matrix.pl of=<block device>'

# to do multiple runs and discard numbers outside the standard deviation;
# the more iterations you do, the more accurate your results
'perf_matrix.pl if=<block device> iter=5'

For more information on the options, do 'perf_matrix.pl -h'.
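The script itself isn't reproduced here, so exactly how iter filters the runs isn't shown; a minimal sketch of one plausible approach (discard any run more than one standard deviation from the mean, then average the rest) could look like:

```python
import statistics

def filtered_average(runs):
    """Average a list of throughput samples (MiB/s), discarding any
    sample that falls more than one standard deviation from the mean.
    This is an illustrative guess at perf_matrix.pl's iter behavior,
    not the script's actual code."""
    if len(runs) < 2:
        return runs[0] if runs else 0.0
    mean = statistics.mean(runs)
    dev = statistics.stdev(runs)
    # Keep in-range samples; fall back to all runs if everything is discarded.
    kept = [r for r in runs if abs(r - mean) <= dev] or runs
    return sum(kept) / len(kept)

# Five iterations for one cell of the matrix; 12.1 is an outlier run
# (e.g. the cache was cold) and gets dropped before averaging.
print(round(filtered_average([35.5, 35.4, 12.1, 35.6, 35.5]), 2))  # 35.5
```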

Using the above, you can compare the numbers you're getting from the base device, linear target, mirror target, etc over a wide range of transfer/request sizes.

Let's take a look at a couple of examples. (Request sizes increase to the right by powers of two, starting at 1kiB. Transfer sizes increase by rows by powers of two, starting at 1MiB. Results are in MiB/sec.)

prompt> perf_matrix.pl if=/dev/vg/linear iter=5   # linear reads, 5 iterations w/ results averaged
25.24 28.16 28.82 28.93 28.96 29.25 28.72 26.54 27.39 27.84 28.94  0.00  0.00
30.48 31.57 31.66 31.32 31.89 32.19 31.66 32.00 33.98 34.23 31.93 33.30  0.00
34.00 33.46 33.39 33.12 33.50 34.32 33.57 33.78 34.81 35.03 33.68 34.25 34.68
34.82 34.33 34.32 34.20 34.49 34.89 35.20 35.24 35.39 35.33 35.56 34.94 35.18
35.50 35.37 35.53 35.37 35.54 35.53 35.41 35.60 35.38 35.53 35.54 35.45 35.33
35.72 35.76 35.82 35.81 35.81 35.80 35.81 35.82 35.81 35.84 35.66 35.78 35.76
35.96 35.97 35.87 35.91 35.98 35.99 35.97 35.97 35.98 35.99 35.90 35.96 35.95
36.05 36.05 36.05 36.03 36.03 36.03 36.06 36.08 36.06 36.07 36.07 36.06 36.06
36.10 36.08 36.08 36.08 36.08 36.10 36.08 36.09 36.10 36.11 36.09 36.10 36.11
36.11 36.11 36.11 36.11 36.11 36.11 36.11 36.12 36.12 36.12 36.12 36.12 36.12
36.13 36.12 36.12 36.12 36.12 36.12 36.13 36.12 36.12 36.13 36.13 36.13 36.13

prompt> perf_matrix.pl of=/dev/vg/linear iter=5   # linear writes, 5 iterations w/ results averaged
11.74  9.00 31.77 31.82 31.78 31.84 31.93 32.03 32.37 32.98 34.52  0.00  0.00
 9.14  9.65 33.57 33.65 33.64 33.65 33.70 33.79 33.99 34.33 35.12 33.36  0.00
 9.63  9.70 33.03 33.01 34.65 34.65 34.67 33.09 33.16 33.35 33.70 32.88 34.42
 9.60  9.66 33.30 32.35 33.47 33.49 33.49 32.73 33.36 33.65 33.84 33.41 33.37
 9.68  9.74 33.31 33.36 32.90 32.94 32.94 33.21 33.08 32.99 33.16 33.33 32.59
 9.66  9.74 32.88 33.14 33.47 33.38 33.20 33.60 33.18 33.35 33.15 33.10 33.22
 9.68  9.73 32.66 32.73 33.30 33.39 33.22 33.18 33.23 32.97 33.01 33.10 33.13
 9.69  9.74 33.06 33.28 33.37 33.45 33.32 33.53 33.27 33.34 33.16 33.05 33.08
 9.59  9.66 31.88 32.34 32.14 32.41 33.21 32.49 32.41 32.47 32.39 32.69 32.05
 9.47  9.58 32.87 32.79 32.80 32.84 33.09 32.96 32.99 32.95 32.65 32.59 32.83
 9.45  9.52 33.35 33.10 33.17 33.12 33.05 33.12 33.97 33.14 32.72 33.07 33.24

# if I redirect the above output to files, I can then diff them
prompt> perf_matrix.pl diff clinear-read.txt clinear-write.txt
-53.49% -68.04%  10.24%   9.99%   9.74%   8.85%  11.18%  20.69%  18.18%  18.46%  19.28%  -.--%   -.--%
-70.01% -69.43%   6.03%   7.44%   5.49%   4.54%   6.44%   5.59%   0.03%   0.29%   9.99%   0.18%  -.--%
-71.68% -71.01%  -1.08%  -0.33%   3.43%   0.96%   3.28%  -2.04%  -4.74%  -4.80%   0.06%  -4.00%  -0.75%
-72.43% -71.86%  -2.97%  -5.41%  -2.96%  -4.01%  -4.86%  -7.12%  -5.74%  -4.76%  -4.84%  -4.38%  -5.14%
-72.73% -72.46%  -6.25%  -5.68%  -7.43%  -7.29%  -6.98%  -6.71%  -6.50%  -7.15%  -6.70%  -5.98%  -7.76%
-72.96% -72.76%  -8.21%  -7.46%  -6.53%  -6.76%  -7.29%  -6.20%  -7.34%  -6.95%  -7.04%  -7.49%  -7.10%
-73.08% -72.95%  -8.95%  -8.86%  -7.45%  -7.22%  -7.65%  -7.76%  -7.64%  -8.39%  -8.05%  -7.95%  -7.84%
-73.12% -72.98%  -8.29%  -7.63%  -7.38%  -7.16%  -7.60%  -7.07%  -7.74%  -7.57%  -8.07%  -8.35%  -8.26%
-73.43% -73.23% -11.64% -10.37% -10.92% -10.22%  -7.95%  -9.98% -10.22% -10.08% -10.25%  -9.45% -11.24%
-73.77% -73.47%  -8.97%  -9.19%  -9.17%  -9.06%  -8.36%  -8.75%  -8.67%  -8.78%  -9.61%  -9.77%  -9.11%
-73.84% -73.64%  -7.67%  -8.36%  -8.17%  -8.31%  -8.52%  -8.31%  -5.95%  -8.28%  -9.44%  -8.47%  -8.00%
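The diff numbers are consistent with each cell being the percent change from the first file's cell to the second's, i.e. 100 x (write - read) / read, with '-.--%' where the baseline cell is 0.00 (check the first cell: 100 x (11.74 - 25.24) / 25.24 = -53.49%). A small stand-alone sketch of that per-cell computation (the actual script may format things differently):

```python
def percent_diff(a, b):
    """Percent change from baseline cell a to cell b, formatted the way
    perf_matrix.pl's diff output appears to do it; '-.--%' marks cells
    where the baseline is zero and no comparison is possible."""
    if a == 0.0:
        return "-.--%"
    return "%.2f%%" % (100.0 * (b - a) / a)

# First cells of the read (baseline) and write matrices above:
print(percent_diff(25.24, 11.74))  # -53.49%
print(percent_diff(28.82, 31.77))  # 10.24%
print(percent_diff(0.00, 34.52))   # -.--%
```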

I can see that writes for a linear device are much worse when request sizes are small, but get reasonably close when request sizes are >= 4kiB.

I haven't had a chance to do this with (cluster) mirrors yet. It would be interesting to see the difference in performance from linear -> mirror and mirror -> cmirror...

Once things are truly stable, I will concentrate more on performance. (Also note: while a mirror is syncing itself, performance for nominal operations will be degraded.)

 brassow

<perf_matrix.pl>


On Mar 8, 2007, at 12:13 PM, Robert Clark wrote:

I've been trying out cmirror for a few months on a RHEL4U4 cluster and
it's now working very well for me, although I've noticed that it does
have a bit of a performance hit.

  My set-up has a 32G GFS filesystem on a mirrored LV shared via AoE
(with jumbo frame support). Just using dd with a 4k blocksize to write
files on the same LV when it's mirrored and then unmirrored shows a big
difference in speed:

    Unmirrored: 12440kB/s
    Mirrored:    2969kB/s

which I wasn't expecting as my understanding is that the cmirror design
introduces very little overhead.

The two legs of the mirror are on separate, identical AoE servers and
the filesystem is mounted on 3 out of 6 nodes in the cluster. This is
with the cmirror-kernel_2_6_9_19 tagged version and I've tried with both
core and disk logs.

  I suspect a bad interaction between cmirror and something else, but
I'm not sure where to start looking. Any ideas?

	Thanks,

		Robert

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

