Strange performance problems

Hello,

I have the following setup:

CPU: Intel(R) Core(TM)2 Quad CPU    Q9400  @ 2.66GHz
Storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)

External storage: Lian Li EX-503, connected via eSATA to the SiI 3132 controller.

HDDs: 5x WD 10 EARX 2TB

I have set the Lian Li EX-503 to port multiplier mode, because I want to build an md RAID instead of using the Lian Li EX-503's own RAID. This is checked below.
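
(Whether the kernel actually drives the enclosure as a port multiplier shows up in the libata boot messages; something along these lines, the exact grep pattern is only a guess:)

# libata should log the PMP and the five SATA links behind it
dmesg | grep -iE 'pmp|sata link'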

So I set up a RAID 5 across the five HDDs with the following command:

mdadm -C -v /dev/md0 -n 5 -l raid5 -b internal -N data /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
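
(Sync progress and the resulting array parameters can be checked the usual way:)

cat /proc/mdstat            # resync progress of md0
mdadm --detail /dev/md0     # level, chunk size, bitmap, member devices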


After the initial sync had finished, I ran some performance tests with dd.
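
In parallel I watched the devices with iostat; extended statistics along the lines of the following (the exact flags and interval are an assumption, not the literal invocation):

iostat -xk 3    # extended per-device stats in kB/s at a 3-second interval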

* dd if=/dev/md0 of=/dev/null bs=64k

271401+0 records in
271400+0 records out
17786470400 bytes (18 GB) copied, 123.919 s, 144 MB/s


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.08    0.00    4.83   19.65    0.00   75.44

Device:         rrqm/s   wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sde            6428.33     0.00    57.67    0.00  26282.67     0.00   911.54     1.41   24.45   24.45    0.00  13.24  76.33
sdf            6428.33     0.00    57.67    0.00  26197.33     0.00   908.58     1.94   33.58   33.58    0.00  15.90  91.67
sdg            6428.33     0.00    57.33    0.00  26112.00     0.00   910.88     2.31   40.17   40.17    0.00  17.03  97.67
sdh            6428.33     0.00    57.33    0.00  26026.67     0.00   907.91     1.73   29.88   29.88    0.00  15.41  88.33
sdd            6428.33     0.00    57.00    0.00  25941.33     0.00   910.22     2.96   51.46   51.46    0.00  17.54 100.00
md0               0.00     0.00 32426.67    0.00 129706.67     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00

That works out to ~129 MB/s read performance.

* dd if=/dev/zero of=/dev/md0 bs=64k

70665+0 records in
70665+0 records out
4631101440 bytes (4.6 GB) copied, 36.7757 s, 126 MB/s


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    3.00   23.92    0.00   73.08

Device:         rrqm/s   wrqm/s      r/s      w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sde               0.00  5756.00     0.67    47.00     2.67 23042.67   966.94     1.59   33.15   20.00   33.33  20.56  98.00
sdf               0.00  5756.00     0.67    47.00     2.67 23042.67   966.94     1.67   34.83   20.00   35.04  20.56  98.00
sdg               0.00  5756.00     0.00    47.00     0.00 23042.67   980.54     1.67   35.39    0.00   35.39  20.78  97.67
sdh               0.00  5756.00     0.67    47.00     2.67 23042.67   966.94     1.70   35.59   30.00   35.67  20.70  98.67
sdd               0.00  5756.00     0.67    47.00     2.67 23042.67   966.94     0.91   18.95   20.00   18.94  18.04  86.00
md0               0.00     0.00     0.00 23040.00     0.00 92160.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00

That works out to ~92 MB/s write performance.

OK, these tests were on the unencrypted RAID.


Then I created a dm-crypt container (plain mode, not LUKS) with:

cryptsetup --cipher aes-xts-plain64 --key-file /dev/random create crypt_data2 /dev/md0
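
(The resulting mapping can be double-checked with the usual tools:)

cryptsetup status crypt_data2    # cipher, key size, underlying device, offset
dmsetup table crypt_data2        # raw device-mapper crypt table line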

Now, when I repeat the above performance tests, read performance is OK, but write performance is, in my opinion, really bad.


* dd if=/dev/mapper/crypt_data2 of=/dev/null bs=64k

152114+0 records in
152113+0 records out
9968877568 bytes (10 GB) copied, 77.203 s, 129 MB/s


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   27.03   16.74    0.00   56.23

Device:         rrqm/s   wrqm/s      r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sde            6428.00     0.00    58.00    0.00  26197.33     0.00   903.36     1.45   25.11   25.11    0.00  13.39  77.67
sdf            6428.33     0.00    57.00    0.00  25941.33     0.00   910.22     1.97   34.50   34.50    0.00  16.32  93.00
sdg            6428.33     0.00    57.33    0.00  26026.67     0.00   907.91     2.40   41.86   41.86    0.00  17.15  98.33
sdh            6428.33     0.00    57.33    0.00  26112.00     0.00   910.88     1.70   29.59   29.59    0.00  14.88  85.33
sdd            6428.33     0.00    57.00    0.00  25941.33     0.00   910.22     2.41   42.16   42.16    0.00  17.13  97.67
md0               0.00     0.00 32426.67    0.00 129706.67     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-16             0.00     0.00 32426.67    0.00 129706.67     0.00     8.00  1250.96   38.54   38.54    0.00   0.03 100.00

That works out to ~129 MB/s read performance on the encrypted RAID 5, but with 100% utilization on the dm-16 block device.

* dd if=/dev/zero of=/dev/mapper/crypt_data2 bs=64k

40370+0 records in
40370+0 records out
2645688320 bytes (2.6 GB) copied, 426.961 s, 6.2 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.08    0.00    2.01   24.75    0.00   73.16

Device:         rrqm/s   wrqm/s      r/s      w/s    rkB/s    wkB/s avgrq-sz  avgqu-sz   await r_await w_await  svctm  %util
sde             224.67   222.33     9.00     6.00   934.67   913.33   246.40      0.83   55.11   44.07   71.67  23.33  35.00
sdf             509.33   516.00    21.00    13.67  2101.33  1926.67   232.38      1.49   37.88   29.52   50.73  12.79  44.33
sdg             464.67   474.67    20.67    17.33  1961.33  2186.67   218.32      1.10   35.53   23.55   49.81   9.65  36.67
sdh             180.67   182.33     7.33     5.67   752.00   752.00   231.38      0.64   49.49   40.45   61.18  18.72  24.33
sdd             164.00   166.33     6.67     4.33   682.67   682.67   248.24      0.75   68.48   64.00   75.38  22.42  24.67
md0               0.00     0.00     0.00   804.00     0.00  3216.00     8.00      0.00    0.00    0.00    0.00   0.00   0.00
dm-16             0.00     0.00     0.00  7551.33     0.00 30205.33     8.00 281284.91 7252.80    0.00 7252.80   0.13 100.00

That works out to ~5 MB/s write performance on the encrypted RAID 5.

What I find strange is the huge avgqu-sz on dm-16.


I have no clue what could be causing this poor write performance.

Thanks for any hints on how to resolve this issue.


Tomorrow morning I will run some performance tests with different ciphers, roughly along the lines sketched below.
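
(The cipher below is just an example of what I plan to try, not a result:)

# tear down the plain mapping and recreate it with another cipher
cryptsetup remove crypt_data2
cryptsetup --cipher aes-cbc-essiv:sha256 --key-file /dev/random create crypt_data2 /dev/md0
dd if=/dev/zero of=/dev/mapper/crypt_data2 bs=64k count=100000

# plus one run with direct I/O, to take the page cache out of the picture
dd if=/dev/zero of=/dev/mapper/crypt_data2 bs=1M count=4096 oflag=direct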


Kind regards,

morlix
