Vladislav Bolkhovitin wrote:
The encrypted device was created with the following additional options
passed to cryptsetup (they give the best throughput on systems where
the CPU is the bottleneck, at the cost of weaker security than the
default options):

-c aes-ecb-plain -s 128
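For reference, a plain dm-crypt mapping with those options could be set
up roughly like this (the device and mapping names are only placeholders,
not taken from the test setup):

  # map /dev/md0 as a plain dm-crypt device: AES in ECB mode, 128-bit key
  cryptsetup -c aes-ecb-plain -s 128 create crypt /dev/md0
  # the mapped device then appears as /dev/mapper/crypt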
Generally, the CPU on the target was the bottleneck, so I also measured
the load on the target.
md0, crypt columns - throughput averages reported by dd (MB/s)
us, sy, id, wa - CPU averages from vmstat (user, system, idle, I/O wait)
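The vmstat figures were presumably averaged from periodic samples taken
while each run was in progress; a minimal sketch (the interval and run
length are assumptions):

  # sample CPU usage once per second for 60 seconds during the test,
  # then average the us, sy, id and wa columns over the run
  vmstat 1 60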
1. Disk speeds on the target
Raw performance: 102.17 MB/s
Raw performance (encrypted): 50.21 MB/s
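These local numbers were most likely obtained with a sequential dd read
directly on the target; a hedged example (device paths and the read size
are assumptions):

  # raw array read
  dd if=/dev/md0 of=/dev/null bs=1M count=4096
  # read through the dm-crypt mapping
  dd if=/dev/mapper/crypt of=/dev/null bs=1M count=4096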
2. Read-ahead on the initiator: 256 (default); md0, crypt - MB/s

                            md0     us   sy   id   wa  | crypt   us   sy   id   wa
STGT                        50.63   4%  45%  18%  33%  | 32.52   3%  62%  16%  19%
SCST (debug + no patches)   43.75   0%  26%  30%  44%  | 42.05   0%  84%   1%  15%
SCST (fullperf + patches)   45.18   0%  25%  33%  42%  | 44.12   0%  81%   2%  17%
3. Read-ahead on the initiator: 16384; md0, crypt - MB/s

                            md0     us   sy   id   wa  | crypt   us   sy   id   wa
STGT                        56.43   3%  55%   2%  40%  | 46.90   3%  90%   3%   4%
SCST (debug + no patches)   73.85   0%  58%   1%  41%  | 42.70   0%  85%   0%  15%
SCST (fullperf + patches)   76.27   0%  63%   1%  36%  | 42.52   0%  85%   0%  15%
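The initiator-side read-ahead was presumably adjusted with blockdev,
which works in 512-byte sectors; a minimal sketch (the device path is a
placeholder for the imported iSCSI disk):

  # show the current read-ahead in sectors (the default is usually 256)
  blockdev --getra /dev/sdb
  # raise it to 16384 sectors
  blockdev --setra 16384 /dev/sdb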
Good! You proved that:
1. SCST is capable of working much better than STGT: 35% for md and 37%
for crypt, comparing the maximum values.
2. The default read-ahead size isn't appropriate for remote data access
and should be increased. I have been slowly discussing this over the
past few months with Wu Fengguang, the read-ahead maintainer.
Note that crypt performance for SCST was worse than that of STGT with
the large read-ahead value.
Also, SCST performance on the crypt device was more or less the same
with the 256 and 16384 read-ahead values. I wonder why performance
didn't increase here when the read-ahead value was increased? Could
anyone recheck whether it's the same on some other system?
Which I/O scheduler did you use on the target? I guess deadline? If so,
you should try CFQ as well.
I used CFQ.
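For anyone rechecking this, the scheduler can be inspected and switched
at runtime through sysfs (the device name is a placeholder for the
target's backing disk):

  # the scheduler currently in use is shown in brackets
  cat /sys/block/sda/queue/scheduler
  # switch the backing device to CFQ
  echo cfq > /sys/block/sda/queue/scheduler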
--
Tomasz Chmielewski
http://wpkg.org