Re: Seeking advice on Loop-AES performance with RAID5/RAID6

Thanks to Marc Bevand for pointing out that my basic RAID5 performance was deficient to start with. On investigation I found 2 further problems. It seems that 3 of the 6 drives had factory jumpers that limited them to the SATA1 speed of 1.5 Gbps. This was unexpected since I bought 6 identical boxes off the shelf. Half the boxes had 7200.9 drives and the other half had 7200.10 drives, but all were marked as containing 7200.9 drives. It looks like Seagate is switching production over to 7200.10 and neglecting to change the box stickers.

The other problem, which just killed the RAID5 performance, was that I was creating the MD device with partitions, in an attempt to be fancy and try encryption on the root partition. These partitions were NOT aligned with the RAID stripe boundaries, and performance became very poor as a result. So I went back to a single monolithic raid device.
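To make the alignment issue concrete, here is a minimal Python sketch; the start sectors and geometry below are made-up examples rather than my actual partition table, but they show how a partition start that is not a multiple of the full data stripe turns every "aligned" filesystem write into a partial-stripe write:

def aligned_to_stripe(start_sector, chunk_kib, data_disks, sector_bytes=512):
    # One full data stripe = chunk size times the number of data-bearing disks.
    stripe_bytes = chunk_kib * 1024 * data_disks
    return (start_sector * sector_bytes) % stripe_bytes == 0

# A partition starting at the classic sector 63 on a 64K-chunk RAID5 with
# 5 data disks is misaligned; one starting at sector 640 lands exactly on
# a stripe boundary (640 * 512 = 327680 bytes = one full stripe).
print(aligned_to_stripe(63, 64, 5))    # False
print(aligned_to_stripe(640, 64, 5))   # True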

I then did a lot of testing to see what I could get. Raw RAID5 performance was much better, with 129 MBytes/s writing and 229 MBytes/s reading (no encryption). But alas, there was still a big hit when adding loop-aes between the filesystem and the raid device.

These are my results, in KBytes/sec:

# Write   Read      Configuration: filesystem_[xfs su/sw]_cipher_chunksize_raidlevel
89864,  93406,     reiserfs_aes256_cs64k_raid0,
100195, 104740,    xfs_su16_sw96_aes256_cs64k_raid0,

28142,  85033,     xfs_su1_sw6_aes256_cs4k_raid10,
31375,  65902,     reiserfs_aes256_cs4k_raid10,
73083,  71314,     reiserfs_aes256_cs64k_raid10,
99723,  94169,     xfs_aes256_cs64k_raid10,
99472,  97149,     xfs_su16_sw96_aes256_cs64k_raid10,

35530,  67729,     reiserfs_aes256_cs4k_raid5,
33692,  87475,     xfs_su0_sw0_aes256_cs4k_raid5,
34126,  89407,     xfs_su1_sw5_aes256_cs4k_raid5,
34047,  73184,     reiserfs_aes256_cs16k_raid5,
29406,  101852,    xfs_su4_sw20_aes256_cs16k_raid5,
33283,  69343,     reiserfs_aes256_cs32k_raid5,
29771,  98699,     xfs_su8_sw40_aes256_cs32k_raid5,
26330,  56534,     reiserfs_aes256_cs256k_raid5,
24633,  60235,     xfs_su64_sw320_aes256_cs256k_raid5,

# for xfs file systems, su=Stripe Unit, sw=Stripe Width, in 4K blocks

I found that specifying the proper stripe unit and stripe width to XFS resulted in a performance improvement, but it's pretty clear that loop-aes is hiding the disk geometry from the file system, and this is part of the reason for the performance hit.
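For reference, the su/sw numbers in the table come from a simple calculation, following the convention in the note above (everything in 4K blocks): stripe unit is the RAID chunk size divided by the block size, and stripe width is the stripe unit times the number of data-bearing disks. A small sketch of that arithmetic, with the 6-drive configurations from this setup plugged in:

def xfs_su_sw(chunk_kib, data_disks, block_kib=4):
    # Stripe unit = chunk size in 4K blocks; stripe width = su * data disks.
    su = chunk_kib // block_kib
    sw = su * data_disks
    return su, sw

print(xfs_su_sw(64, 6))   # raid0,  6 data disks -> (16, 96)
print(xfs_su_sw(4, 5))    # raid5,  5 data disks -> (1, 5)
print(xfs_su_sw(16, 5))   # raid5,  5 data disks -> (4, 20)
print(xfs_su_sw(32, 5))   # raid5,  5 data disks -> (8, 40)
print(xfs_su_sw(256, 5))  # raid5,  5 data disks -> (64, 320)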

Raid10 had decent write/read speed, at the cost of 800 Gbytes of space lost. But I'm cheap, so I'll stick with Raid5.

As an experiment, I tried building a raid5 device on top of 6 loop devices running loop-aes. This was horrible beyond belief, with a raid rebuild rate of only 2 Mbytes/second. I stopped it after 5 minutes of watching it crawl along. Bad Idea.


Part of the reason loop-AES on top of Linux software RAID5 performs
badly is that loop-AES bangs the backing device with page-size requests.

Linux software RAID5 wants bigger requests to be able to provide better
MBytes/s values. Partial-stripe writes are performance killers for
Linux software RAID5, which has to do 2 reads and 2 writes for each write
request. I haven't looked at the RAID6 parity algorithm, but I assume that
it has to read all unmodified data blocks in the stripe and do 3 writes for
each write request.

--
Jari Ruusu  1024R/3A220F51 5B 4B F9 BB D3 3F 52 E9  DB 1D EB E3 24 0E A9 DD

I think you have put your finger on the main problem, and I'm wondering if higher performance can be achieved by adding the encryption functionality into the top interface of the raid device, so that it can perform encrypt/decrypt on chunksize blocks, rather than 4k pages. Alternatively, I wonder if it's possible to hack or adjust loop-aes so that it presents a larger blocksize, like 16K or 32K, rather than the standard 4K page. Matching the chunksize of the raid device ought to provide *some* improvement. I noted that the highest write performance came from a 4K raid chunksize.
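To spell out the read-modify-write penalty described above, here is a purely illustrative Python sketch (not loop-AES or md code) of what a partial-stripe RAID5 write costs: read the old data chunk and the old parity, XOR both with the new data, then write the new data and the new parity, i.e. 2 reads and 2 writes for a single page-sized request.

import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

CHUNK = 4096                                    # one page-sized request
stripe = [os.urandom(CHUNK) for _ in range(5)]  # 5 data chunks (6-disk RAID5)
parity = stripe[0]
for chunk in stripe[1:]:
    parity = xor(parity, chunk)                 # parity for the full stripe

# Partial-stripe write: replace only data chunk 2.
new_data   = os.urandom(CHUNK)
old_data   = stripe[2]                          # read 1: old data chunk
old_parity = parity                             # read 2: old parity chunk
new_parity = xor(xor(old_parity, old_data), new_data)
stripe[2]  = new_data                           # write 1: new data chunk
parity     = new_parity                         # write 2: new parity chunk

# Sanity check: the incremental update equals recomputing parity from scratch.
check = stripe[0]
for chunk in stripe[1:]:
    check = xor(check, chunk)
assert check == parity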

In any event, I've chosen to go with a 16K chunksize on the raid5 array, and will just bite my tongue and endure the lower write performance, since read performance is more important to me than write performance.

Thank you everyone!

George Koss




