George Koss wrote:
> But it all went to hell when I tried the full stack on the six drive RAID5
> or RAID6 array. Performance for writing fell to a pitiful 15 Mbytes/sec.
> I theorized that maybe the RAID helper thread was thrashing the caches, so I
> reduced the raid array chunksize to 16K, then 8K, then 4K. This didn't help
> at all. I increased the drive readahead to 1024 sectors, which seemed to
> help the read performance, but did nothing for write performance. The SATA2
> drives are already using 16 sector transfers, which appears to be the
> maximum possible. I've tried ext3, xfs, jfs and reiser3.6 filesystems;
> nothing seems to help. The best write speed I've gotten so far is 17
> Mbytes/sec with RAID6 and 64K chunksizes on a reiser3.6 filesystem.
>
> RAID5 performance is terrible when I stack loop-AES on top of it. Without
> loop-AES, I'm getting 99 Mbytes/sec read and 105 Mbytes/sec write with
> RAID5. This is the performance level I want to hit, since it's just about
> right for transferring data over Gigabit ethernet.

Part of the reason why loop-AES on top of Linux software RAID5 performs badly
is that loop-AES bangs the backing device with page-sized requests, while
Linux software RAID5 wants bigger requests to be able to deliver better
MBytes/sec numbers. Partial-stripe writes are performance killers for Linux
software RAID5, which has to do 2 reads and 2 writes for each such write
request. I haven't looked at the RAID6 parity algorithm, but I assume it has
to read all unmodified data blocks in the stripe and do 3 writes for each
write request.

--
Jari Ruusu  1024R/3A220F51  5B 4B F9 BB D3 3F 52 E9  DB 1D EB E3 24 0E A9 DD

-
Linux-crypto:  cryptography in and on the Linux system
Archive:       http://mail.nl.linux.org/linux-crypto/
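
To put numbers on the partial-stripe problem described above: with six drives
and 64K chunks, a full RAID5 stripe carries 5 x 64K = 320K of data, so the 4K
page-sized requests coming from loop-AES typically land as partial-stripe
writes and force a read-modify-write cycle. The parity arithmetic behind those
2 reads and 2 writes is plain XOR. Below is a minimal C sketch of the update
step; the function name and calling convention are invented for illustration,
this is not the actual md driver code:

/* RAID5 read-modify-write parity update for a partial-stripe write:
 * new parity = old parity XOR old data XOR new data.
 * Illustrative sketch only, not the md driver implementation. */
#include <stddef.h>
#include <stdint.h>

static void raid5_rmw_parity(uint8_t *parity, const uint8_t *old_data,
                             const uint8_t *new_data, size_t len)
{
    /* Caller has already done the 2 reads (old_data and old parity).
     * Update the parity in place; caller then issues the 2 writes
     * (new_data and the updated parity). */
    for (size_t i = 0; i < len; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}

The 2 reads exist only to obtain old_data and the old parity; on a full-stripe
write all the data blocks are new, parity can be computed from them directly,
and nothing needs to be read back first. That is why larger requests help
software RAID5 so much.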