Re: strange iowait-cycles

A quick update on this:

I've switched to kernel 2.6.28 and a 64-bit system, and observed a performance shift. Cryptoloop performance has dropped somewhat (oddly), now showing throughput similar to that of my old dm-crypt setup (80% iowait, ~17 MB/s read), while my RAID 5 is now down to 20% iowait, with kcryptd up to 50%, allowing me reads in the 30 MB/s range.
Potentially cryptoloop is defaulting to the new x86-64 AES implementation, which would explain the difference. But for now, color me surprised.
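
For anyone who wants to reproduce the comparison, this is roughly the kind of read test I'd run (device paths are examples only -- substitute your own, and dropping the cache needs root):

```shell
# Example device paths -- substitute your own raw array and crypto mapping.
RAW=/dev/md0
MAPPED=/dev/mapper/crypt0

# Drop the page cache first, so both reads actually hit the disks.
sync; echo 3 > /proc/sys/vm/drop_caches

# Sequential read from the raw array -- no crypto in the path.
dd if=$RAW of=/dev/null bs=1M count=1024

# Same read through the crypto layer; the difference in the MB/s figure
# that dd prints at the end is the cost of the crypto/mapping layer.
dd if=$MAPPED of=/dev/null bs=1M count=1024
```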


Rick


On Sat, 29 Aug 2009 09:26:11 -0700 "Nehls, Patrick" <pnehls@xxxxxxxx> wrote:

> Hi Rick,
> 
> I too have been seeing this issue for quite a while and have so far found no solution. I have tried several different iterations of RAID 5 md in combination with dm-crypt and lvm2, and all have had "reasonable" read speeds but incredibly slow write speeds after initial high-speed bursts, sometimes lasting up to a few GB.
> 
> In the past I had used an md->lvm->dmcrypt layering for the RAID 5. Currently I'm trying dmcrypt per drive->md->lvm, as this gets one kcryptd process per drive. Originally I thought that the single kcryptd was potentially the bottleneck. Unfortunately, though this seems to have made the system a bit more responsive, it hasn't helped throughput much.
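> 
> For reference, the per-drive layering I mean looks roughly like this (disk names, mapping names, and the volume group name are just examples, not my exact commands):
> 
> ```shell
> # One dm-crypt mapping per member disk -> one kcryptd thread per disk.
> for d in sdb sdc sdd sde; do
>     cryptsetup luksOpen /dev/$d crypt-$d
> done
> 
> # RAID 5 assembled across the decrypted mappings.
> mdadm --create /dev/md0 --level=5 --raid-devices=4 \
>       /dev/mapper/crypt-sdb /dev/mapper/crypt-sdc \
>       /dev/mapper/crypt-sdd /dev/mapper/crypt-sde
> 
> # LVM on top of the array.
> pvcreate /dev/md0
> vgcreate vg0 /dev/md0
> ```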
> 
> Using iostat -d -m -N -x 1, I'll see the md device usually do 6-10 MB/s writes, with occasional spikes up into the 30 MB/s or higher range, but they quickly drop back down. Even this varies, with occasionally higher or lower sustained performance. Also, inevitably some to most of the sd* and dm* devices that make up the array will be at 100% in the %util column, which I believe indicates that under current conditions the individual disks are unable to service any more IO requests. Some combination of things seems to max out the disks.
> 
> One older suggestion found online was to renice some of the kcryptd or other processes involved, which has not helped me. Another involved disabling all CPU speed throttling, which also didn't help. I'm running Ubuntu 9.04 server x64 (2.6.28-14) on a Core 2 Quad box with 8 GB of RAM. The disks are connected via a 3ware controller in JBOD mode, and maybe one via the onboard SATA from the ICH9R controller. The box has cryptsetup 1.0.6 as well, and the array is using twofish xts-plain. One of my next steps was going to be a Gentoo install with whatever bleeding-edge kernel, cryptsetup, and mdadm code I could find, to see if that helped.
> 
> I too feel the problem is some interaction between dmcrypt and the md code or how the kernel deals with loopback devices. Unfortunately I haven't found a functional solution or even a better method of seeing the problem than iostat. If I find anything useful I'll send it along and I'd be happy to test out any potential solutions you find.
> 
> A few questions: Are you using LVM at all? Have you tested this scenario with RAID 0 or 1? How are your hard drives hooked up in the system?
> 
> Patrick 
> 
> -----Original Message-----
> From: dm-crypt-bounces@xxxxxxxx [mailto:dm-crypt-bounces@xxxxxxxx] On Behalf Of Rick Moritz
> Sent: Friday, August 28, 2009 11:03 AM
> To: dm-crypt-list
> Subject:  strange iowait-cycles
> 
> Dear List,
> 
> I have encountered a performance issue with dm-crypt, for which the algorithms at Google could not point me to a solution.
> I am running dm-crypt on a RAID 5 md device. The RAID is capable of sustained read rates of around 150MB/s.
> Using dm-crypt I am able to get up to slightly more than 20MB/s read performance.
> Now, this would not surprise me all that much, but cryptoloop on another RAID (capable of 30-35 MB/s sustained read) is giving me 30MB/s. 
> 
> The interesting issue is that during a read from the mapped/crypted device, top reports around 20-30% CPU usage by kcryptd and a whopping 70-80% iowait cycles. Now, I don't quite understand what is summed up under iowait. If it extends to the memory subsystem, I may just be memory/cache starved (Sempron 3200+: 1800 MHz, 128 kB L2 cache, DDR2-533). If it doesn't, then I am completely clueless, as the underlying I/O subsystem is clearly capable of sustaining higher data rates.
> 
> Even worse than reads are writes. I get an initial burst of about 80-100 MB (each disk in the 4-disk array has only 16 MB of cache) at 25 MB/s, then throughput falls to 8 MB/s.
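> 
> For anyone reproducing this, a direct-I/O write test should factor out that initial burst, since oflag=direct bypasses the page cache (the mount point here is just an example):
> 
> ```shell
> # Direct I/O write: measures sustained throughput through the crypto
> # layer, excluding the initial page-cache absorption. Path is an example.
> dd if=/dev/zero of=/mnt/crypt/testfile bs=1M count=512 oflag=direct
> rm /mnt/crypt/testfile
> ```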
> 
> One possibility that I just came up with is that this performance drop may be due to mapper-requests being non-sequential, where raw device requests are perfectly sequential - but nobody else has mentioned similar issues, so this seems strange.
> 
> On to the boring part:
> /dev/mapper//dev/mapper/0 is active:
>   cipher:  aes-cbc-plain
>   keysize: 256 bits
>   device:  /dev/md0
>   offset:  0 sectors
>   size:    2344252416 sectors
>   mode:    read/write
> 
> cryptsetup is at version 1.0.6-r2 according to Portage.
> 
> Linux Skeletor 2.6.25-hardened-r8 #1 Sat Nov 22 23:22:01 CET 2008 i686 AMD Sempron(tm) Processor 3200+ AuthenticAMD GNU/Linux
> 
> 
> There's a whole lot of PaX and grsecurity going on, like address space randomization, in case that is of interest.
> 
> I'd be hugely thankful if anyone could check performance on low-cache machines and performance versus cryptoloop.
> I'm currently not willing to upgrade the kernel, waiting for version 2.6.29 to be declared stable by the Gentoo hardened team.
> Any tips would also be appreciated.
> 
> Thanks all!
> 
> Rick


_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt
