Hi,

I have been using loop-AES for quite some time now, and it has been
working fine until today. Now I see much degraded performance on
certain files. I am not sure what is causing this (or whether loop-AES
is even the culprit), so any pointers on how to investigate further
would be appreciated.

/dev/loop3: [0301]:516734 (/dev/hda2) encryption=AES128 multi-key-v2

The file is 200 MB in size and not cached:

> time cat filea >/dev/null
real    2m25.955s
user    0m0.056s
sys     0m1.135s

== results from iostat -x 10 ==

(IDLE)
avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.60    0.00    0.70    0.60   97.10

Device:  rrqm/s wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda        3.10   1.10  0.10 1.50   25.60   20.80   12.80  10.40    29.00     0.05    29.12   8.69   1.39

(START READ)
avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.90    0.00    8.61   72.37   17.12

Device:  rrqm/s wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda      155.06   1.90 72.47 0.60 1821.82   20.02  910.91  10.01    25.21    13.37   182.63  10.39  75.94

avg-cpu:  %user   %nice    %sys %iowait   %idle
           2.00    0.00   33.63   64.37    0.00

Device:  rrqm/s wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda      947.31   0.80 23.15 0.90 7762.08   13.57 3881.04   6.79   323.29     1.14    48.71  29.46  70.86

avg-cpu:  %user   %nice    %sys %iowait   %idle
           2.10    0.00   22.38   75.52    0.00

Device:  rrqm/s wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda      603.60   4.00 36.86 1.50 5190.01   44.76 2595.00  22.38   136.46    35.00   198.02  22.10  84.78

Device:  rrqm/s wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda        2.40   1.00  6.11 1.30   60.06   29.63   30.03  14.81    12.11    52.72  6087.69 135.16 100.12

avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.60    0.00    0.90   97.50    0.00

Device:  rrqm/s wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda        0.00   0.20  3.30 2.10   27.20    6.40   13.60   3.20     6.22    50.25  9371.72 185.22 100.02

avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.20    0.00    0.60   98.20    0.00

Device:  rrqm/s
         wrqm/s   r/s  w/s  rsec/s  wsec/s   rkB/s  wkB/s avgrq-sz avgqu-sz    await  svctm  %util
hda        1.10   0.70  4.70 0.80   56.00   12.00   28.00   6.00    12.36    41.14 10463.84 181.85 100.02

<snip>

After some time it ends up working like at the start. This seems to
happen on fragmented files:

> filefrag filea
1483 extents found, perfection would be 2 extents

Compare with another file, also 200 MB in size:

> time cat fileb >/dev/null
real    1m3.854s
user    0m0.057s
sys     0m1.151s

> filefrag fileb
6 extents found, perfection would be 2 extents

For fileb the iostat results look like the first sample for filea all
the way through, with no huge utilization or queue-size growth. I
normally get around 4 MB/s throughput.

Hardware:
  PII 450 MHz
  96 MB RAM
  Linux 2.6.12
  loop-AES (can't find the installed version number)

Sorry if this mail just shows obvious and expected behaviour, but I
cannot find a good explanation for these slowdowns. Are there any ways
to examine the queues to see what is taking so long?

best regards
Kim

--
Linux-crypto: cryptography in and on the Linux system
Archive: http://mail.nl.linux.org/linux-crypto/
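For reference, the timings above imply the following effective read
rates (a quick back-of-envelope sketch, taking both files as a flat
200 MB):

```python
def throughput_mb_s(size_mb, real_seconds):
    """Effective sequential read rate implied by `time cat file >/dev/null`."""
    return size_mb / real_seconds

# filea: 200 MB in 2m25.955s -> ~1.37 MB/s
# fileb: 200 MB in 1m3.854s  -> ~3.13 MB/s
print(round(throughput_mb_s(200, 2 * 60 + 25.955), 2))
print(round(throughput_mb_s(200, 63.854), 2))
```

So the fragmented file is read at roughly a third of the usual
~4 MB/s, while the mostly-contiguous one is close to normal.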
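The filefrag counts invite a seek-cost sanity check. A minimal sketch,
assuming a nominal 12 ms average seek for an IDE disk of that era (the
12 ms figure is an assumption, not something measured in the mail):

```python
SEEK_MS = 12.0  # assumed average seek time; not measured

def extra_seek_seconds(extents):
    """Rough added seek time for a fragmented file: one seek per
    extent boundary, at the assumed per-seek cost."""
    return (extents - 1) * SEEK_MS / 1000.0

print(extra_seek_seconds(1483))  # filea: ~17.8 s of pure seeking
print(extra_seek_seconds(6))     # fileb: ~0.06 s
```

Even with this pessimistic per-seek figure, raw seeking accounts for
well under the ~80 s gap between the two files, which suggests the
queue behaviour visible in the iostat samples (await climbing into the
thousands of milliseconds) is where most of the time goes.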
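On the question of examining the queues: a 2.6 kernel exports
per-device counters in /proc/diskstats, including the current
in-flight request count and the time-weighted queue occupancy. A
minimal parsing sketch (the helper name and the choice of "hda" are
illustrative, not from the mail):

```python
def parse_diskstats(text, dev):
    """Pull (in_flight, io_ms, weighted_io_ms) for one device out of
    the contents of /proc/diskstats (2.6 field layout: the 9th, 10th
    and 11th counters after the major/minor/name columns)."""
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 14 and parts[2] == dev:
            return int(parts[11]), int(parts[12]), int(parts[13])
    raise ValueError("device %r not found" % dev)

# On a live box, sample once a second while the slow read runs:
#   with open("/proc/diskstats") as f:
#       print(parse_diskstats(f.read(), "hda"))
```

A weighted_io_ms delta far larger than the sampling interval means
many requests sat queued at once, which would match the avgqu-sz ~50
and await ~10000 ms samples above. blktrace gives much finer per-request
detail, but it needs a later 2.6 kernel than 2.6.12.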