On 11 September 2010 16:57, Heinz Diehl <htd@xxxxxxxxxxxxxxxxx> wrote:
> On 11.09.2010, dave b wrote:
>
>> A simple test is just to dd if=/dev/zero of=DELETEME for a short time
>> and the system will stall rather a lot - it is not a complete lock up
>> - just the entire system is unresponsive for large periods at a time
>> (from around 30 seconds to 2 minutes).
>
> This is most likely due to the massive disk i/o which is generated by the
> dd command.

I don't think *this* should stall the entire system :)

>> This may be related to
>> http://thread.gmane.org/gmane.linux.kernel.mm/51444 .
>
> I think this is completely unrelated, it's more a kind of a disk scheduler
> issue. I can't compare directly, because I'm running XFS on all of my
> machines, but you could try to fine-tune your disk scheduler. In any case,
> writing a big file and working on the same disk at the same time will
> always give you, hmmm.. "some delay", even without using encryption.

Agreed. :)

> You can see stalls up to 23 secs here, too. I'm using the latest kernel
> 2.6.36-rc3 from Linus' git repository, with the "global workqueue per cpu"
> patch from Andi Kleen on top of it (which should not give any performance
> boost here in this case). Scheduler is cfq, tuned this way:

My cfq settings are the same.

Well, it isn't just dd - if you do a lot of grepping, find . etc., like
rkhunter does, then the system also noticeably stalls :/

I do not think the test you attached is a good representation of the
*actual* behaviour. What it looks like to me is that there is a total
collapse of scheduled reads vs. writes for *all* programs (other than the
process causing the I/O work) for a given period.

When I test the deadline scheduler my system is slightly more usable :) -
noop is a bit iffy.
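For reference, the scheduler in use can be inspected (and, as root, switched) through sysfs on these kernels. This is a minimal sketch of my own, not something from the thread; "sda" is an assumption, adjust for your device:

```python
def read_scheduler(dev="sda"):
    """Return the available I/O schedulers for `dev` (current one shown in
    brackets, e.g. "noop deadline [cfq]"), or None if the sysfs file is
    absent (wrong device name, or not running on Linux)."""
    path = "/sys/block/%s/queue/scheduler" % dev
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

sched = read_scheduler()
print(sched if sched is not None else "scheduler file not found")
```

Switching at runtime is just a write to the same file as root, e.g. `echo deadline > /sys/block/sda/queue/scheduler`.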
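The attached test presumably writes and then times fsync() while the dd runs. A minimal sketch of that kind of probe, assuming nothing about the original script (file name, buffer size, and iteration count below are my arbitrary choices):

```python
import os, time

def fsync_latency(path, iterations=5, bufsize=4096):
    """Append `bufsize` zero bytes, then time os.fsync(); returns a list of
    per-call latencies in seconds. The file is removed afterwards."""
    times = []
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o600)
    buf = b"\0" * bufsize
    try:
        for _ in range(iterations):
            os.write(fd, buf)
            t0 = time.time()
            os.fsync(fd)          # this is the call that stalls under load
            times.append(time.time() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return times

for t in fsync_latency("fsync-test.tmp"):
    print("fsync time: %.4f" % t)
```

Run it on the same filesystem the dd is writing to; on an idle disk the numbers stay in the millisecond range, and under heavy writeout they blow up the way the figures below show.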
Without dding:

atop --> DSK | sda | busy 64% | read 78040 | write 133104 | avio 3 ms |

hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   6382 MB in  2.00 seconds = 3191.73 MB/sec
 Timing buffered disk reads:  382 MB in  3.02 seconds = 126.67 MB/sec

While dding (dd if=/dev/zero of=TEMP count=15553600):

atop --> DSK | sda | busy 98% | read 5 | write 2057 | avio 4 ms |

(it spiked at 98% for extended periods, vs. the one-offs seen while
experimenting with noop and deadline)

-----------

Some of the output while dding using: dd if=/dev/zero of=TEMP count=15553600

CFQ:

 924  6.93s  0.00s  0K  0K  0K  0K  --  -  R  68%  kcryptd

fsync time: 0.0275
fsync time: 0.0232
fsync time: 0.0274
fsync time: 0.0215
fsync time: 0.7706
fsync time: 9.3548
fsync time: 14.4264
fsync time: 10.4625
fsync time: 12.5968
fsync time: 16.5984
fsync time: 15.4739
fsync time: 2.3007
fsync time: 0.0249
fsync time: 0.0513

Deadline:

fsync time: 0.1949
fsync time: 6.9050
fsync time: 14.3582
fsync time: 13.0077
fsync time: 11.6368
fsync time: 12.3486
fsync time: 5.1431
etc.

Really you can see that deadline is 'worse', even though my system is more
usable than with CFQ. Let's switch to noop ;)

Noop (output for the entire dd run; the system is still more usable than
with CFQ):

fsync time: 0.0243
fsync time: 0.0100
fsync time: 0.0186
fsync time: 1.1323
fsync time: 14.5452
fsync time: 18.4730
fsync time: 18.9720
fsync time: 15.3721
fsync time: 9.4640
fsync time: 1.6990
fsync time: 0.0179
fsync time: 0.0178
fsync time: 0.0477
fsync time: 0.0263
fsync time: 0.0200
fsync time: 0.0279

I think we need a better test :-)

_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt