Sorry for being silent so long, I did a lot of testing ... I even set up a
dedicated testing box cobbled together from spare parts :)

1) It's not KVM, I took that out of the mix.
2) dm-crypt is at least a co-culprit, since LVM on md works fine.
3) Tuning the I/O scheduler and/or the /proc/sys/vm/ settings does little.

So I tried playing with the layering:

a) dm-crypt on LVM on md (1 kcryptd per LV): bad
b) LVM on dm-crypt on md (1 kcryptd): worse
c) LVM on md on dm-crypt (1 kcryptd per physical disk): excellent

The last case has no abnormal starvation issues at all, and performance is
near the raw speed of the array. It's a landslide, even on the dual-core
test box (~110 MiB/s per core, max. 60 MB/s per disk). If only it weren't
so ugly: in the whole-stripe case the data now gets encrypted n/(n-1)
times over, since the parity passes through dm-crypt too, and I don't even
want to think about read-modify-write ...

Multicore dm-crypt would be really nice to have now ... I've opened an
LKML thread in the hope that something can be done outside dm-crypt, but
so far, not much ...

Thanks,
Chris
_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt
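
P.S. For anyone wanting to reproduce layering c (LVM on md on dm-crypt),
here is a rough sketch. Device names, mapping names, array geometry, and
the VG/LV names are all placeholders, not my actual setup:

```shell
# One dm-crypt mapping per physical disk, so each gets its own
# kcryptd thread (device and mapping names are illustrative).
cryptsetup luksFormat /dev/sdb
cryptsetup luksFormat /dev/sdc
cryptsetup luksFormat /dev/sdd
cryptsetup luksOpen /dev/sdb crypt0
cryptsetup luksOpen /dev/sdc crypt1
cryptsetup luksOpen /dev/sdd crypt2

# Build the md array on top of the encrypted mappings ...
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/mapper/crypt0 /dev/mapper/crypt1 /dev/mapper/crypt2

# ... and put LVM on top of the array.
pvcreate /dev/md0
vgcreate vg_test /dev/md0
lvcreate -L 10G -n lv_test vg_test
```

Note the cost: md's parity writes now pass through dm-crypt as well,
which is where the extra encryption work comes from.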
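
P.P.S. To put a number on the "encrypted n/(n-1) times" overhead, a quick
back-of-envelope check (the disk counts here are illustrative, not my
array):

```python
def encryption_amplification(n):
    """RAID-5 whole-stripe write over n disks: n-1 data chunks plus
    1 parity chunk all pass through dm-crypt when encryption sits
    below md, so the crypto layer sees n/(n-1) times the logical data."""
    return n / (n - 1)

for n in (3, 4, 8):
    print(f"{n} disks: x{encryption_amplification(n):.2f} encryption work")
```

So the penalty shrinks as the array grows: x1.50 for 3 disks, but only
x1.14 for 8.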