I think earlier you (or someone else in this thread?) mentioned that the
slowdown is a function of array speed (higher is worse) and processing
power (slower is worse). With AES hardware acceleration coming to
mainstream x86 with the next generation of Intel CPUs (AES-NI), would
you expect the issue to be allayed? Is anyone running one of the first
two scenarios on a machine with native AES instructions?

I expect the last scenario only scales up to a core/disk ratio of one?
Is the advantage still there when you've only got one core available?
(I expect that to be the case, because the issue appears to be
architectural: somewhere the I/O subsystem is inducing wait states ...)

On Wed, 21 Oct 2009 12:48:55 +0200
Christian Pernegger <pernegger@xxxxxxxxx> wrote:

> Sorry for being silent so long, I did a lot of testing ... I've even
> set up a dedicated testing box cobbled together from spare parts :)
> 
> 1) It's not kvm, I took that out of the mix.
> 2) dm-crypt is at least a co-culprit, since lvm on md works fine.
> 3) Tuning the I/O scheduler and/or the /proc/sys/vm/ settings does
> little.
> 
> So I tried playing with the layering:
> 
> a) dm-crypt on lvm on md (1 kcryptd / LV): bad
> b) lvm on dm-crypt on md (1 kcryptd): worse
> c) lvm on md on dm-crypt (1 kcryptd / physical disk): excellent
> 
> The last case does not have any abnormal starvation issues at all, and
> performance is near the raw speed of the array. It's a landslide,
> even on the dual-core testbox (~110 MiB/s per core, max. 60 MB/s per
> disk).
> 
> If only it weren't so ugly. Data is now encrypted n/(n-1) times in the
> whole-stripe case (the parity goes through the cipher too), and I don't
> even want to think about read-modify-write ... Multicore dm-crypt
> would be really nice to have now ...
> 
> I've opened an LKML thread in the hope that something can be done
> outside dm-crypt, but so far, not much ...
> 
> Thanks,
> 
> Chris
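
P.S. For the archives, a minimal sketch of how I read layout (c). The
partition names, the crypt0/crypt1 mapping names and the VG/LV names
are placeholders, and a two-disk RAID1 stands in for the real array:

    # One dm-crypt mapping (and thus one kcryptd thread) per physical disk.
    cryptsetup luksFormat /dev/sda2
    cryptsetup luksFormat /dev/sdb2
    cryptsetup luksOpen /dev/sda2 crypt0
    cryptsetup luksOpen /dev/sdb2 crypt1

    # The md array is assembled from the plaintext-side mappings,
    # so md and everything above it never see ciphertext.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/crypt0 /dev/mapper/crypt1

    # LVM is stacked on top of the array as usual.
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 10G -n test vg0

Note that each member disk gets its own LUKS header, so in practice
you'd probably want a shared key file (or an unlock script) to avoid
typing n passphrases at boot.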