Hi Milan.

Thanks for your answers :)

On Sun, 2013-07-07 at 19:39 +0200, Milan Broz wrote:
> dmcrypt keeps IO running on the CPU core which submitted it.
>
> So if you have multiple IOs submitted in parallel from *different* CPUs,
> they are processed in parallel.
>
> If you have MD over dmcrypt, this can cause the problem that MD submits
> all IOs from the same CPU context and dmcrypt cannot run them in parallel.

Interesting to know :) I will ask Arno over at the dm-crypt list to add
this to the FAQ.

I'd guess there are no further black-magic issues to expect when mixing
MD and/or dmcrypt with LVM (especially when contiguous allocation is
used)... and even fewer when using partitions (which should be just
offsetting)?!

> (Please note, this applies to kernels with the patch above and later;
> previously it was different. There were a lot of discussions about it,
> some other patches which were never applied to mainline etc.; see the
> dmcrypt and dm-devel list archives for more info...)

IIRC, these included discussions about parallelising IO sent from one CPU
context, right?

That's perhaps a bit off-topic now,... but given that stacking dmcrypt
with MD seems to be done by many people, I guess it's not totally
off-topic, so... Are there any plans for that (parallelising IO from one
core)? That should make it okay again (at least performance-wise) to put
dmcrypt below MD (not that I'd consider that very useful, personally).

> The block layer (including transparent mappings like dmcrypt) can
> reorder requests. It is the FS's responsibility to handle ordering (if
> it is important) through flush requests.

Interesting... but I guess the main filesystems (ext*, xfs, btrfs, jfs)
handle this correctly with any combination of MD/LVM/dmcrypt?

Lots of thanks again,
Chris.
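P.S. For archive readers: the flush-based ordering Milan describes can be
sketched from userspace with an explicit flush (fsync) between two writes
whose order matters. This is only an illustrative two-phase-update sketch
(the file name and record contents are made up); journaling filesystems do
the equivalent internally with flush/FUA requests:

```python
import os, tempfile

# Hypothetical two-phase update: the stack below (MD/LVM/dmcrypt) may
# reorder in-flight writes, so the upper layer enforces ordering itself.
path = os.path.join(tempfile.mkdtemp(), "journal")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
os.write(fd, b"commit-record")  # phase 1: commit record must be durable...
os.fsync(fd)                    # ...so flush before issuing phase 2
os.write(fd, b" data")          # phase 2: may be reordered relative to
                                # later writes, but never lands before
                                # the flushed commit record
os.close(fd)

print(open(path, "rb").read())
```

Without the fsync, nothing stops the block layer from writing phase 2
before phase 1, which is exactly why ordering is the filesystem's job,
not dmcrypt's.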