Hi,

a short summary, because there is a mix of problems and dangerous ideas here :-)

1) If you have a dm-crypt device and export the plaintext block device
above dm-crypt using iSCSI or DRBD (IOW the encryption runs in only one
place) to several nodes - this should work (in theory; not sure if anyone
is using that).
(Using a cluster-aware FS is required here, but that's another level.)
(Note you are exporting the "plaintext" device - so maybe this vetoes
your use case.)

2) You should not export the underlying (ciphertext) block device to
several nodes and activate the dm-crypt layer on several nodes in
parallel (IOW every node runs its own block-level encryption).
dm-crypt was neither designed nor tested to be cluster aware. With the
crypto queue inside (which works asynchronously) you will probably hit
data corruption.
(And I never saw a request for cluster support here; I need to think
about what would have to change here...)

3) LUKS is only a key management system; you can remove it from the
picture completely here for simplification. (There is a small
illustration at the end of this mail.)
(FYI, there is no write request to the LUKS metadata during LUKS
activation.)

On 04/25/2010 07:12 AM, Ryan Lynch wrote:
> Also, the man page for 'cryptsetup' mentions an interesting option:
>
>     --non-exclusive
>
>         This option is ignored. Non-exclusive access to the same block
>         device can cause data corruption, thus this mode is no longer
>         supported by cryptsetup.
>
> There are references to it in the release notes for 1.07 and 1.10

Yes, I mentioned it here:
http://code.google.com/p/cryptsetup/wiki/Cryptsetup110

* Removes support for the dangerous non-exclusive option (it is ignored
now; a LUKS device must always be opened exclusively)

There are multiple problems with non-exclusive access, but probably the
most important is that the system sees two devices and manages their
caches separately, so you can get data corruption - one device sees old
data, and (f)sync will not help here.

> I also had an idea to test the basic concept re: LUKS, with a much
> simpler local configuration:
>   - Allocate a file big enough to hold a LUKS/dm-crypt volume (a few
>     10s of MBs should be enough).
>   - Set up a first loopback device on the new file.
>   - Set up a second loopback device, backed by either the file or
>     the first loopback device.

A waste of time - it will not work properly for *data* access (and here
the loop driver would probably prevent it anyway).
(The LUKS metadata is written using direct-io, and is read-only during
activation, so you would see it "work" in that case - but that is not
what matters here.)
Adding loop to the mix will cause another level of headache; you have
been warned ;-)

>> One data-loss risk I see is that dm-crypt does not "sync" until
>> unmapped (found that out when zeroing a new dm-crypt partition,

What do you mean? If you need a sync, issue it explicitly, or use
direct-io (try dd oflag=direct ...; see the example at the end of this
mail).

There is the barrier concept, and device-mapper fully supports it now.
(fsync() issues a barrier request for the device at the end and waits
for it; this is handled in core device-mapper, so no additional code is
needed in dm-crypt.)

>> the start of the partition only got overwritten after unmapping).
>> So for a hard sync where you need to be sure the data is on disk,
>> you would need to remove the mapping.

No. See above.

> I don't think I understand this part very well: Is dm-crypt operating
> synchronously WRT the underlying device?

Basically, no. But it will sync if requested.
Not sure I understand which level you mean, but there is a queue for
crypto requests and these are processed later.
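To illustrate point 3 above - a minimal sketch of taking the key
management layer out of the picture; the device and mapping names are
only examples, not taken from your setup:

    # plain dm-crypt mapping - no LUKS metadata on disk at all, the
    # passphrase is hashed directly into the volume key
    cryptsetup create plainvol /dev/sdb1

    # LUKS mapping - the on-disk header is used only for key management
    # during activation; regular data I/O goes through plain dm-crypt
    cryptsetup luksFormat /dev/sdc1
    cryptsetup luksOpen /dev/sdc1 luksvol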
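And regarding the explicit sync mentioned above, a minimal sketch for
zeroing a mapped device (the mapping name is again just an example):

    # write through direct-io, bypassing the page cache
    dd if=/dev/zero of=/dev/mapper/secretvol bs=1M oflag=direct

    # or write through the cache and force the data to disk at the end
    dd if=/dev/zero of=/dev/mapper/secretvol bs=1M conv=fsync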
Milan