Hi,

On 02/06/2013 11:06 AM, Stavros Kousidis wrote:
> One essential issue that concerns full disk encryption on SSDs, that
> I have not seen in a mail discussion here so far (might be there and
> I simply missed it), is the distribution of an uncontrollable amount
> of copies of SSD-page contents (~4096 Bytes) where only a limited
> number of blocks (~16 Bytes) have changed. This is initiated by local
> changes in userspace data and technically due to the complex nature
> of the flash translation layer (mainly wear leveling techniques), the
> narrow-block encryption modes (here: XTS) and sector-wise constant
> IVs. In Cipher-block chaining mode the position where a bit-flip
> happened is visible in principle.

The main problem I see with these analyses is that they usually
consider only part of the system (either the SSD architecture or the
block encryption). But it is a more complex system with several
interfering layers, and this interference can make the problem of
ciphertext block copies worse, but it can also reduce its impact.

An example (illustrative, but I hope not completely wrong):

Imagine a write pattern that changes several consecutive 512-byte
sectors with some delay between the writes.

In theory, that is exactly the case you are describing: you will get
several copies inside the SSD (on the flash chips) because of wear
leveling and similar techniques (do not forget internal garbage
collection as well).

In reality, it depends:

- The filesystem above uses larger blocks as well (most commonly the
  4k page size, I think). The filesystem (and the page cache) will
  submit IO requests only in this block (page) size, not per 512-byte
  sector. Depending on timing, you will see either several IOs or
  just one (write-cache effect).

- The transparent dmcrypt layer encrypts in 512-byte units, but it
  always encrypts the whole submitted IO before resubmitting it.
  (Let's ignore the corner case of an IO split because of lack of
  memory or a storage restriction for now.) (Note that dmcrypt has no
  IO scheduler of its own, but it has internal queues for requests,
  so there is some delay in processing.)

- The encrypted IO request is submitted to the underlying device,
  where the IO scheduler is responsible for the real submission to
  the device. Here it can be (and, with the exception of the noop
  scheduler, will be) merged into one big IO.

- The internal SSD architecture, and storage in general, has an
  internal write cache and will optimize writes as well.

- Storage (specifically SSDs) exports hints for IO sizes, and every
  user in the kernel should (or must) produce IOs restricted by these
  hints.

So, in this example you changed e.g. a whole 4k page in several
writes, but you could see just one final IO! It all depends on how
the stack is configured and how the filesystem uses Flush/FUA
requests to sync in-flight IO.

There are always some corner cases, but what I am trying to say is
that I am not sure this problem is really so important in the usual
use cases. (As someone already mentioned, there are other attack
vectors where the risk is probably higher.)

That said: yes, I am very well aware of this problem, and I would
like to have at least some analysis of what is really going on in
today's flash storage devices and how it relates to disk encryption
security. So let's try to gather some data first.
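Just to make the narrow-block point from the quoted paragraph
concrete, here is a small illustrative sketch (plain Python with the
pyca "cryptography" package, not dm-crypt code; the tweak below is
only a simplified imitation of the plain64 IV). It counts how many
16-byte ciphertext blocks change when a single plaintext bit in a
512-byte sector flips:

import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512

def encrypt_sector(key, sector_number, plaintext):
    # 16-byte tweak: 64-bit little-endian sector number, zero padded
    # (a simplified stand-in for dm-crypt's plain64 IV)
    tweak = struct.pack("<Q", sector_number) + b"\x00" * 8
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key = os.urandom(64)            # AES-256 in XTS mode takes a 512-bit key
sector = 12345
pt1 = os.urandom(SECTOR_SIZE)
pt2 = bytearray(pt1)
pt2[100] ^= 0x01                # flip a single bit somewhere in the sector

ct1 = encrypt_sector(key, sector, pt1)
ct2 = encrypt_sector(key, sector, bytes(pt2))

diff = [i for i in range(0, SECTOR_SIZE, 16) if ct1[i:i+16] != ct2[i:i+16]]
print("changed 16-byte blocks: %d of %d" % (len(diff), SECTOR_SIZE // 16))
# -> changed 16-byte blocks: 1 of 32

With XTS only the one block containing the flipped bit differs (with
CBC everything from that block to the end of the sector changes, so
the position is still visible), which is why two flash copies of the
same sector localize the change so well; a wide-block mode would
change the whole sector instead.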
> A countermeasure to those history-building SSD-mechanisms is to blur
> what is written to the flash device by forcing a multi-sector-wide
> change when a bit changes (preferably: multi-sector = size of file
> system blocks). There are two concrete realizations of that strategy
> that I know of:
>
> 1) Use random IVs. Leaves you with data expansion, since you have to
> store the IVs, and a pain in the *** effort to implement this
> reliably.

Anything that needs more space is a problem (that includes data
integrity, auth tags, whatever). And IMHO a (pseudo-)random IV will
cause more problems than it solves.

> 2) Add a (claimed patent free) wide-block mode like TET or HEHfp (see
> articles below) from the Hash-Encrypt-Hash family to the Crypto-API
> and change the dmcrypt crypto engine to handle variable block sizes
> instead of always operating on 512 Bytes of data (compare the
> discussion:
> http://www.saout.de/pipermail/dm-crypt/2011-April/001667.html)

Yes, the work of deploying a new cipher or encryption mode (either
narrow or wide block) for Linux block encryption starts in the kernel
crypto API. Send a patch to the kernel crypto API list, get it into
the official kernel, and dmcrypt/cryptsetup can start to use it. (But
if there is any patent problem, no chance...)

But do not forget one thing: while cryptsetup is always open to
supporting a wide range of algorithms, a huge part of the user base
is bound by standards which do not allow them to use anything else.
That's why XTS is so widely used.

BTW, does anyone know what happened to the EME2 wide mode?
http://siswg.net/index.php?option=com_content&task=view&id=36&Itemid=1

Milan
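PS: a quick back-of-the-envelope sketch (my own numbers, nothing more)
of what storing a random IV per encrypted block would cost in space
alone, just to illustrate why option 1) is not for free:

# Space overhead of storing 16 bytes of per-block metadata (random IV,
# auth tag, ...) next to the data - back-of-the-envelope only.
def overhead_percent(metadata_bytes, block_bytes):
    return 100.0 * metadata_bytes / block_bytes

for block in (512, 4096):
    print("16-byte IV per %4d-byte block: %.2f%% extra space"
          % (block, overhead_percent(16, block)))
# 16-byte IV per  512-byte block: 3.12% extra space
# 16-byte IV per 4096-byte block: 0.39% extra space

And the space is the smaller part of the pain; the IV and the data
sector then have to be kept consistent across crashes, which is
exactly the "implement this reliably" part from above.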