Re: Loop-AES, security concerns, stability of file backed loop-aes




-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

a.engels@xxxxxxx wrote:
> It appears to me that loop-AES is the only choice under Linux with a truly
> secure implementation of strong encryption. However, I am no cryptanalyst
> and would love to read some professional comments about loop-AES. So, my
> first question is: does somebody know of a link to a document which deals
> with this?

Scientifically? Unfortunately not. Loop-AES is only secure if it has
been set up in multi-key mode with encrypted swap. Loop-AES in
single-key mode is vulnerable to a watermark attack, and
cryptoloop/dm-crypt are vulnerable to watermark and optimized
dictionary attacks.

Optimized dictionary attack:
http://marc.theaimsgroup.com/?l=linux-kernel&m=107419912024246&w=2

Watermark attack:
http://marc.theaimsgroup.com/?l=linux-kernel&m=107719798631935&w=2



> Also, I have questions related to file backed loop-aes encryption.

I would avoid file-backed loop-AES completely. There is too much
uncertainty about what causes the lockups.


> The reason I don't want to use device-backed loop-AES is the dependency on
> the block device. If I use file-backed loop-AES and one server crashes, I
> can just copy the crypto container as a file to an arbitrary fs created on,
> e.g., an IDE or SCSI block device or even a software RAID of a new server.
> I think I wouldn't have this flexibility if I backed up the (IDE, SCSI or
> software RAID) block device with "dd" (maybe I am wrong?).

In theory this should work. A partition image can be mounted via a
loop device, so it should work with loop-AES as well. This should be
sufficient to access your data in an emergency, but be sure to mount
the image read-only: because the filesystem records last access times
(atime), even a simple 'ls -al' causes a write. You could use noatime
as a mount option instead, but I would play it safe.
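As a rough sketch of what that emergency restore could look like (the
loop device, cipher name, paths and mount point here are assumptions;
check the loop-AES README for the exact options your version supports):

```shell
# Attach the copied container file to a loop device with loop-AES
# encryption (prompts for the passphrase). The cipher name AES256
# is an assumption; use whatever the container was created with.
losetup -e AES256 /dev/loop0 /backup/crypto-container.img

# Mount strictly read-only, with noatime as a second safety net,
# so not even access-time updates can touch the image.
mount -o ro,noatime /dev/loop0 /mnt/restore

# ... inspect and copy out your data ...

umount /mnt/restore
losetup -d /dev/loop0
```

This is a system-administration fragment: it needs root and a kernel
with the loop-AES module loaded, so treat it as a template rather than
something to paste blindly.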

I would go along with this.


OTOH, I don't know about your setup... a different approach could be
to split your large data collection into smaller chunks (per
directory, for example), tar/bzip2 them up, and use gpg to encrypt
the resulting *.tar.bz2 files.
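A minimal round-trip sketch of that idea (the paths, the passphrase,
and the use of symmetric encryption are illustrative assumptions; for a
real backup you would encrypt to your public key with gpg -e -r KEYID):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# One directory "chunk" of the larger data collection (sample data).
mkdir -p data/docs
echo "important data" > data/docs/report.txt

# Pack the chunk with tar/bzip2.
tar -cjf docs.tar.bz2 -C data docs

# Encrypt it; -c means symmetric encryption with a passphrase
# (public-key encryption with -e -r KEYID is the usual choice).
gpg --batch --yes --pinentry-mode loopback \
    --passphrase "example-passphrase" -c docs.tar.bz2

# Round-trip check: decrypt and unpack again.
gpg --batch --yes --pinentry-mode loopback \
    --passphrase "example-passphrase" \
    -o restored.tar.bz2 -d docs.tar.bz2.gpg
mkdir restore
tar -xjf restored.tar.bz2 -C restore
cat restore/docs/report.txt
```

Only the *.tar.bz2.gpg files would then leave the machine; the
plaintext chunks can be deleted after packing.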

Additionally you could use rar to create archives with recovery
information, which comes in handy if a network transfer somehow
corrupted the files. The big disadvantage of gpg is that even slightly
corrupted encrypted files can't be decrypted. Therefore I use the
'protective layer' of rar archives. :)

Then you could use rsync (over ssh) for backups. This method is quite
messy (setup, maintenance, resources), but it works.


- -- 
Bastard Administrator in $hell

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFA8YatLMyTO8Kj/uQRAsWIAJ9qTnO0bUS94NV/vF3mbLmLAt7gKwCcD0Lp
EVdTHH8OTiSx62llfryWYOQ=
=y3VY
-----END PGP SIGNATURE-----

-
Linux-crypto:  cryptography in and on the Linux system
Archive:       http://mail.nl.linux.org/linux-crypto/
