Roscoe wrote:
> Is it really critical? What benefit does one gain if one is confident
> in the security of the symmetric cipher?
The goal is to deny a prospective attacker any reasonable information
that he/she could use to engage in a cryptanalytic attack on your data.
One can be confident in the security of any cipher, _for a finite period
of time_, if the overall crypto _system_ is properly implemented
including appropriate key selection, the security of the key and other
related issues.
It is a weakest-link problem, more often than not limited by human
behavior and engineering issues. The goal is to implement the system
in such a way that the most reasonable attack is the brute force
approach of trying every possible key allowable within the specification
of the cipher in use. And that type of attack is purely time limited,
which is why longer keys (256 bits at least) are better.
Select a weak key (either a known weak key for the algorithm or one that
is subject to a dictionary attack) and your security system might as
well be a "wet paper bag".
There are other means of attacking crypto systems, including exploiting
information that is unnecessarily provided to the attacker, not just
the key.
Part of the process of writing random data, especially prior to the
first use of encryption, is to deny an attacker the residual
electromagnetic imprints of old, in-the-clear data on the magnetic
media. Read Peter Gutmann's paper, "Secure Deletion of Data from
Magnetic and Solid-State Memory", available via his home page at:
http://www.cs.auckland.ac.nz/~pgut001/
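For individual files, the GNU coreutils "shred" tool implements this
kind of multi-pass overwrite. A sketch against a throwaway file (on a
real device you would point it at something like /dev/sdX, a placeholder
name here, and destroy everything on it):

```shell
# Demonstrate on a temporary file rather than a real device:
f=$(mktemp)
printf 'secret data' > "$f"
# Three passes of pseudo-random data, then a final zeroing pass (-z)
# to hide the fact that shredding took place:
shred -v -n 3 -z "$f"
rm -f "$f"
```

Note that Gutmann's paper, and shred's own documentation, caution that
overwriting is only effective on media that actually rewrite data in
place, not on journaling filesystems or wear-leveled flash.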
All in all, it is a risk/benefit decision relative to the value of your
data and the loss to you if it is revealed. This is especially true if
that data is protected under law, such as patient-identifiable medical
data, which in the U.S. is covered under HIPAA, with regulations
governing its protection and penalties for compromise.
We all lock our cars and homes to preclude easy access to most
prospective intruders. However, a determined and sophisticated intruder
will render those typical mechanisms impotent, to the point where we
might not know until after the fact that something has been compromised.
As I noted in a prior reply, I am not worried about TLAs (three-letter
agencies), because on the outside we don't know what tools, including
fundamental breakthroughs in number theory, they can avail themselves of.
So that leaves others, who may still have sophisticated tools available.
Pick up one of Bruce Schneier's books for more information. Applied
Cryptography (in both editions) was his first major work, but Practical
Cryptography is probably a better resource. There are also some good
resources on Gutmann's page.
BTW, while we all focus on the use of open source operating systems and
applications as one means to allow for the peer review of source code,
we forget that these operating systems and applications run on top of
proprietary chips which contain firmware for which there is no similar
recourse. Consider what information would be available to the right
parties if so-called "covert channels" were embedded in such chips.
That's why the U.S. government recently restricted the use of Lenovo
PCs in sensitive governmental applications. Recall that Lenovo is the
former IBM PC division (i.e., the ThinkPad line), which IBM sold to a
China-based company.
And that's not being paranoid, just realistic.
> I personally figure dd if=/dev/zero of=/dev/mapper/home before running
> mkfs /dev/mapper/home is sufficient (to protect against the issue
> mentioned below).
>
> (I don't really know much about file systems, but in my imagination,
> given a zeroed disk, if one were to create a dm-crypt encrypted
> filesystem on it, one would be able to ascertain how full the
> encrypted partition was (and possibly some vague information about
> the size/number of files?) by looking at what areas of the disk
> *hadn't* had ciphertext written to them. Of course, if there exists
> sophisticated analysis of the disk surface that can establish when and
> to what area of a disk writing had taken place, then you'd just have
> to grin and bear that slight information leak.)
The point is not to 'zero' the disk, but to write random patterns of 0's
AND 1's to the disk, where the random patterns have very long periods.
That makes it more difficult to ascertain what is simply random and what
is data that has been encrypted and should "appear" to be random.
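That "appears to be random" property is easy to see for yourself: zeros
written through an encryption layer come out the other side as
ciphertext. A sketch simulating this with openssl against a file (the
dm-crypt equivalent, using the mapping name from the quoted command
above, would be writing /dev/zero through /dev/mapper/home, which lands
on the raw device as ciphertext):

```shell
# Encrypt one megabyte of zeros with a throwaway key; a stream cipher
# (AES-256 in CTR mode) turns them into output that is statistically
# indistinguishable from random data:
head -c 1048576 /dev/zero |
    openssl enc -aes-256-ctr -pbkdf2 -pass pass:throwaway > cipher.bin
# The result contains essentially no runs of zero bytes:
ls -l cipher.bin
rm -f cipher.bin
```

With a strong cipher, an observer examining the disk cannot distinguish
such ciphertext from a genuinely random fill, which is exactly the
property you want.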
One of the problems with weaker crypto algorithms is that the patterns
are not sufficiently random and have periods that are too short,
enabling analysis of the patterns and potentially leading to the
breaking of the ciphertext.
That's one of the reasons why even the approach taken to writing random
data is important to consider.
Using "badblocks" is weaker than "/dev/urandom", which is weaker than
"/dev/random".
The problem is that the latter is impractical for use with very large
hard drives, since it blocks while waiting for the kernel's entropy
pool to refill. And in each case, these are pseudo-random number
generators.
In highly secure applications, other dedicated devices that have been
subjected to intense statistical pattern analysis are used.
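For reference, the three fill methods just discussed look like this,
with /dev/sdX as a placeholder device name; the runnable portion below
demonstrates the /dev/urandom approach against a temporary file rather
than real hardware:

```shell
# Destructive fill options for a raw device (ALL DATA ON IT IS LOST):
#   badblocks -wsv -t random /dev/sdX       # weakest pattern source
#   dd if=/dev/urandom of=/dev/sdX bs=1M    # kernel CSPRNG, non-blocking
#   dd if=/dev/random  of=/dev/sdX bs=1M    # may block waiting for entropy
# Safe demonstration against a temporary file:
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=1M count=4 2>/dev/null
ls -l "$tmp"
rm -f "$tmp"
```

Even /dev/urandom will take many hours on a large drive; the point is
to pick the strongest source whose throughput you can live with.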
And yes, there are sophisticated attacks on the physical media
available, including the use of STMs (scanning tunneling microscopes)
to review atomic-level changes in the magnetic media. That would be a
resource available to TLAs, of course, and not to the general community
of prospective attackers. However, there are other EM-based tools that
would require fewer financial resources to acquire. That is why the
only secure way to protect a discarded hard drive from revealing data
is to physically destroy it.
HTH,
Marc Schwartz
---------------------------------------------------------------------
- http://www.saout.de/misc/dm-crypt/
To unsubscribe, e-mail: dm-crypt-unsubscribe@xxxxxxxx
For additional commands, e-mail: dm-crypt-help@xxxxxxxx