Design flaw in LUKS

Hello,

I've been using and experimenting with cryptsetup for a long while now, but
there are some serious concerns that need to be addressed before it is
suitable for a real "production system".

I've read through the mailing list and I see that people are losing their
data because of the LUKS header. There is no redundancy there, and the
standard answer is always: make backups.

Backups are important, but suppose you have a small or middle-sized company
working with something that needs security and redundancy:

You invest in 2 servers to run a clustered environment in case a machine
goes down.
You invest in a striped and mirrored array (a waste of disks, but often used).
You run a journaled file system.
You run a database that keeps redo logs in case you need to roll back to a
known state.

Everything up to this point is redundant, with no single point of failure
for hardware, disk, file system, or database.

Now it comes to backups. The only time you can really do a backup is at
night, after your busy production day (calculations, whatever). Backups take
time and load the disks and system quite heavily, even if you sync up a
third mirror and back up from there. Backups during production hours just
don't work in real life.

So far so good. But you want to add that extra security by using an
encrypted file system, and you can live with the small CPU and disk overhead
it produces. The problem then arises with LUKS. It keeps one small piece of
metadata written at the beginning of the disk, and if that is damaged, all
access to the partition is lost. This is a single point of failure,
something you can never have on a real production system.

If your computer fries, your environment switches to the other clustered
machine.
If your disk fails, you have a mirror and can continue working until the
disk is replaced.
If your file system fails, you can at least try a copy of the superblock or
replay the journal.
If the database fails, you can roll forward with the logs the database
produces.
But if your LUKS header gets corrupted, you lose everything.

Of course, you can go back to your backup, but that means you lose a whole
production day and have to redo it, with all the problems that will arise
(probably more than just losing one day's work, in my experience).

I haven't seen any real way to back up critical metadata such as the LUKS
header. Without the option to freely duplicate and restore the header, or
alternative ways to recover a broken one, LUKS will never be seriously
production-ready, and that means companies will never deploy it on a real
system. They will have to use other commercial (read: EXPENSIVE)
cryptographic solutions, and are often forced to switch to some proprietary
OS to get the benefit of total redundancy. If the company is small, this
will be impossible, because they will not be able to afford it.

The solution is easy to implement: a proper utility to save the header to a
file, additional on-disk copies of the header, integrity checks to detect
when the header is damaged, and more recovery options. Since the key
material in the header is itself encrypted, it doesn't matter that it's
written to a file; anyone can already read it off the disk.
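As a rough sketch of the kind of utility I mean (this assumes all the LUKS
metadata, header plus key slots, sits in the first 2 MiB of the device; the
device path, file name, and that size are placeholders of mine, and the real
size would have to be taken from the payload offset that `cryptsetup
luksDump` reports):

```shell
# Sketch only: copy the on-disk metadata region out with dd, and write it
# back after corruption. DEV, HDR and the 2 MiB count are assumptions.
DEV=/dev/sdX1
HDR=luks-header-backup.img

# Save the metadata region to a file:
dd if="$DEV" of="$HDR" bs=1M count=2

# Later, write it back over a damaged header (conv=notrunc so dd does not
# truncate the target when it is a regular file or image):
dd if="$HDR" of="$DEV" bs=1M count=2 conv=notrunc
```

A real utility would of course verify checksums before restoring instead of
blindly copying bytes, but even this much would turn a lost day into a
one-minute repair.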

That would be the first step toward securing the LUKS implementation, with
more recovery options in later versions. Backups are a necessity, but
restoring an 8 TB array from backup just because a few bytes were altered is
not the best way.

Regards,
Paul T
