Re: The future of disk encryption with LUKS2

On 08.02.2016 at 04:43, f-dm-c@xxxxxxxxxxxxx wrote:
> > Date: Mon, 8 Feb 2016 03:46:27 +0100
> > From: Sven Eschenberg <sven@xxxxxxxxxxxxxxxxxxxxx>
>
> > If a sector fails, it is not that uncommon that a whole chunk of
> > consecutive sectors fail (for rotating disks that is).

> Oh, come on.  A one-meg gap is 256 4K sectors and 1024 1K sectors.
>
> I've never seen anything take out more than a handful of sectors
> adjacent to each other unless the disk has completely failed.
> Anything that's chewing up multiple megs or tens of megs at the start
> of your FS is likely to destroy any other random parts of it as well.
>
> Okay, how about a -10- meg gap?  That enough?

Well, I've seen several thousand adjacent sectors go bad. And not just once.
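
Just to put numbers on that (a quick back-of-the-envelope sketch, assuming 512-byte sectors; the two offsets are only the ones discussed here, not a proposal):

# Back-of-the-envelope: how long a run of consecutive bad 512-byte
# sectors, starting at the very front of the disk, a backup header
# at a given offset would survive.
SECTOR = 512
MIB = 1024 * 1024

for offset_mib in (1, 10):
    sectors = offset_mib * MIB // SECTOR
    print(f"header copy at {offset_mib} MiB survives a run of up to "
          f"{sectors} consecutive bad sectors from the front")

A run of "several thousand" bad sectors already reaches past a 1 MiB offset, but not past 10 MiB.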

As I pointed out, creating a filesystem can easily destroy both headers, even though many FSes have a rather thin metadata structure. Another neat example is mdadm - by default the header sits at 4k (so the primary header will be damaged), followed by a bad block list and a write-intent bitmap. The sizes of those can vary afaik.

To be honest, I am not completely sure what a good offset would be.
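
For illustration, a toy sketch of the kind of overlap check you'd have to do - the region sizes below are guesses on my part (v1.2-style superblock at 4 KiB, bad block log and write-intent bitmap of assumed sizes, a ~2 MiB header-plus-keyslot area), not authoritative numbers:

# Toy overlap check: does foreign metadata written near the front of
# the device land inside a LUKS header placed at a given offset?
KIB = 1024
MIB = 1024 * KIB

# (name, start, length) -- assumed values, for illustration only
foreign = [
    ("md superblock (v1.2)",   4 * KIB,  4 * KIB),
    ("md bad block log",       8 * KIB,  4 * KIB),   # size assumed
    ("md write-intent bitmap", 12 * KIB, 64 * KIB),  # size assumed, varies
]

def overlaps(hdr_start, hdr_len):
    hdr_end = hdr_start + hdr_len
    return [name for name, start, length in foreign
            if start < hdr_end and hdr_start < start + length]

print(overlaps(0, 2 * MIB))         # primary header at offset 0: hit by all
print(overlaps(10 * MIB, 2 * MIB))  # a copy at 10 MiB: missed by these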


> If you need resiliency from massive corruption like that, use a header
> backup -on other media-, and -also- an actual backup of the FS.

Of course, that's what I usually do anyway. But we'll have to consider the average user as well: someone with a normal desktop, using a magic mumbo-jumbo installer that sets up LVM on top of dm-crypt. It's not about covering your use case or mine, but rather those of as many users as possible, without sacrificing e.g. security in the process.


> Complicating LUKS to the point where resizing becomes fraught and
> difficult to handle and other tools need all kinds of special
> instructions to solve a problem where the disk is already in severe
> distress or something's written tens of megs of garbage all over it
> seems pointless.

Of course overcomplicating things is not an option. But you should always remember: a damaged header is a total loss, while a damaged fs can quite often be recovered easily enough. And other tools need 'special instructions' for every fs, dm layer and task anyway. There's no generic resize() IOCTL that covers all your block layers.
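
To make that concrete (a small sketch, Python only for illustration): querying the size of a block device is generic, resizing it is not.

import fcntl, struct

BLKGETSIZE64 = 0x80081272  # from <linux/fs.h>, 64-bit Linux

def device_size(path):
    """Size in bytes of any block device -- this direction is generic."""
    with open(path, "rb") as dev:
        buf = bytearray(8)
        fcntl.ioctl(dev, BLKGETSIZE64, buf)
        return struct.unpack("Q", buf)[0]

# The reverse direction is not: growing the LVM-on-dm-crypt stack from
# above means calling the right tool per layer, in the right order,
# e.g. cryptsetup resize, then pvresize/lvresize, then resize2fs.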


> The (potentially solvable) problem we've seen most on this list is not
> massive disk failure, but OS's that decide to overwrite a sector or
> two near the front.  So maybe we'll be extravagant and use 10 megs of
> clear space between the two copies---that's still absolutely in the
> noise on any reasonable disk, while being dead simple to implement,
> does not require any knowledge of the ultimate container size, does
> not require motion if the size changes, and will withstand almost any
> conceivable failure except someone doing "dd if=/dev/zero of=part" and
> then not noticing until a minute later---at which point, it's time to
> go to the backups anyway.  And it doesn't involve hairing up the
> options to enable/disable/move around/dance a jig with where the
> backup header is stored.  Keep it simple.

I don't mind keeping it simple. A really simple and secure approach was already mentioned: you have a backup anyway, so just recreate the container, pull the backup back in, and you are done ;-). Resizing (growing) is just a convenience.
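
For the average-user case that routine can even be scripted; a minimal sketch, assuming cryptsetup's luksHeaderBackup/luksHeaderRestore commands and placeholder device/file names:

import subprocess

def backup_header(device, backup_file):
    # Dump the LUKS header (plus keyslot area) to a file on other media.
    subprocess.run(["cryptsetup", "luksHeaderBackup", device,
                    "--header-backup-file", backup_file], check=True)

def restore_header(device, backup_file):
    # Pull the header back once the on-disk copy got clobbered.
    subprocess.run(["cryptsetup", "luksHeaderRestore", device,
                    "--header-backup-file", backup_file], check=True)

# backup_header("/dev/sdX2", "/mnt/usb/sdX2-luks-header.img")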

Regards

-Sven


_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt


