Re: Prepare SSD for encrypted linux install

On 11/08/17 09:36, Merlin Büge wrote:
I want to use an SSD (Samsung 850 PRO 512GB) for a fully encrypted Linux
system. I've read the cryptsetup FAQ and various posts on the
internet, and I'm familiar with the common problems/pitfalls regarding
dm-crypt on SSDs.

To avoid information leakage about the storage device's usage patterns,
it is generally recommended to fill the entire device with random data
before setting up encryption. It is also recommended to issue an 'ATA
secure erase' to an SSD before using it, to avoid performance issues.

But when doing both of these things, either my (1) random data gets
'deleted' by the (2) 'ATA secure erase' (the SSD will report all zeros),
or I end up with degraded performance when (1) issuing the 'ATA secure
erase' before (2) putting random data on it.

I thought of TRIMing the SSD via 'blkdiscard' instead of using
'ATA secure erase' after putting random data on it (twice, see [0]),
but that should make no difference, since the SSD will most probably
report all zeros for TRIMed sectors. Either way, the flash chips will
contain all random data (making it impossible to distinguish encrypted
data from free space) but the drive controller will still report all
zeros for the entire SSD (making it possible to distinguish encrypted
data from free space).

(Note: I'm assuming that an 'ATA secure erase' does not actually empty
the flash cells, but merely changes the internal encryption key. I'm not
sure on this, but it doesn't really matter.)

Any solution/thoughts on this?

Choosing among randomizing, secure erase, trim, zerofree, etc. is a matter of balancing conflicting goals -- security, performance, longevity, maintenance, and so on.


Filling an SSD with random bytes and then doing a secure erase will produce nearly the same result as doing a secure erase alone (a drive full of zeros), but wastes one erase/write cycle of the cells written.


Doing a secure erase and then filling an SSD with random bytes will produce nearly the same result as filling the drive with random bytes alone (a drive full of random bytes), but wastes one erase/write cycle of all drive storage cells and limits the available erased cells to those kept in reserve by the SSD controller (see below).
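
For reference, here is a minimal sketch of the two operations discussed above, assuming a hypothetical device /dev/sdX, a throwaway password "Eins", and hdparm/cryptsetup versions recent enough to support these options (check that the drive is not security-frozen first):

    # ATA secure erase (verify "not frozen" in the Security section first)
    hdparm -I /dev/sdX
    hdparm --user-master u --security-set-pass Eins /dev/sdX
    hdparm --user-master u --security-erase Eins /dev/sdX

    # Fill the device with random-looking data via a temporary plain
    # dm-crypt mapping keyed from /dev/urandom (much faster than reading
    # /dev/urandom directly)
    cryptsetup open --type plain -d /dev/urandom /dev/sdX to_be_wiped
    dd if=/dev/zero of=/dev/mapper/to_be_wiped bs=1M status=progress
    cryptsetup close to_be_wiped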


Filling unused blocks with random data is a form of steganography [1], which some believe makes brute-force attacks harder. (Finding weaknesses in the underlying science or technology makes brute-force attacks easier.) But it is my understanding that successful brute-force attacks are uncommon -- most successful attacks involve obtaining the passphrase (through psychology, espionage, interrogation, blackmail, extortion, torture, etc.).


SSDs require erased cells to write data. The SSD controller maps drive blocks to cells. There are more cells than there are drive blocks; the difference is hidden from your operating system by the controller. This allows the controller to erase dirty cells in the background and maintain a pool of erased cells for future writes. If your operating system is doing a lot of writes and the pool of erased cells runs out, writes will stall while dirty cells are erased. This can cause operating system write timeout failures, and can easily snowball if the device is a system drive and/or swap device. If your workload involves heavy writes and/or swapping, use devices separate from your system drive and consider "over-provisioning" [2].
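
As a rough illustration of manual over-provisioning (again, /dev/sdX is a placeholder and the percentage is a matter of taste): discard the whole device so the controller sees every cell as free, then deliberately leave part of it unpartitioned as extra spare area:

    blkdiscard /dev/sdX                          # mark all blocks as unused/trimmed
    parted -s /dev/sdX mklabel msdos             # new MBR partition table
    parted -s /dev/sdX mkpart primary 1MiB 90%   # leave ~10% of the drive unallocated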


Cells are rated for a finite number of erase/write cycles, so it is best to conserve them.


Drive imaging is a useful technique for system provisioning and disaster preparedness/recovery. Trimmed blocks read as zeros, which compress nicely, saving time and storage.
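
A bare-bones imaging sketch along those lines, with placeholder device and paths (trim from the running system, then image from rescue media so the source is not mounted):

    fstrim -av                                   # trim all mounted filesystems that support it
    dd if=/dev/sdX bs=1M status=progress | gzip -c > /backup/sdX.img.gz
    # restore later with:
    # gunzip -c /backup/sdX.img.gz | dd of=/dev/sdX bs=1M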


I use Debian 9 GNU/Linux for my SOHO network, as described below.


For my system drives:

1.  I prefer 16~60 GB SSDs for workstations. For headless servers, USB 3.0 flash drives are a cheap alternative.

2.  I back up the data and/or take an image.

3.  If the device supports secure erase and I have the right tool, I erase the device.

4.  I use the Debian installer:

    a.  MBR partition table/boot block (default).

    b.  ~1 GB partition with btrfs for /boot.

    c.  ~1 GB partition with LUKS (random key) for swap. If the device does not support trim, I randomize the partition. (A crypttab sketch follows this list.)

    d.  10~12 GB partition with LUKS (passphrase) and btrfs for root. If the device does not support trim, I randomize the partition.
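
For what it's worth, the random-key swap in item (c) can be expressed in Debian's /etc/crypttab roughly like this (the partition /dev/sda3 and the name "cswap" are only examples; a /dev/disk/by-id or by-partuuid path is safer than a bare device name):

    # /etc/crypttab -- swap is re-keyed with a fresh random key at every boot
    cswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

    # /etc/fstab -- use the mapped device as swap
    /dev/mapper/cswap  none  swap  sw  0  0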


I was using 1.5 TB HDDs with LUKS and btrfs for data drives. I believe I created them with Debian 7. I have experienced weirdness using these drives with Debian 9. The btrfs on my Debian 9 system drives was created by Debian 9, and I haven't seen any weirdness there yet, but I'm having doubts about btrfs.


I recently migrated the data on my primary file server to two 1.5 TB HDDs with mdadm RAID 1, LUKS, and ext4. So far, it has been very stable.
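
A minimal sketch of that stack, assuming placeholder devices /dev/sdb1 and /dev/sdc1 and an arbitrary mapping name "data":

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    cryptsetup luksFormat /dev/md0               # LUKS on top of the RAID 1 array
    cryptsetup open /dev/md0 data
    mkfs.ext4 /dev/mapper/data
    mount /dev/mapper/data /srv/data             # mount point is an example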


My backups, archives, and images are on 3 TB HDDs with GPT, LUKS, and ext4. The drives are mounted in mobile docks and I use a rotation scheme. These have also been very stable.
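
Preparing one of those rotation drives looks roughly like this (again, /dev/sdX and the mapping name "backup" are placeholders):

    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart backup ext4 1MiB 100%
    cryptsetup luksFormat /dev/sdX1
    cryptsetup open /dev/sdX1 backup
    mkfs.ext4 /dev/mapper/backup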


HTH,

David


References:

[1] https://en.wikipedia.org/wiki/Steganography

[2] https://duckduckgo.com/?q=ssd+over+provisioning&t=ffsb&ia=web



