Re: Sharing encrypted block devices, for GFS2 over iSCSI?


 



On Sat, Apr 24, 2010 at 06:41:30PM -0400, Ryan Lynch wrote:
> On Sat, Apr 24, 2010 at 11:55, Arno Wagner <arno@xxxxxxxxxxx> wrote:
> > On Fri, Apr 23, 2010 at 05:20:05PM -0400, Ryan Lynch wrote:
> >> Would it be possible to run a shared-disk, clustered filesystem over a
> >> dm-crypt block device, which in turn runs on a shared iSCSI device?
> >> I'd be interested in knowing if anyone has tried, or has theoretical
> >> knowledge of why it would (not) work.
> 
> > As I said, GFS2 does not care whether the block device it
> > runs on is directly hardware or in any way transformed
> > by LVM/dm-crypt, whatever. The only practical experience
> > I have is RAID1/5/6 -> dm-crypt -> ext3 -> NFS export. I have not
> > noticed any additional problems there in several years of
> > doing it. There are of course the individual problems of
> > the layers. E.g. slow access because files are heavily
> > fragmented (don't ask) or NFS (old version) being unreliable
> > when not used with TCP.
> 
> This seems logical, today. After a good night's sleep, and considering
> your response, the source of my trepidation is a little clearer to me.

Good. 

> I think I should be asking two specific questions:
> 
>     1) Do dm-crypt's individual cipher chains correspond to disk
> sectors? To put that another way, can a single chain encompass a
> larger area of disk? If larger chains are possible, can a cipher chain
> ever cross outside the boundaries of GFS2's locking granularity?

Should all be 512-byte sectors, as these are the smallest
addressable units when talking to disks. All larger units
come from the filesystem, i.e. above dm-crypt. The dm-crypt
documentation also stipulates 512-byte sectors.
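To illustrate that per-sector independence, here is a sketch of how the
common "plain64" IV mode ties each 512-byte sector to its own IV (the
exact IV mode depends on how the mapping was created -- plain, plain64,
essiv, etc.; this is illustrative only, the real work happens in the
kernel):

```python
# Sketch: dm-crypt's "plain64" IV mode gives each 512-byte sector its
# own IV, so sectors encrypt and decrypt independently of each other.
SECTOR_SIZE = 512  # smallest addressable unit on classic disks

def plain64_iv(sector_number: int) -> bytes:
    """IV for a sector: the 64-bit sector number, little-endian,
    zero-padded to the cipher's 16-byte (AES) block size."""
    return sector_number.to_bytes(8, "little") + b"\x00" * 8

def sector_of(byte_offset: int) -> int:
    """Which sector a byte offset on the device falls into."""
    return byte_offset // SECTOR_SIZE

# Adjacent sectors get distinct IVs, so identical plaintext sectors
# do not produce identical ciphertext.
assert plain64_iv(0) != plain64_iv(1)
print(plain64_iv(1).hex())  # 01000000000000000000000000000000
```

Since no ciphertext unit ever spans more than one 512-byte sector,
nothing at the dm-crypt layer can cross GFS2's locking granularity,
which is always a multiple of that.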
 
>     2) Is the LUKS/dm-crypt "mounting" (?) attachment process
> read-only and immune to concurrency problems? (Sorry, I'm not sure of
> the right terminology, here--"mount" is wrong, there's no FS involved,
> but what is the process of setting up the virtual block device
> called?)

I would propose to call the process "mapping".

AFAIK a plain dm-crypt mapping involves no disk access at all, not
even a read, as there is no metadata on disk. (Well, to be exact,
the partition table gets read if you map a partition.) I am not
sure about LUKS.
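For contrast, LUKS does keep metadata on disk: a header at the start of
the device holding magic bytes, a version, the cipher spec and the key
slots. A minimal sketch of reading the leading LUKS1 header fields
(offsets per the LUKS1 on-disk format; the buffer below is a fabricated
example, not a real header, and contains no key material):

```python
import struct

LUKS_MAGIC = b"LUKS\xba\xbe"  # first 6 bytes of every LUKS header

def parse_luks1_header(buf: bytes):
    """Parse the leading fields of a LUKS1 header: magic, version,
    cipher name and cipher mode (offsets per the LUKS1 format)."""
    if buf[0:6] != LUKS_MAGIC:
        return None  # plain dm-crypt has no such on-disk metadata
    version = struct.unpack_from(">H", buf, 6)[0]
    cipher_name = buf[8:40].split(b"\x00", 1)[0].decode()
    cipher_mode = buf[40:72].split(b"\x00", 1)[0].decode()
    return {"version": version, "cipher": cipher_name, "mode": cipher_mode}

# Fabricated example header -- NOT a real device dump.
hdr = bytearray(592)
hdr[0:6] = LUKS_MAGIC
struct.pack_into(">H", hdr, 6, 1)                   # version 1
hdr[8:8 + 3] = b"aes"                               # cipher name
hdr[40:40 + 11] = b"xts-plain64"                    # cipher mode
print(parse_luks1_header(bytes(hdr)))
# {'version': 1, 'cipher': 'aes', 'mode': 'xts-plain64'}
```

So a LUKS attach does read from disk, but read-only: the header is
parsed, the passphrase unlocks a key slot, and nothing is written, which
is why concurrent attaches from several hosts should not conflict.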

> Based on what I've read, so far, I believe there's no problem WRT
> question #1. But I really don't know about question #2.

LUKS could be a problem with regard to #2. I agree on #1.

> > You can get pretty bad "synergies" even locally, though.
> > Once I had XFS on a Linux software RAID5 and they interacted
> > so badly when both did a resync/FS check that it would have
> > taken months to finish. Basically both seemed to work, then
> > backed off, then tried again, backed off.... The RAID resync
> > speed was around 10 kB/sec.
> 
> I'm prepared for a pretty big performance hit. For my application, 2x
> hosts at >= 1 MB/sec, each, for relatively local file reads/writes w/o
> any lock contention is fine. Against a SATA RAID array over a single
> GigE switch, I can do 100x that. I haven't even really started to
> worry about this issue, yet.

Sounds sensible to me with requirements this low. My intuition is
that this should work with a comfortable margin, even if there
are some performance problems.

> > I think unless somebody has the exact same configuration and
> > workload as you do, you really need to try it out.
> 
> I'll be happy if it works and passes some basic tests, but I don't
> have the knowledge or resources to exhaustively test it. Someone more
> knowledgeable would have to tell me whether there are any hidden
> cryptographic risks, or potential crash/data loss bugs on uncommon
> corner cases that my tests don't cover.

There are no crypto-risks that I can see. There should also not be 
crash or data-loss issues (other than the combined ones of the 
layers). The only risk I see is potentially low performance, but
even that may not be a problem for your requirements.

One data-loss risk I see is that dm-crypt does not "sync" until
unmapped (I found this out when zeroing a new dm-crypt partition:
the start of the partition only got overwritten after unmapping).
So for a hard sync, where you need to be sure the data is on disk,
you would need to remove the mapping. But this is already
problematic with modern disks in a power-loss scenario; you do
not really get write-through these days. If you force disk
flushes by unmounting (the only reliable way these days, it
seems), remember to unmap dm-crypt as well.
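A minimal sketch of that flush-then-unmap order, with a regular file
standing in for the mapped device (writing to a real /dev/mapper/<name>
would need root; fsync is the generic way to push dirty pages toward
stable storage, and the commands in the comments are illustrative):

```python
import os
import tempfile

# Sketch: push data out of the page cache with fsync before tearing
# down a mapping. A temp file stands in for /dev/mapper/<name>.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)  # e.g. zeroing the start of the "device"
os.fsync(fd)                  # flush dirty pages down to the lower layer
os.close(fd)
# ...only now unmap (e.g. "cryptsetup remove <name>"); note the disk's
# own write cache can still delay the physical write on power loss.
os.remove(path)
```

The fsync only guarantees the data has left the page cache; as noted
above, whether it actually reaches the platters before power loss is a
separate problem with modern disks.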

Arno
-- 
Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: arno@xxxxxxxxxxx 
GnuPG:  ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C  0296 797F 6B50 1E25 338F
----
Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier 
_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt
