Re: Using GFS without a network?


 



From: Steve Wilcox <spwilcox@xxxxxxx>
> On Wed, 2005-09-07 at 00:57 +0200, Andreas Brosche wrote:
[multi-initiated SCSI issues]
> All tested configurations were equally crash-happy due to the bus 
> resets.  
[...]
> Calling these
> configurations simply "not supported" is an understatement - this type
> of config is guaranteed trouble. 

OK, thank you for sharing your experiences. It definitely sounds like we're
not going to use this setup. Maybe these issues should find their way into the
GFS documentation, since multi-initiator busses *should* be standards
compliant. A simple "it doesn't work", which is basically what it says now, is
not enough, IMHO.

From: Axel Thimm <Axel.Thimm@xxxxxxxxxx>
> > Hmm. The whole setup is supposed to physically divide two networks, and
> > nevertheless provide some kind of shared storage for moving data from 
> > one network to another.
[...]
> > This is the (strongly simplified) topology:
> > 
> > mid-secure network -- Server1 -- Storage -- Server2 -- secure Network
> > 
> > A potential attacker could use a possible security flaw in the dlm 
> > service (which is bound to the network interface) to gain access to the 
> > server on the "secure" side *instantly* when he was able to compromise 
> > the server on the mid-secure side (hey, it CAN happen). If any sort of 
> > shared storage can be installed *without* any ethernet link or - 
> > ideally - any sort of inter-server communication, there is a way to 
> > *prove* that an attacker cannot establish any kind of connection into 
> > the secure net (some risks remain, but they have nothing to do with the
> > physical connection).
> 
> If you are paranoid like that and consider that even if you could do
> away with dlm and IP connectivity, then
> 
> o an attacker on the mid-secure network could alter files that the
>   secure network accesses and gain privileges that way.

Data corruption is not really an issue - the only place an attacker could
gain privileges through altered files is on the system where the data is
actually processed (which is, in fact, possible - think of viruses in
multimedia files, or MS Word macro viruses). The only processing Server2 does
is transferring the data into the secure network. As most files in the secure
network will be documents, we'll have to keep our word processing software up
to date. But attacks embedded in the data itself are an issue we'd have to
deal with no matter what the transport medium is.

> o an attacker can exploit potential bugs in GFS's code, just as well
>   as in dlm's, and having physical access to the Server 2's journals
>   is probably more harmful than trying to hack through dlm's API
>   calls.

Sure, the possibility of bugs in GFS was also among my considerations. If
there is in fact a security flaw in the sync code, harmful code could be
injected either way, granted... it wouldn't make much of a difference whether
the code is injected via the disk or via a network service...

> There is no way to "prove" what you want. Just go for second best to
> the ideal theorem. You probably don't want GFS, but a hardened NFS
> connection to the storage allocated within the secure network only.

So you would set up only one (hardened) server between the two networks? I'd
really rather have a solution without the technical ability to set up any
kind of tunnel that allows data to be read *from* the secure network. IP over
storage might be possible, but the counterpart in the secure network would
have to interpret it, so some kind of trojan would have to be injected into
the data first. For an attacker, that situation is the same no matter how the
data gets into the network. With a single server connected to both networks,
however, the situation is far easier for the attacker, as it offers a much
more elegant way of setting up a tunnel.

What the whole setup is supposed to ensure is that an attacker who manages
to get into Server1 has no immediate connection to the secure network (which
he would have with a shared NFS server with, say, two ethernet devices).

> Axel.Thimm at ATrpms.net

Thank you both for your ideas and experiences; I'll look into the
possibilities of hardening network filesystems. It looks like I'll discard
the shared-bus idea completely. I'm still going to fiddle a bit with it and
test when the data gets corrupted, but I'm not going to waste too much time
on it.
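For what it's worth, a corruption smoke test can be as simple as writing a
known pattern and comparing checksums from both nodes. This is only a sketch:
the path and hosts are hypothetical, and in a real run TESTDIR would sit on
the shared GFS mount, with the second md5sum executed on the other node.

```shell
# Corruption smoke test (sketch). TESTDIR stands in for a directory on
# the shared mount; the second checksum would really run on the other node.
TESTDIR="${TESTDIR:-$(mktemp -d)}"
dd if=/dev/urandom of="$TESTDIR/testfile" bs=1M count=16 2>/dev/null
sum_written=$(md5sum "$TESTDIR/testfile" | cut -d' ' -f1)
# ... on the other node, after the write completes:
sum_read=$(md5sum "$TESTDIR/testfile" | cut -d' ' -f1)
if [ "$sum_written" = "$sum_read" ]; then
    echo "checksums match"
else
    echo "CORRUPTION: $sum_written != $sum_read"
fi
```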

As GFS is designed as a file system shared between equal nodes of a cluster,
I guess it really is not the file system of choice for our needs. An NFS
solution sounds less insane. I'll think the whole thing over again.
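If we do go the hardened-NFS route, the exports could at least enforce the
one-way data flow at the server. A minimal sketch (hostnames and paths are
hypothetical, and this obviously doesn't replace network-level hardening):

```
# /etc/exports on the intermediate server (hypothetical hosts/paths)
# mid-secure side may only write into a drop directory:
/export/drop    server1.midsec.example(rw,sync,root_squash,no_subtree_check)
# secure side mounts the same directory read-only:
/export/drop    server2.secure.example(ro,sync,root_squash,no_subtree_check)
```

The point of the split is that even a compromised Server1 only ever sees the
drop directory, never anything exported to the secure side.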

Regards,

Andreas


--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
