Re: Using GFS without a network?

On Wed, 2005-09-07 at 00:57 +0200, Andreas Brosche wrote:

> I've read in Red Hat's docs that it is "not supported" because of 
> performance issues. Multi-initiator buses should comply to SCSI 
> standards, and any SCSI-compliant disk should be able to communicate 
> with the correct controller, if I've interpreted the specs correctly. Of 
> course, you get arbitrary results when using non-compliant hardware... 
> What are other issues with multi-initiator buses, other than performance 
> loss?

Dueling resets.  Some drivers will reset the bus when loaded (some cards
do this when the machine boots, too).  Then, the other initiator's driver
detects a reset, and goes ahead and issues a reset.  So, the first
initiator's driver detects the reset, and goes ahead and issues a reset.

I'm sure you see where this is going.

The important thing is that (IIRC) the number of resets is unbounded.
It could be 1, it could be 20,000.  During this time, none of the
devices on the bus can be accessed.

> > The DLM runs over IP, as does the cluster manager.  Additionally, please
> > remember that GFS requires fencing, and that most fence-devices are
> > IP-enabled.
> 
> Hmm. The whole setup is supposed to physically divide two networks, and 
> nevertheless provide some kind of shared storage for moving data from 
> one network to another. Establishing an ethernet link between the two 
> servers would sort of disrupt the whole concept, which is to prevent 
> *any* network access from outside into the secure part of the network. 
> This is the (strongly simplified) topology:
> 
> mid-secure network -- Server1 -- Storage -- Server2 -- secure Network

Ok, GFS will not work for this.  However, you *can* still use, for
example, a raw device to lock the data, then write out the data directly
to the partition (as long as you don't need file I/O).
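
Roughly, something like the sketch below.  It's only an illustration;
the device path, offset, and 512-byte block size are placeholders for
whatever your shared partition actually looks like:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    int fd;

    /* O_DIRECT needs sector-aligned buffers and offsets */
    if (posix_memalign(&buf, 512, 512))
        return 1;
    memset(buf, 0, 512);
    strcpy(buf, "hello from server1");

    /* placeholder device node for the shared partition */
    fd = open("/dev/sdb1", O_RDWR | O_DIRECT | O_SYNC);
    if (fd < 0)
        return 1;

    /* write sector 0 of the data area; the peer reads it back the
       same way, so neither side goes through the page cache */
    if (pwrite(fd, buf, 512, 0) != 512)
        return 1;

    close(fd);
    free(buf);
    return 0;
}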

You can use a disk-based locking scheme similar to the one found in
Cluster Manager 1.0.x and/or Kimberlite 1.1.x to synchronize access to
the shared partition.
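
I don't have the exact on-disk format in front of me, but the basic
idea is a dedicated lock sector that each node reads, checks, and
rewrites.  A rough sketch of that idea (the layout, magic value, and
node ids below are made up for illustration; they are not the actual
Cluster Manager / Kimberlite format, and a real scheme adds retries,
heartbeat timeouts, and fencing on top, since two nodes can still
race on the read-modify-write):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* invented layout, for illustration only */
struct disk_lock {
    uint32_t magic;    /* marks a valid lock sector           */
    uint32_t holder;   /* node id holding the lock, 0 == free */
};

#define LOCK_MAGIC 0x4c4f434bU

/* one attempt to claim the lock sector for node 'me';
   returns 1 on success, 0 otherwise */
int try_lock(const char *dev, uint32_t me)
{
    void *sec;
    struct disk_lock *l;
    int fd, got = 0;

    if (posix_memalign(&sec, 512, 512))
        return 0;
    fd = open(dev, O_RDWR | O_DIRECT | O_SYNC);
    if (fd < 0) {
        free(sec);
        return 0;
    }

    if (pread(fd, sec, 512, 0) == 512) {
        l = sec;
        if (l->magic != LOCK_MAGIC || l->holder == 0) {
            l->magic = LOCK_MAGIC;
            l->holder = me;
            got = (pwrite(fd, sec, 512, 0) == 512);
        }
    }

    close(fd);
    free(sec);
    return got;
}

int main(void)
{
    /* "/dev/sdb1" and node id 1 are placeholders */
    return try_lock("/dev/sdb1", 1) ? 0 : 1;
}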

If you're using a multi-initiator bus, you can also use SCSI
reservations to synchronize access.
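
For completeness, here's a bare-bones sketch of issuing an old-style
SCSI-2 RESERVE(6) through the SG_IO ioctl.  The device node and timeout
are placeholders, error handling is minimal, and you'd pair it with a
RELEASE(6) (opcode 0x17) when you're done with the shared partition:

#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    /* SCSI-2 RESERVE(6): opcode 0x16, remaining CDB bytes zero */
    unsigned char cdb[6] = { 0x16, 0, 0, 0, 0, 0 };
    unsigned char sense[32];
    sg_io_hdr_t io;
    int fd;

    /* placeholder sg node for the shared disk */
    fd = open("/dev/sg1", O_RDWR);
    if (fd < 0)
        return 1;

    memset(&io, 0, sizeof(io));
    io.interface_id = 'S';
    io.cmd_len = sizeof(cdb);
    io.cmdp = cdb;
    io.dxfer_direction = SG_DXFER_NONE;  /* no data phase */
    io.sbp = sense;
    io.mx_sb_len = sizeof(sense);
    io.timeout = 5000;                   /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0 || io.status != 0) {
        fprintf(stderr, "RESERVE failed (status 0x%x)\n",
                (unsigned)io.status);
        close(fd);
        return 1;
    }

    /* ... the device is now reserved to this initiator; do the
       shared-partition I/O, then send RELEASE(6) the same way ... */

    close(fd);
    return 0;
}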

> A potential attacker could use a possible security flaw in the dlm 
> service (which is bound to the network interface) to gain access to the 
> server on the "secure" side *instantly* when he was able to compromise 
> the server on the mid-secure side (hey, it CAN happen). 

Fair enough.

> You cannot, however, disable *read* caching (which seems to be 
> buried quite deeply into the kernel), which means you actually have to 
> umount and then re-mount (ie, not "mount -o remount") the fs. This means 
> that long transfers could block other users for a long time. And 
> mounting and umounting the same fs over and over again doesn't exactly 
> sound like a good idea... even if it's only mounted ro.

Yup.

> 
> Maykel Moya wrote:
>  > On Mon, 05-09-2005 at 22:52 +0200, Andreas Brosche wrote:
>  > I recently set up something like that. We use an external HP Smart
>  > Array Cluster Storage. It has a separate connection (SCSI cable) to
>  > both hosts.
> 
> So it is not really a shared bus, but a dual bus configuration.

Ah, that's much better =)

-- Lon

--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
