Re: iSCSI GFS

On Mon, 28 Jan 2008, isplist@xxxxxxxxxxxx wrote:

It's pretty simple to set up. You just need to be familiar with iSCSI
tools and software RAID tools, all of which are almost certainly in your
distro's apt/yum repositories.

Figured I would ask. Never know, might be some cool management tools that help
keep an eye on things. The setup sounds simple enough as you say.

Sadly, contrary to what users of some other operating systems may think, you cannot control a complex and flexible system by clicking on pretty pictures. ;)

I need a machine which will become the aggregator: plenty of memory, a
multi-port Ethernet card and of course an FC HBA.
FC storage will be attached to this machine. Then, iSCSI storage targets
will also export to this machine.

Not quite sure I follow this - you want to use FC storage and combine it
with iSCSI storage into a bigger iSCSI storage pool? No reason why not, I
suppose.

I need a relatively small central GFS for the shared data between the servers,
but the rest is for media and such. I'll need to have FC HBAs in every LAMP
server since it needs access to GFS, but the media servers, those that only offload,
don't need access to GFS, so why install an FC HBA in those? Rather, I could
export the FC storage as part of the aggregate volume so that any server can
gain access over iSCSI. Seems that would give me more options.
Am I thinking incorrectly on this?

Sure, that works.

Note that software RAID only goes up to RAID 6 (i.e. n+2). So you cannot
lose more than 2 nodes (FC or iSCSI), otherwise you lose your data.

So basically, can't lose more than one storage chassis.

You can't lose more than 2.
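For illustration, a minimal sketch of what the aggregation might look like with mdadm, assuming /dev/sdb is the local FC LUN and the other devices appeared after logging into the iSCSI targets (all device names, IQNs and addresses here are just placeholders):

  # log into the iSCSI targets so their disks show up as local block devices
  iscsiadm -m discovery -t sendtargets -p 192.168.0.21
  iscsiadm -m node -T iqn.2008-01.com.example:disk1 -p 192.168.0.21 --login

  # build one RAID 6 (n+2) set across the FC and iSCSI block devices
  mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  cat /proc/mdstat

RAID 6 needs at least 4 members, and as above, a third failure takes the whole array with it.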

Since they are all
RAID with hot swap, I should be OK so long as I keep a close eye on it all,
which one needs to do anyhow. That's why I wondered about any software tools that
might help, but I'm sure there's a ton out there that will work.

cat /proc/mdstat is a good one to check every morning. :-)
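If you'd rather not rely on remembering to look, mdadm also has a monitor mode that can mail you when an array degrades; a minimal sketch (the address is just a placeholder):

  # watch all arrays in the background, mail on failure/degraded events
  mdadm --monitor --scan --daemonise --mail=root@example.com

On RHEL/CentOS, putting MAILADDR root in /etc/mdadm.conf and enabling the mdmonitor init script should do much the same.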

yum install iscsi-target

No?! I saw this a long time ago as a new concept, never looked at it since.
Wonderful :).

Actually, on RHEL you'll probably want up2date, but you get the idea.
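In case it helps, the iscsi-target package is typically the iSCSI Enterprise Target (ietd), and exporting the aggregated md device is only a few lines in /etc/ietd.conf; a rough sketch, with the IQN and credentials as placeholders:

  Target iqn.2008-01.com.example:storage.gfs0
      # export the software RAID device as LUN 0
      Lun 0 Path=/dev/md0,Type=blockio
      # optional CHAP authentication
      IncomingUser someuser somesecret

Then restart the target service (service iscsi-target restart, or whatever your init script is called) and the initiators can discover it.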

When a node needs to take over, it fences the other node, connects
the iSCSI shares, starts up the RAID on them, assumes the floating IP and
exports the iSCSI/NFS shares.

So this sounds like the complex part then, because being able to fail over or
switch over seems terribly important to me. If one machine is handling all
this I/O and something happens to it, everything is down until that one
machine is fixed.

This, I would need to find a solution for first. I need to better understand
how I would do this fencing.

You need working fencing support for sane GFS operation anyway. You can do this via DRAC, ILO, switches that allow you to disable a port, UPS that lets you cut off power to a machine, etc. Just something you can use to make a machine stay down when it goes wrong until you can fix it.
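To make the takeover sequence above concrete, a rough manual sketch of what the standby node would run, assuming cman/fenced is already configured and all names and addresses are placeholders:

  # 1. fence the failed aggregator so it cannot touch the storage
  fence_node aggregator1

  # 2. log into the iSCSI backing stores and reassemble the RAID
  iscsiadm -m node --loginall=all
  mdadm --assemble --scan

  # 3. take over the floating service IP and re-export the shares
  ip addr add 192.168.0.50/24 dev eth0
  service iscsi-target start
  exportfs -a

In practice you would wrap those steps in rgmanager/heartbeat scripts rather than running them by hand, but the sequence is the same.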

Gordan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
