hi,
i'm new to clustered file systems and i have a "should i?"-type question.
i have three boxes - a file server (FOO), and two workstations (DT0 and
DT1). they all run ubuntu (more on why below).
i'm currently working on getting SCST running with my qla2xxx FCP HBAs.
i have a raid unit that presents four arrays, each with one LUN, to
FOO. the LUNs for two of those arrays are also presented to the
desktops - one to DT0 and one to DT1. when the desktops use "their"
arrays, FOO has them unmounted, and vice versa. it works and i get fast
dasd access, but of course it's unworkable over the long run; it's a
stop-gap until SCST is in.
once SCST is running, i plan to have FOO present one LUN each from
three of the raid arrays to the two desktops: array0 will be shared by
FOO and DT0; array1 will be shared by FOO and DT1; and array2 will be
shared by FOO, DT0 and DT1.
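for the curious, here's roughly what i have in mind on the SCST side -
just a sketch, assuming the SCST 2.x scst.conf syntax with the
vdisk_blockio handler; the device paths and WWPNs below are made up:

```
# /etc/scst.conf - hypothetical sketch; paths and WWPNs are placeholders
HANDLER vdisk_blockio {
        DEVICE array0 {
                filename /dev/disk/by-id/scsi-ARRAY0   # shared by FOO and DT0
        }
        DEVICE array1 {
                filename /dev/disk/by-id/scsi-ARRAY1   # shared by FOO and DT1
        }
        DEVICE array2 {
                filename /dev/disk/by-id/scsi-ARRAY2   # shared by FOO, DT0 and DT1
        }
}

TARGET_DRIVER qla2x00t {
        TARGET 50:01:02:03:04:05:06:07 {   # FOO's qla2xxx port, in target mode
                enabled 1

                GROUP dt0 {
                        LUN 0 array0
                        LUN 1 array2
                        INITIATOR 50:0a:0b:0c:0d:0e:0f:00   # DT0's HBA WWPN
                }
                GROUP dt1 {
                        LUN 0 array1
                        LUN 1 array2
                        INITIATOR 50:0a:0b:0c:0d:0e:0f:01   # DT1's HBA WWPN
                }
        }
}
```

the idea being that `scstadmin -config /etc/scst.conf` would load it -
though the exact syntax depends on which SCST version i end up with.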
the scenario for my question is this:
1) SCST is in and running
2) all four of the raid arrays will be available and mounted on FOO
3) three of the four raid arrays will be presented to more than one host
at a time
4) all access to the arrays will be controlled by FOO
5) FOO will see all modified buffers / data going to and from the three
   shared raid arrays. (in effect, this is "DAS through a fileserver",
   not a "SAN fabric".)
so here's my question:
will a "non-clustered" filesystem like ext3 / xfs be sufficient? i
can't use zfs because this is FCP SCSI not iSCSI.
i've looked at OPEN-E, OpenFiler (whose 64-bit version doesn't like HP
DL380 G6's), DataCore and tons more, and they all either fall short or
failed to install.
initially i tried RHEL 5.4 (which is how i got on this mailing list) and
SLES 11, and i'm farther along with ubuntu than with either of those.
i'm not saying anything bad about RHEL/SLES; it's just that the
"learning how to set up a cluster" curve for a debian person is
non-trivial. i still have the SLES 11 config on a bootable partition in
case i can never get SCST working with qlogic HBAs on ubuntu and thus
ubuntu doesn't work out.
i somewhat grok NFS locking, but when it comes to FCP SCSI locking i'm
lost. do i need GFS/GFS2, or will ext3 on a file server be sufficient?
we could certainly go iSCSI (which seems to be the trend), but we've
invested a non-trivial budget in 8gbps FCP and don't feel it's
antiquated just yet.
if anyone could share their advice, i'd appreciate it.
thanks!
yvette hirth
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster