D Canfield wrote:
I'm just looking for a bit of general advice about GFS... We're
basically just looking to use it as a SAN-based replacement for NFS.
We've got a handful of servers that need constant read/write access to
our users' home directories (Samba PDC/BDC, web server, network
terminal servers, etc.), and we thought GFS might be a good
replacement from a performance and security standpoint, let alone
removing the SPOF of our main NFS/file server.
That should be OK, IMO.
We're still using GFS 5.2 (I think) on our small webserver farm, and it's
amazing how much performance you can squeeze out of Tualatin CPUs and an
aging disk array.
We're looking to move it to GFS 6.1 on newer hardware.
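For what it's worth, once the cluster infrastructure (ccsd/cman/fenced) is up, making and mounting a GFS 6.1 filesystem is only a couple of commands. A rough sketch - the cluster name "webfarm", filesystem name "home", journal count, and LV path below are all placeholders, not anything from a real setup:

```shell
# Make a GFS filesystem using DLM locking. The -t argument is
# <clustername>:<fsname> and must match the name in cluster.conf;
# -j allocates one journal per node that will mount the filesystem.
gfs_mkfs -p lock_dlm -t webfarm:home -j 4 /dev/vg01/home

# Then mount it like any other filesystem type:
mount -t gfs /dev/vg01/home /home
```

Note the journal count is fixed at mkfs time, so size -j for the largest number of nodes you expect to add.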
Another place we're thinking of using it is underneath our mail
servers, so that as we grow, SMTP deliveries (and virus scanning) can
happen on one machine while IMAP/POP connections can be served through
another.
That probably depends on your MTA.
You should, IMO, not deploy it without some real-world test results.
GFS + lots of small files <=> possible nightmare.
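To put some numbers behind the small-file concern, even a crude micro-benchmark run against the shared mount tells you more than any datasheet. A minimal sketch (the target directory is an assumption - point it at a directory on your GFS mount, and run it concurrently from two nodes to see the lock contention):

```python
import os
import sys
import time

def small_file_bench(directory, count=1000, size=4096):
    """Create, stat, and unlink `count` small files, timing each phase.

    On a cluster filesystem, creates and unlinks in a shared directory
    exercise the locking layer far harder than streaming I/O does.
    """
    payload = b"x" * size
    timings = {}

    start = time.time()
    for i in range(count):
        with open(os.path.join(directory, "bench.%d" % i), "wb") as f:
            f.write(payload)
    timings["create"] = time.time() - start

    start = time.time()
    for i in range(count):
        os.stat(os.path.join(directory, "bench.%d" % i))
    timings["stat"] = time.time() - start

    start = time.time()
    for i in range(count):
        os.unlink(os.path.join(directory, "bench.%d" % i))
    timings["unlink"] = time.time() - start

    return timings

if __name__ == "__main__":
    # Usage: python small_file_bench.py /path/to/gfs/mountpoint
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for phase, secs in small_file_bench(target).items():
        print("%-8s %8.3f s" % (phase, secs))
```

Comparing the same run on local disk, NFS, and GFS - and then with two nodes hammering the same directory - should make the decision for you.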
Unfortunately, even at academic prices, Red Hat wants more per single
GFS node than I'm paying for twenty AS licenses, so I've been heading
down this road by building from the SRPMS. I mostly have a 2-node
test cluster built under RHEL4, but a number of things have me a
little bit hesitant to move forward, so I'm wondering if some folks
can offer some advice.
For starters, is my intended use even appropriate for GFS? It does
seem as though I'm looking to put an awful lot of overhead (with the
cluster management suite) onto these boxes just to eliminate a SPOF.
Indeed. And unless the storage itself is mirrored, that's still a SPOF ;-)
But on the other hand, it enables some things NFS can't do.
Another concern is that this list seems to have a lot more questions
posted than answers. Are folks running into situations where
filesystems are hopelessly corrupted or that they've been unable to
recover from? That's the impression I feel like I'm getting, but I
suppose a newbie to Linux in general could get the same impression
from reading the fedora lists out of context. The last thing I want
to do is put something into production and then have unexplained
fencing occurrences or filesystem errors.
A support contract should deal with these.
I suppose this list is not a replacement for a support contract - merely
a feedback channel for the developers.
Finally, Red Hat sales is laying it on pretty heavily that the reason
GFS pricing is so high is that it's nearly impossible to
install it yourself. That was particularly true before GFS landed in
Fedora. Now the claim is just that it's very difficult to manage
without a support contract. Is this just marketing, or does GFS
really turn out to be a nightmare to maintain?
From my (limited) exposure - I haven't done too much with it in the
sense of experimenting and tinkering - I'd say that it requires an awful
lot of knowledge to really "master" it.
One may get it working with a tutorial and the mailing list, but the
technology behind it is much more complex than your average NFS setup.
You should ask yourself: "In case of an alert @ 3am - can I deliver a
solution?".
If the answer is no.....
Having a test setup that is similar (or equal) to the production system
should also help tremendously in avoiding any silly "mishaps".
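For a test setup like that on RHEL4, the whole cluster definition lives in /etc/cluster/cluster.conf. A hedged sketch of a minimal two-node lab config - the cluster and node names are invented, and fence_manual is only acceptable for experimenting, never in production:

```xml
<?xml version="1.0"?>
<cluster name="gfstest" config_version="1">
  <!-- two_node mode lets a 2-node cluster keep quorum with one vote -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- manual fencing: a human must acknowledge before recovery -->
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>
```

Swapping fence_manual for a real fence agent (power switch, iLO, etc.) is exactly the kind of thing you want to have rehearsed on the test cluster before the 3am page arrives.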
Any insights people could provide would be appreciated.
I think GFS does have its merits (we really need to run some tests with
6.1 next year), but only in cases where the number of concurrent (write)
accesses in the same directory is small.
Otherwise, the overhead (at least with 6.0) is no longer worth the
effort.
cheers,
Rainer
--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster