Flavio Junior wrote:
> On Fri, Jul 3, 2009 at 4:40 PM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>> Sounds like you are running into the same bug that I ran into with
>> GFS2 on a similar setup nearly 2 years ago, except I could produce a
>> lock-up in under 2 seconds every time. The solution is to use GFS1 if
>> you really want to stick with that setup, but bear in mind that,
>> regardless of the cluster file system (GFS1, GFS2, OCFS2), the
>> performance will scale _inversely_. Cluster file systems really
>> don't work well with millions of small files.
> Hi Gordan, thanks for the answer.
> But if it was "possible" to solve (as it was with GFS1), why is it
> not feasible for GFS2?
1) Performance will suck regardless of whether it's GFS1 or GFS2. It's
fine for 10-20 users, but if you have 10,000-20,000 users, it will grind
to a halt.
2) GFS2 clearly still isn't stable enough if this sort of crash
still happens.
> Well, no problem at all to migrate to GFS1; actually, I've already
> thought about it, but all those GFS1 tuning options and tests make me
> a bit apprehensive.
GFS1 doesn't have any more tuning options than GFS2 that I can think of.
And besides, in practice, if the performance isn't in the right ball
park out of the box, no amount of tweaking will help. Just about the
only thing that makes a significant difference is the noatime mount
option. I wouldn't bother with the rest unless you really need those
last few percent.
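
For reference, noatime is just a mount option, so there is nothing to
recompile or reconfigure. A minimal sketch, assuming a GFS1 volume on
/dev/vg0/gfs mounted at /mnt/gfs (both names are placeholders, adjust
to your setup):

    # one-off mount with noatime (placeholder device and mount point)
    mount -t gfs -o noatime,nodiratime /dev/vg0/gfs /mnt/gfs

    # or the equivalent /etc/fstab entry
    /dev/vg0/gfs  /mnt/gfs  gfs  noatime,nodiratime  0 0

With millions of small files, skipping the access-time update on every
read saves a lock and a write per file touched, which is why it is the
one option that actually shows up in benchmarks.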
> I'll wait a bit more for the GFS2 community; if they say that it
> can't be done, I'll go to GFS1 or even OCFS2 (which is the third
> option, as I already have an RHCS structure with clvmd).
The problem with GFS2 is that it's still a bit buggy, as you've found.
But there isn't that much difference in performance between various
similar file systems. Sure, GFS2 is faster than GFS1, but it's not an
order of magnitude faster.
Gordan