Re: Full bore.

On Thu, 15 Nov 2007, Kevan Benson wrote:

Chris Johnson wrote:
     Hi again,

     OK, I now have CentOS 5 on the two Dell front ends to this SATA
Beast thingy.  I have the gluster-patched FUSE on both and glusterfs
1.3.5 on both.

     Performance is still way below NFS.  I have turned on
write-behind and read-ahead, and with the patched FUSE it's a little
faster, but not significantly.  To get more out of this it's pretty
obvious I need to do Unify if I want BIG space, maybe striping.  If I
want high availability I need AFR (that works with Unify?).  And if I
want to use the second Dell front end I'm probably going to need
locking if they share drives.  Self healing works, right?  Wasn't
there discussion about a potential problem not long ago?

      Sound about right?

Pretty much. The self heal discussion was about edge cases. It works for most situations you'll see it in.
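For reference, write-behind and read-ahead are stacked on the client side like any other translator. A minimal sketch in 1.3-era volfile syntax follows; the volume and host names are made up, and the option values are only examples that should be checked against the wiki:

```
# client.vol -- hypothetical names and example values
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # made-up hostname
  option remote-subvolume brick
end-volume

volume wb
  type performance/write-behind
  option aggregate-size 1MB           # example value; tune and verify
  subvolumes client1
end-volume

volume ra
  type performance/read-ahead
  option page-size 256KB              # example values; tune and verify
  option page-count 4
  subvolumes wb
end-volume
```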

AFR and Unify (and striping) are stackable; layer as many as you want (an AFR of two unified AFRs, if you want).

     I might if I get another one.
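A hypothetical sketch of that kind of stacking, in 1.3-era volfile syntax: four bricks mirrored into two AFR pairs, then unified into one big volume. All names here are made up, and the Unify details (the namespace volume and scheduler options) should be checked against the wiki:

```
# Four remote bricks defined first as protocol/client volumes
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # made-up hosts throughout
  option remote-subvolume posix1
end-volume

# ... brick2, brick3, brick4 defined the same way ...

volume afr1
  type cluster/afr
  subvolumes brick1 brick2            # mirror pair 1
end-volume

volume afr2
  type cluster/afr
  subvolumes brick3 brick4            # mirror pair 2
end-volume

volume ns                             # 1.3 Unify wants a namespace volume
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume namespace
end-volume

volume big                            # one big space over both mirrors
  type cluster/unify
  option namespace ns
  option scheduler rr                 # round-robin; other schedulers exist
  subvolumes afr1 afr2
end-volume
```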


      I think I'm seeing from the glusterfs wiki that the order
in which things are defined in the config files matters.  It's a bit
unclear to me what the real order of things should be.

It doesn't REALLY matter too much, unless it's a client config and you aren't explicitly defining the share to load (then it grabs the last specified).

Most translators that rely on other volumes require you to state explicitly which volumes they use. The only way I see order mattering is that the subvolumes used by a translator may need to be defined above the point where the translator references them, i.e. define share1 and share2 before you include them in an AFR/Unify. I'm not sure this is required, but it probably makes the configs easier to read anyway.

     About what I thought but the wiki sounded otherwise.
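For what it's worth, that define-before-use ordering looks like this in a 1.3-era client volfile (hostnames are made up; share1/share2 are the hypothetical names from the discussion above):

```
volume share1                 # defined first...
  type protocol/client
  option transport-type tcp/client
  option remote-host server1  # made-up hostname
  option remote-subvolume brick
end-volume

volume share2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2  # made-up hostname
  option remote-subvolume brick
end-volume

volume mirror                 # ...then referenced below their definitions
  type cluster/afr
  subvolumes share1 share2
end-volume
```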


     Oh, this is ext3 with xattr turned on.  But we could use Reiserfs
if that would help.  I may run tests on that as well.

I don't know about others, but I wouldn't use Reiserfs if you care about your data and aren't supplying redundancy above the file system level (with glusterfs, for example). I've seen multiple reports of Reiserfs ending up with unrecoverable file system errors when it gets sufficiently screwed up, basically necessitating a reformat of the partition.

     Huh.  Now that's interesting.  We've been running our mail server
off Reiserfs for a few years now with never an issue.  It's survived
power outages and dropped drives.  We're using software RAID on it.


     Can we have a discussion on whether I'm heading in the right
direction and what order things go in for the config files?

That depends on your goals. What's important here, speed, redundancy, or a mix of both?


      Both of course.  Are they mutually exclusive?  Please, I know
they can be to some extent.  I'm talking about the real world,
whatever that is.  I need to get the performance up, and
redundancy/failover would be real good too.  NFS has a few problems
with that.

     Also, any operational notes on running gluster on anything this
big will be appreciated.


--

-Kevan Benson
-A-1 Networks




-------------------------------------------------------------------------------
Chris Johnson               |Internet: johnson@xxxxxxxxxxxxxxxxxxx
Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
NMR Center                  |Voice:    617.726.0949
Mass. General Hospital      |FAX:      617.726.7422
149 (2301) 13th Street      |Do not meddle in the affairs of wizards, for
Charlestown, MA., 02129 USA |they are subtle and quick to anger.
-------------------------------------------------------------------------------



