RE: Extended Attributes

Hi Jordan,

 

 No immediate reason for the xfs namespace, it just seemed a good fit for
resizing and for specifying heaps of inodes - plus it shouldn't need any
fsck'ing.  Even though it is only 10gb in size, I don't know how long a check
would take because it holds so many directories/files.  I can't remember for
sure whether I tried creating a namespace on ZFS - I'm fairly sure I did and
it worked fine - it just made sense in my setup to have the namespace on the
client, as my setup will always be a 'one client' / 'multiple server'
scenario.
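
 If it helps, this is roughly how I built the image under Linux - the paths,
size and maxpct value below are only examples, not exactly what I used:

# create a 10gb backing file for the namespace
dd if=/dev/zero of=/srv/gluster-ns.img bs=1M count=10240
# format as XFS; -i maxpct raises the share of space that may be used for inodes
mkfs.xfs -i maxpct=50 /srv/gluster-ns.img
# loopback-mount it where the namespace volume will point
mkdir -p /mnt/gluster-ns
mount -o loop /srv/gluster-ns.img /mnt/gluster-ns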

 

 Sorry, I just realised from some other messages that I have been running
1.3.8pre6 (or 7).  I think this then got renamed to 1.3.8.freebsd2 while we
were compiling it and fixing any issues we came across.  I had thought those
changes had been integrated into the main tree already.

 

 Cheers

 

 Paul Arch

 

 

 

From: jordanmendler@xxxxxxxxx [mailto:jordanmendler@xxxxxxxxx] On Behalf Of
Jordan Mendler
Sent: Tuesday, 13 May 2008 11:01 AM
To: Paul Arch
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: Extended Attributes

 

Hi Paul,

I got unify working across 2 ZFS nodes (12-13TB each), and am able to write
files into it. I can read and cat files too, but for some reason doing an
'ls' of the directory takes forever and never returns. Have you had any
issues like this? It's weird because I can see all the files when doing an ls
of the namespace, just not when doing so on the fuse mount.

Also, why use XFS for the namespace? I was thinking to just create a
separate ZFS directory or zpool for the namespace on one of the storage
servers. Any reason not to do this?
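
For what it's worth, what I had in mind is just something like this - the
pool and dataset names are only placeholders:

# dedicated dataset on one of the storage servers, just for the namespace
zfs create -o mountpoint=/export/gluster-ns tank/gluster-ns
# ZFS allocates inodes dynamically, so there is no fixed inode count to plan for
df -i /export/gluster-ns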

Lastly, what version of gluster are you using on FreeBSD?

I also gave some thought to OpenSolaris and Nexenta, but they don't support
3ware RAID cards, so they're not an option. It's looking like I'll either
have to figure out how to get FreeBSD working flawlessly, or use Linux and
give up on compression.

Thanks so much,
Jordan

On Mon, May 12, 2008 at 5:37 PM, Paul Arch <paul@xxxxxxxxxxxxxx> wrote:



<snip>


>Thanks again.
>
>Jordan
>
>On Mon, May 12, 2008 at 3:38 PM, Amar S. Tumballi <amar@xxxxxxxxxxxxx>
>wrote:
>

<snip>



Hi Jordan,

 Also FYI, we are running Gluster on FreeBSD 6.1 and FreeBSD 7.0RC1 (servers
only).  7.0RC1 has ZFS as the backend store; 6.1 is UFS.

 The system has ~10 million files over maybe 7TB, running a simple unify; the
client is Linux, with the namespace on Linux also.
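
 The client volfile is basically the stock unify layout - the hostnames,
paths and scheduler below are only illustrative, not copied from my config:

cat > /etc/glusterfs/unify-client.vol <<'EOF'
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume brick
end-volume

# namespace kept locally on the client (the XFS loopback mount in my case)
volume ns
  type storage/posix
  option directory /mnt/gluster-ns
end-volume

volume unify
  type cluster/unify
  option namespace ns
  option scheduler rr
  subvolumes brick1 brick2
end-volume
EOF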

 Generally, I would say things are 99.5% good and the system seems to be
holding together.  I believe the only issues I have had related to bringing
data in on the servers directly (without gluster) and then unifying it.
After that, anything written through the cluster seems very stable.  In
between I did a lot of chopping and changing on the namespace, so I am sure
that didn't help.

 I can't remember specifically whether the client worked under FreeBSD (I am
quite sure it ended up working), but as Amar has suggested, AFR and stripe
won't work - it looks like that is because of the extended attributes.

 The only real gotcha I hit - and I assume this applies to any unify/cluster
setup - is to make sure the namespace filesystem can support the number of
files you have (i.e. FREE INODES).  I came unstuck with this a couple of
times, hence the chopping and changing of the namespace.  In the end I
created a 10gb XFS loopback image under Linux - but even now I just checked
and I am nearly out of inodes again!  At least I can easily resize it.
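
 Checking and growing it is roughly this - paths and sizes are examples, and
glusterfs should be stopped (or the namespace otherwise idle) while it is
unmounted:

# see how many inodes are left on the namespace filesystem
df -i /mnt/gluster-ns
# grow the backing file by another 5gb, then remount and grow XFS to fill it
dd if=/dev/zero bs=1M count=5120 >> /srv/gluster-ns.img
umount /mnt/gluster-ns
mount -o loop /srv/gluster-ns.img /mnt/gluster-ns
xfs_growfs /mnt/gluster-ns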


 Cheers

 Paul Arch

 


