Thanks for the info on the -n option; I have a much clearer understanding
of it now. I didn't see any other docs on -n, and I thought I had read all
the docs. After you mentioned it, I did remember the AFR limitation where
*:2 sends the two copies only to the first two bricks, so the other 3 don't
get anything. I had just forgotten about the limitation, or wasn't
correlating it with my test case.
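For anyone following along, the AFR part of my spec file looks roughly
like this (volume names changed; this is just the shape of it, not my
exact file):

  volume afr1
    type cluster/afr
    subvolumes client1 client2 client3 client4 client5
    option replicate *:2
  end-volume

With all five client volumes listed under one AFR and *:2, only the first
two available at open() time ever receive copies.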
Hmmm, this makes me think, though: if my total dataset is larger than the
size of the first two bricks, then replication will fail. I also noticed
that all my files were going to the last brick that was listed. I had
300-some-odd files in each brick, and the last brick had all the files. I
hope that this will get redesigned so that the end user doesn't have to be
so smart about the order in which bricks are listed on the subvolumes line.
Here is a thought for a simple fix in AFR: if I have, say, 10 bricks and
AFR *:2, then use a modulus over the available bricks, so all bricks still
get used. E.g., copy set 1 could go to the 1st, 3rd, 5th, 7th, and 9th
listed bricks, and copy set 2 could go to the 2nd, 4th, 6th, 8th, and
10th, as in the sketch below.
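Just to make the idea concrete, here is a toy illustration in plain shell
arithmetic (nothing GlusterFS-specific; brick numbers are 0-based, so
bricks 0,2,4,6,8 are the 1st, 3rd, 5th, 7th, and 9th listed):

  bricks=10; copies=2
  for file in 0 1 2 3 4 5; do
    placement=""
    for copy in 0 1; do
      # copy c of file f lands on brick (copies * f + c) mod bricks
      placement="$placement $(( (copies * file + copy) % bricks ))"
    done
    echo "file $file -> bricks:$placement"
  done

First copies rotate through the odd-positioned bricks and second copies
through the even-positioned ones, so no brick sits out of replication.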
I'm shooting for the model with a small number of clients and a large
number of distributed server bricks, plus a couple of NFS gateways. I
would naturally have a lot of server bricks, but would only want something
like 4 or 5 replicas. If I had 10 bricks, that would leave the last 5-6
bricks unused for replication.
From: "Daniel van Ham Colchete" <daniel.colchete@xxxxxxxxx>
To: gluster-devel <gluster-devel@xxxxxxxxxx>
Subject: Re: df -kh not reporting correct value
Date: Wed, 11 Jul 2007 21:03:36 -0300
DeeDee,
I read your spec file in another e-mail, and I think I know the answer.
Any developer, please correct me if I'm wrong.
According to
http://lists.gnu.org/archive/html/gluster-devel/2007-03/msg00106.html, the
AFR translator will return the minimum size among all its volumes when you
run df -h.
But, from what I could read in the source, AFR currently returns the total
space of the first available subvolume.
Either way, you can be assured that it does not sum the total space of all
its volumes, which would be consistent with the 967M you saw if the first
subvolume to come up is a small or mostly full brick.
As a fellow user, I recommend you send more details when asking something
on the list. It helps me understand your problem better. If it weren't for
the other e-mail you sent to the list, I would never have guessed that you
are putting all 5 bricks inside AFR, even though only the first two
subvolumes available at open() time get the 2 copies of each file, and the
other three are almost never used.
Suggestion:
http://www.gluster.org/docs/index.php/Understanding_Unify_Translator
Best regards,
Daniel Colchete
On 7/11/07, Daniel van Ham Colchete <daniel.colchete@xxxxxxxxx> wrote:
DeeDee,
I'm not a Gluster Developer but I think I can help.
First, it is easier if you send your volume spec files :-).
When you mount a GlusterFS brick with the glusterfs command, you can use
the '-n' option to mount just a 'protocol/client' brick by its name and
see how everything looks at that lowest level. I would start tracing with
this.
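For example, something along these lines (the spec file path, volume name,
and mount point below are placeholders; substitute your own):

  # mount only the protocol/client volume named 'client1' from the spec file
  glusterfs -f /etc/glusterfs/glusterfs-client.vol -n client1 /mnt/trace
  # see what that single brick reports, then unmount and try the next one
  df -h /mnt/trace
  umount /mnt/trace

Comparing what each client volume reports against what your full mount
shows should tell you where the df number is coming from.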
Best regards,
Daniel Colchete
On 7/11/07, DeeDee Park <deedee6905@xxxxxxxxxxx> wrote:
>
> I have 6 bricks totalling 2.3T of space right now in my test setup
> (6GB, 40GB, 250GB, 500GB, 750GB, 750GB).
> I run the 'df' command and it currently shows 967M of space available
> at the client.
> It used to show the correct amount a while back. How can I trace this
> so I can find out how much each brick is reporting as total space
> available?
>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel