Re: combining AFR and cluster/unify

On 3/14/07, Krishna Srinivas <krishna@xxxxxxxxxxxxx> wrote:

Pooya,

Your client spec was wrong. For a 4-node cluster with 2 replicas of
each file, the following will be the spec file (similarly, you can
write one for 20 nodes):

### CLIENT client.vol ####
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.11
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick1-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.12
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.12
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick2-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.13
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume brick3
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.13
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick3-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.14
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume brick4
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.14
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick4-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.11
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

volume afr1
  type cluster/afr
  subvolumes brick1 brick1-afr
  option replicate *:2
end-volume

volume afr2
  type cluster/afr
  subvolumes brick2 brick2-afr
  option replicate *:2
end-volume

volume afr3
  type cluster/afr
  subvolumes brick3 brick3-afr
  option replicate *:2
end-volume

volume afr4
  type cluster/afr
  subvolumes brick4 brick4-afr
  option replicate *:2
end-volume

volume unify1
  type cluster/unify
  subvolumes afr1 afr2 afr3 afr4
...
..
end-volume
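
The client spec above assumes each server exports two subvolumes named
brick and brick-afr on port 6996. A minimal server-side spec along those
lines might look like the following sketch; the directory paths and the
open auth rule are illustrative assumptions, not part of the original
mail, and should be adapted to your setup:

### SERVER server.vol ####
volume brick
  type storage/posix
  option directory /data/brick
end-volume

volume brick-afr
  type storage/posix
  option directory /data/brick-afr
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 6996
  subvolumes brick brick-afr
  option auth.ip.brick.allow *
  option auth.ip.brick-afr.allow *
end-volume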


I'm no Gluster expert, but won't this config put both copies of each
file on the same server? For example, volume afr4 uses brick4 and
brick4-afr, which happen to be on the same server, as its subvolumes.

Shouldn't it be something like:

volume afr1
  type cluster/afr
  subvolumes brick1 brick2-afr
  option replicate *:2
end-volume

volume afr2
  type cluster/afr
  subvolumes brick2 brick1-afr
  option replicate *:2
end-volume

volume afr3
  type cluster/afr
  subvolumes brick3 brick4-afr
  option replicate *:2
end-volume

volume afr4
  type cluster/afr
  subvolumes brick4 brick3-afr
  option replicate *:2
end-volume

So that every file has a copy of itself on two different servers?
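
The placement question can be checked mechanically. The sketch below
(not from the original thread; the ring_pairs helper and host list are
illustrative assumptions mirroring the 4-node example) pairs each host's
primary brick with the next host's -afr brick and asserts that no AFR
pair keeps both copies on one server:

```python
# Sketch: verify that a ring-style AFR pairing places each file's two
# copies on different servers. The pairing rule below mirrors the
# 4-node example spec: afrK = brickK (on host k) + brickK-afr (on
# host k+1, wrapping around).

def ring_pairs(hosts):
    """Return (primary, replica) host pairs for a ring-style layout."""
    n = len(hosts)
    return [(hosts[k], hosts[(k + 1) % n]) for k in range(n)]

hosts = ["172.16.30.11", "172.16.30.12", "172.16.30.13", "172.16.30.14"]

for i, (primary, replica) in enumerate(ring_pairs(hosts), start=1):
    # Both copies landing on one host would defeat the replication.
    assert primary != replica, f"afr{i} keeps both copies on {primary}"
    print(f"afr{i}: brick on {primary}, brick-afr on {replica}")
```

The same check works for any cluster size, so it is easy to confirm the
"similarly for 20 nodes" case before writing the full spec by hand.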

Best regards,
Daniel Colchete

