Re: clustered afr

Hi Tibor,

It is behaving as expected.

Your requirement: you have 3 nodes, you want 2 copies of every file,
and if one node goes down all files should still be available.

It can be achieved with a config similar to the one explained here:
http://www.gluster.org/docs/index.php/GlusterFS_User_Guide#AFR_Example_in_Clustered_Mode
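
The idea in clustered mode is to build one AFR pair per pair of
adjacent nodes and unify the pairs, instead of running one AFR across
all three bricks. (As far as I recall the 1.3-era semantics, a single
AFR with "option replicate *:2" keeps file replicas on its first two
subvolumes only, while directories and symlinks are created on all
subvolumes, which would explain what you see on brick3.)

A minimal client-side sketch along the lines of that page follows; the
hostnames, volume names and the two-exports-per-node layout are
illustrative, not taken from this thread:

# one protocol/client volume per exported brick; shown for node 1,
# node2-* and node3-* are defined the same way
volume node1-brick
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1          # assumed address of node 1
  option remote-subvolume brick
end-volume

volume node1-brick-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1
  option remote-subvolume brick-afr    # second export on the same node
end-volume

# ... node2-brick, node2-brick-afr, node3-brick, node3-brick-afr ...

# mirror each node's primary brick onto the next node's -afr brick
volume afr1
  type cluster/afr
  subvolumes node1-brick node2-brick-afr
end-volume

volume afr2
  type cluster/afr
  subvolumes node2-brick node3-brick-afr
end-volume

volume afr3
  type cluster/afr
  subvolumes node3-brick node1-brick-afr
end-volume

# unify needs a namespace brick; assume one more export on node 1
volume ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1
  option remote-subvolume brick-ns
end-volume

volume unify
  type cluster/unify
  option namespace ns
  option scheduler rr                  # each new file lands on one afr pair
  subvolumes afr1 afr2 afr3
end-volume

Each file is scheduled onto exactly one afr pair, so it exists on
exactly two nodes, and since every node takes part in two of the three
pairs, any single node can go down with every file still reachable.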

Regards
Krishna

On 3/12/07, Tibor Veres <tibor.veres@xxxxxxxxx> wrote:
I'm trying to build a 3-node storage cluster that should be able to
withstand one node going down.
First I tried glusterfs 1.3.0-pre2.2, but had some memory leakage,
which seems to be fixed in the source checked out from the repository.
I'm exporting 3 bricks with configs like this:
volume brick[1-3]
  type storage/posix
  option directory /mnt/export/shared/[1-3]
end-volume

volume server
  type protocol/server
  option transport-type tcp/server      # for TCP/IP transport
  option listen-port 699[6-8]           # default is 6996
  subvolumes brick[1-3]
  option auth.ip.brick[1-3].allow *     # allow access to the brick volumes
end-volume
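
(Expanded, the [1-3] shorthand gives, for node 1:

volume brick1
  type storage/posix
  option directory /mnt/export/shared/1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 6996
  subvolumes brick1
  option auth.ip.brick1.allow *
end-volume

with nodes 2 and 3 using /mnt/export/shared/2 on port 6997 and
/mnt/export/shared/3 on port 6998.)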

My client config looks like this:
volume b[1-3]
  type protocol/client
  option transport-type tcp/client      # for TCP/IP transport
  option remote-host 127.0.0.1          # IP address of the remote brick
  option remote-port 699[6-8]           # default server port is 6996
  option remote-subvolume brick[1-3]    # name of the remote volume
end-volume
volume afr
  type cluster/afr
  subvolumes b1 b2 b3
  option replicate *:2
  option scheduler rr
  option rr.limits.min-free-disk 512MB
  option rr.refresh-interval 10
end-volume
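
(Expanded, b1 for example is:

volume b1
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-port 6996
  option remote-subvolume brick1
end-volume

with b2 and b3 pointing at port 6997/brick2 and port 6998/brick3.)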

I didn't activate any performance-enhancing translators.

This setup sort of works, except that I saw files created only on
bricks 1 and 2; brick3 got only the directories and symlinks. After
killing the brick2 glusterfsd, the filesystem stayed up, which is
promising, but still no files were created on brick3.

Is this setup supposed to work? Can I get comparable functionality
with current glusterfs, preferably in a way that can be extended to 5
nodes withstanding 2 going down? Is there any plan for RAID6-like
functionality, or would this kill performance altogether?


--
Tibor Veres
  tibor.veres@xxxxxxxxx


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel




