Re: fault tolerance in glusterfs distributed volume


 



The volume will remain available as long as no more than one brick in each subvolume (replica set) is down: with replica 3 and cluster.quorum-type set to auto, each replica set needs a majority (2 of 3) of its bricks up.

Subvolume 1 bricks:
Brick1: 10.0.0.2:/brick
Brick2: 10.0.0.3:/brick
Brick3: 10.0.0.1:/brick

Subvolume 2 bricks:
Brick4: 10.0.0.5:/brick
Brick5: 10.0.0.6:/brick
Brick6: 10.0.0.7:/brick
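The quorum arithmetic behind this can be sketched in a few lines of Python (a hypothetical illustration, not GlusterFS code; it assumes quorum-type "auto", i.e. a strict majority of bricks per replica set):

```python
def subvolume_writable(replica_count, bricks_up):
    # With cluster.quorum-type "auto", a replica set keeps quorum
    # while more than half of its bricks are up (2 of 3 for replica 3).
    return bricks_up > replica_count // 2

def volume_available(replica_count, up_per_subvolume):
    # The whole distributed-replicate volume serves all files only
    # while every replica set keeps quorum; losing quorum in one set
    # makes the files hashed to it unavailable.
    return all(subvolume_writable(replica_count, n) for n in up_per_subvolume)

# replica 3, one brick down in each of the two subvolumes: still available
print(volume_available(3, [2, 2]))  # True
# two bricks down in the same subvolume: quorum lost there
print(volume_available(3, [1, 3]))  # False
```

So with this 2 x 3 layout you can lose up to two bricks in total, but only if they are in different replica sets.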

On Wednesday 24 January 2018 04:36 PM, atris adam wrote:
I have created a distributed replica-3 volume across 6 nodes, i.e.:

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2: 10.0.0.3:/brick
Brick3: 10.0.0.1:/brick
Brick4: 10.0.0.5:/brick
Brick5: 10.0.0.6:/brick
Brick6: 10.0.0.7:/brick
Options Reconfigured:
cluster.quorum-type: auto
cluster.server-quorum-type: server
nfs.disable: on
transport.address-family: inet

I have set quorum on both the client and the server side. I want to know about fault tolerance in a distributed volume: how many bricks can go down while the volume is still available?


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users


-- 
regards
Aravinda VK
