bricks, as they can mount the sum of the two bricks' storage capacity.
I can pull out one brick and the clients fail over to the other brick.
They do seem to revert to the former brick when I reconnect it to the
network. Is this the expected behaviour? Wouldn't it be faster to
alternate file creation between the bricks? Is there anything I can do
from iozone to alternate between bricks?
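For what it's worth, my understanding is that a Distribute (DHT) volume hashes each file to one brick by filename, so one way I could imagine spreading iozone I/O across both bricks is to run it in throughput mode on several files at once, letting the hash place different files on different bricks (the mount point and file names below are just placeholders, not my real paths):

```shell
# Throughput mode: 4 writer processes (-t 4), each writing its own
# file from the -F list; DHT should hash the distinct filenames
# across both bricks. /mnt/example and f1..f4 are hypothetical.
iozone -i 0 -t 4 -s 256m \
    -F /mnt/example/f1 /mnt/example/f2 /mnt/example/f3 /mnt/example/f4
```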

Client
[root@uranus williamm]# rpm -qa | grep gluster
glusterfs-3.3.1-1.el5
glusterfs-fuse-3.3.1-1.el5


Server
[root@gfs1 ~]# rpm -qa | grep gluster
glusterfs-3.3.1-1.el6.x86_64
glusterfs-server-3.3.1-1.el6.x86_64
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-geo-replication-3.3.1-1.el6.x86_64

[root@gfs1 ~]# gluster volume status
Status of volume: example
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gfs2.example.com:/storage                        24009   Y       10381
Brick gfs1.example.com:/storage                        24009   Y       3901
NFS Server on localhost                                 38467   Y       3907
NFS Server on gfs2.example.com                         38467   Y       10387

[root@gfs1 ~]# gluster volume info

Volume Name: example
Type: Distribute
Volume ID: fcd31ea6-45e2-4c1f-bfa9-1bdb82f573d1
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfs2.example.com:/storage
Brick2: gfs1.example.com:/storage
[root@gfs1 ~]#
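In case it is useful for checking which brick a given file actually landed on (e.g. to see whether clients really revert after the brick comes back), I believe the FUSE client mount exposes a virtual pathinfo xattr that reports the backend brick path (the mount point and filename here are assumptions):

```shell
# Run on a client with the volume FUSE-mounted; reports the brick
# path(s) backing the file. /mnt/example/f1 is hypothetical.
getfattr -n trusted.glusterfs.pathinfo /mnt/example/f1
```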


Thanks in advance

Regards,

William

