Hi,
We are running Ubuntu 14.04 Server and, for storage, have configured GlusterFS 3.5 as a distributed volume. Details below:
1). 4 servers running Ubuntu 14.04 Server; on each server the free disk space is configured as a ZFS raidz2 pool.
2). Each server has a /pool/gluster ZFS volume, with capacities of 5 TB, 8 TB, 6 TB and 10 TB respectively.
3). The bricks are rep1, rep2, rep3 and st1; all four bricks form a Distributed volume, which is mounted on each system as follows:
For example, on rep1 -> mount -t glusterfs rep1:/glustervol /data
rep2 -> mount -t glusterfs rep2:/glustervol /data
rep3 -> mount -t glusterfs rep3:/glustervol /data
st1 -> mount -t glusterfs st1:/glustervol /data
So /data ends up with around 29 TB in total (5 + 8 + 6 + 10 TB), and all of our application data is stored under the /data mount point.
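For reference, the equivalent persistent mount could be written in /etc/fstab on each node roughly as below; the _netdev option and the idea that each node mounts via its own hostname are assumptions based on the mount commands above, not something taken from our current setup:
# on rep1 (each node would point at its own hostname)
rep1:/glustervol  /data  glusterfs  defaults,_netdev  0  0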
Details about the volume:
volume glustervol-client-0
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username 9d76-4553-8c75
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host rep1
option ping-timeout 42
end-volume
volume glustervol-client-1
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username jkd76-4553-5347
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host rep2
option ping-timeout 42
end-volume
volume glustervol-client-2
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username 19d7-5a190c2
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host rep3
option ping-timeout 42
end-volume
volume glustervol-client-3
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username c75-5436b5a168347
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host st1
option ping-timeout 42
end-volume
volume glustervol-dht
type cluster/distribute
subvolumes glustervol-client-0 glustervol-client-1 glustervol-client-2 glustervol-client-3
end-volume
volume glustervol-write-behind
type performance/write-behind
subvolumes glustervol-dht
end-volume
volume glustervol-read-ahead
type performance/read-ahead
subvolumes glustervol-write-behind
end-volume
volume glustervol-io-cache
type performance/io-cache
subvolumes glustervol-read-ahead
end-volume
volume glustervol-quick-read
type performance/quick-read
subvolumes glustervol-io-cache
end-volume
volume glustervol-open-behind
type performance/open-behind
subvolumes glustervol-quick-read
end-volume
volume glustervol-md-cache
type performance/md-cache
subvolumes glustervol-open-behind
end-volume
volume glustervol
type debug/io-stats
option count-fop-hits off
option latency-measurement off
subvolumes glustervol-md-cache
end-volume
ap@rep3:~$ sudo gluster volume info
Volume Name: glustervol
Type: Distribute
Volume ID: 165b-XXXXX
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: rep1:/pool/gluster
Brick2: rep2:/pool/gluster
Brick3: rep3:/pool/gluster
Brick4: st1:/pool/gluster
We have a standalone server (bak-server), sitting outside GlusterFS, for backing up our data. At present /data holds around 4 TB, and we initiated an rsync of about 1.2 TB of data from rep1 (one of the gluster servers) to the backup server. After some 2 hours it throws a "Transport end point connect error" and the copy is cancelled. But if I check the volume status, everything looks fine.
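For context, a minimal sketch of the kind of rsync invocation involved; the destination path on bak-server is hypothetical, and the flags (--partial in particular, so a retry can pick up partially transferred files) are assumptions rather than the exact command we ran:
# run on rep1; /backup/data/ on bak-server is a hypothetical destination
rsync -aHAX --partial --progress /data/ bak-server:/backup/data/
Re-running the same command after the /data mount recovers should skip files that were already copied.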
What could be the reason, and are there any tuning parameters that need to be set for this type of activity?
Regards,
Varad