Issues with adding a volume with glusterfs 3.2.1

Hi,

I can't manage to add two new replicated bricks to my volume.

I have a replicated volume on two servers, created with the following commands:

root@glusterfs1:/var/log# gluster peer probe 192.168.1.31
Probe successful
root@glusterfs1:~# gluster volume create test-volume replica 2 transport tcp 192.168.1.30:/sharedspace 192.168.1.31:/sharedspace
Creation of volume test-volume has been successful. Please start the volume to access data.
root@glusterfs1:~# gluster volume start test-volume
root@glusterfs1:~# gluster volume info

Volume Name: test-volume
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.30:/sharedspace
Brick2: 192.168.1.31:/sharedspace
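
As a side note, the peers can also be double-checked at any point with the standard status command; nothing specific to my setup here:

gluster peer status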

OK, now from a client:

client~# mount -t glusterfs 192.168.1.30:test-volume /distributed-volume
client~# echo "hello" > /distributed-volume/foo
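
Just for completeness, to make that mount survive a reboot I would also add an fstab entry along these lines (the _netdev option is my own habit so the mount waits for the network; adapt as needed):

192.168.1.30:test-volume  /distributed-volume  glusterfs  defaults,_netdev  0  0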

foo is correctly replicated across the two bricks:

root@glusterfs1:~# ll /sharedspace/
total 28
-rw-r--r-- 1 root root    26  1 Jul 10:29 foo
drwx------ 2 root root 16384 11 Jan 12:18 lost+found

root@glusterfs2:~# ll /sharedspace/
total 28
-rw-r--r-- 1 root root    26  1 Jul 10:29 foo
drwx------ 2 root root 16384 11 Jan 12:18 lost+found
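
In case it helps with debugging, my understanding is that the replication (AFR) extended attributes can also be inspected directly on each brick to check that the copies are in sync; this assumes the attr package is installed:

getfattr -d -m . -e hex /sharedspace/foo

The trusted.gfid and trusted.afr.* attributes should show up on both bricks, as far as I understand it.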

That's perfect... until I try to add two new bricks:

root@glusterfs1:~# gluster peer probe 192.168.1.32
Probe successful
root@glusterfs1:~# gluster peer probe 192.168.1.33
Probe successful
root@glusterfs1:~# gluster volume add-brick test-volume 192.168.1.32:/sharedspace 192.168.1.33:/sharedspace
Add Brick successful
root@glusterfs1:~# gluster volume info

Volume Name: test-volume
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.30:/sharedspace
Brick2: 192.168.1.31:/sharedspace
Brick3: 192.168.1.32:/sharedspace
Brick4: 192.168.1.33:/sharedspace
root@glusterfs1:~# gluster volume rebalance test-volume start
starting rebalance on volume test-volume has been successful
root@glusterfs1:~# gluster volume rebalance test-volume status
rebalance completed
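
In case it matters, my understanding is that on 3.2 the rebalance can also be driven as two explicit phases, fix-layout first and migrate-data afterwards, instead of the combined start above; I have not tried that variant here:

gluster volume rebalance test-volume fix-layout start
gluster volume rebalance test-volume migrate-data start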

OK, so if I'm correct, the data on Brick1 and Brick2 should now also be available on Brick3 and Brick4. But that's not the case:

root@glusterfs3:~# ll /sharedspace/
total 20
drwx------ 2 root root 16384 11 Jan 12:18 lost+found

root@glusterfs4:~# ll /sharedspace/
total 20
drwx------ 2 root root 16384 11 Jan 12:18 lost+found
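
For what it's worth, I would expect the file to still be reachable through the client mount, since the original bricks still hold it; that can be checked from the client with nothing fancier than:

ls -l /distributed-volume/
cat /distributed-volume/foo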

I'm using Debian Sid with GlusterFS 3.2.1 from the official Debian repository. Am I going wrong somewhere?

Regards,
Carl Chenet


