Removing bricks from a replicated setup completely breaks volume on Gluster 3.3

Hi All,

I'm using the following GlusterFS version:
	glusterfs 3.3.1 built on Oct 11 2012
I was able to successfully remove bricks from a 4-replica volume by reducing
the replica count to 3. "gluster volume status" then showed the volume as a
3-node Replicate volume. I then removed another brick by reducing the
replica count to 2.

Later I added another brick using add-brick, increasing the replica count
back to 3. All of this worked fine for me!

Here are the commands I used:
1) gluster volume remove-brick Cloud-data replica 3 GSNODE01:/mnt/brick1
(Changed Replica count from 4 to 3)
2) gluster volume remove-brick Cloud-data replica 2 GSNODE01:/mnt/brick2
(Changed Replica count from 3 to 2)
3) gluster volume add-brick Cloud-data replica 3 GSNODE01:/brick4
(Changed Replica count from 2 to 3)
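
After each step, the resulting layout can be confirmed with "gluster volume
info" (a minimal check only; Cloud-data is the volume from the commands
above, and the commented lines show what would be expected after step 3):

	gluster volume info Cloud-data
	# Type: Replicate
	# Number of Bricks: 1 x 3 = 3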

Thanks & Regards,

Bobby Jacob
Senior Technical Systems Engineer | eGroup

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Marc Seeger
Sent: Tuesday, June 11, 2013 3:42 PM
To: gluster-users at gluster.org
Subject: Removing bricks from a replicated setup completely breaks volume on Gluster 3.3

Version: glusterfs 3.3git built on Jun  7 2013 14:38:02 (branch release-3.3)

Initial setup: A replicated volume with 3 bricks
Goal: Remove one of the bricks from it.
Outcome: A completely broken volume


------------- Volume info -------------

root at fs-14.example:~# gluster volume info

Volume Name: test-fs-cluster-1
Type: Replicate
Volume ID: 752e7ffd-04bb-4234-8d16-d1f49ef510b7
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: fs-14.example.com:/mnt/brick21
Brick2: fs-15.example.com:/mnt/brick20
Brick3: fs-14.example.com:/mnt/brick33


------------- Trying to remove a brick -------------

fields-config-gluster.rb[5035]: Using commandline: gluster volume remove-brick test-fs-cluster-1 replica 2 fs-14.example.com:/mnt/brick33 start
fields-config-gluster.rb[5035]: Command returned exit code 255: gluster volume remove-brick test-fs-cluster-1 replica 2 fs-14.example.com:/mnt/brick33 start
stdout was:

stderr was:
Remove Brick start unsuccessful
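
(For reference, the wrapper log above corresponds to the plain CLI call below;
running the same command by hand should reproduce the failure outside of
fields-config-gluster.rb. The $? check is only illustrative and is not part of
the script.)

	gluster volume remove-brick test-fs-cluster-1 replica 2 \
		fs-14.example.com:/mnt/brick33 start
	echo "remove-brick exit code: $?"   # 255 in the run above, with
	                                    # "Remove Brick start unsuccessful" on stderr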




------------- Volume turned Distributed-Replicate -------------

[12:23:37] root at fs-14.example:~# gluster volume info
 
Volume Name: test-fs-cluster-1
Type: Distributed-Replicate
Volume ID: 752e7ffd-04bb-4234-8d16-d1f49ef510b7
Status: Started
Number of Bricks: 1 x 2 = 3
Transport-type: tcp
Bricks:
Brick1: fs-14.example.com:/mnt/brick21
Brick2: fs-15.example.com:/mnt/brick20
Brick3: fs-14.example.com:/mnt/brick33


------------- Trying to remove brick again -------------

[12:26:20] root at fs-14.example:~# gluster volume remove-brick test-fs-cluster-1 replica 2 fs-14.example.com:/mnt/brick33 start
number of bricks provided (1) is not valid. need at least 2 (or 2xN)

------------- Trying to stop volume -------------

[12:28:34] root at fs-14.example:~# gluster volume stop test-fs-cluster-1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume test-fs-cluster-1 has been successful


------------- Trying to start volume again -------------

[12:29:03] root at fs-14.example:~# gluster volume start test-fs-cluster-1
Starting volume test-fs-cluster-1 has been unsuccessful

------------- Trying to stop volume again -------------

[12:29:49] root at fs-14.example:~# gluster volume stop test-fs-cluster-1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume test-fs-cluster-1 is not in the started state

------------- Trying to delete volume -------------

[12:29:55] root at fs-14.example:~# gluster volume delete test-fs-cluster-1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Volume test-fs-cluster-1 has been started.Volume needs to be stopped before deletion.

------------- Checking volume info -------------

# gluster volume info
 
Volume Name: test-fs-cluster-1
Type: Distributed-Replicate
Volume ID: 752e7ffd-04bb-4234-8d16-d1f49ef510b7
Status: Started
Number of Bricks: 1 x 2 = 3
Transport-type: tcp
Bricks:
Brick1: fs-14.example.com:/mnt/brick21
Brick2: fs-15.example.com:/mnt/brick20
Brick3: fs-14.example.com:/mnt/brick33

------------- Trying to stop volume again -------------

[12:30:50] root at fs-14.example:~# gluster volume stop test-fs-cluster-1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume test-fs-cluster-1 is not in the started state



------------- Restarting glusterfs-server -------------

[12:38:05] root at fs-14.example:~# /etc/init.d/glusterfs-server restart
glusterfs-server start/running, process 6426

------------- Volume switched back to "Replicate" -------------

[12:38:33] root at fs-14.example:~# gluster volume info
 
Volume Name: test-fs-cluster-1
Type: Replicate
Volume ID: 752e7ffd-04bb-4234-8d16-d1f49ef510b7
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: fs-14.example.com:/mnt/brick21
Brick2: fs-15.example.com:/mnt/brick20
Brick3: fs-14.example.com:/mnt/brick33


------------- Trying to stop volume again -------------

[12:38:39] root at fs-14.example:~# gluster volume stop test-fs-cluster-1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume test-fs-cluster-1 is not in the started state



Any idea what's up with that?

Cheers,
Marc
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


