Re: Gluster pool help

On 9/9/2014 7:12 PM, Michael Bushey wrote:
I'm currently using GlusterFS 3.5.2 on a pair of production servers to share uploaded files and it's been reliable and working well with just the two servers. I've done some local tests of trying to add and remove servers and the results have not been good. I'm starting to think I have the wrong idea of what GlusterFS does and I'm trying to stick the square peg in the round hole. Is there some config to say "number of replicas = number of servers"?

My need is to have a copy of the data local on each server. If I have 5 servers, I want five copies of the data. Basically each server should have its own copy as if it were a local ext4/zfs file-system, except it will immediately sync files added or removed on the other servers. I'm not sure what the gluster definition of brick is, but I believe I want the number of bricks to equal the number of servers (at least for this particular file-system).

I've played around a bit and lost my data every time I've tried to add or remove a node. There is plenty of documentation, but it all says the same thing and doesn't answer the basics.

From `gluster volume info`: Number of Bricks: 1 x 2 = 2

I'm assuming the middle 2 is the number of bricks. I have no clue why we're multiplying by 1 to get itself.
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Displaying_Volume_Information - does not show this magic multiplication. The page for 3.3 does not exist.

We're cloud based and we want to be able to add and remove servers on the fly. If for example I want to scale from 5 to 7 servers, it looks like I need to create a new storage pool with 7 replicas. Would it be most logical to name my brick with the number of replicas? For example server1 to server5 already have /var/gluster/files5. I could then create /var/gluster/files7 on server1 to server7, create the pool with 7 replicas, then pick a machine like server1 to copy the data from /var/gluster/files5 to /var/gluster/files7, then destroy /var/gluster/files5. This seems extremely convoluted but it does not appear possible to expand a storage pool and expand the number of replicas with it.

Thanks in advance for help and your time to read this.
Michael

I have done this a few times, though not very recently, so I will try to explain in a way that is both correct and understandable.

1. As to the question about the "`gluster volume info`: Number of Bricks: 1 x 2 = 2"
The first number is the number of bricks the data is distributed across.  If that number were 3, each of three bricks would hold roughly 1/3 of the data; this is the "Distribute" mode of operating gluster.  From what you describe, you are not interested in that.  You are interested in the "Replicate" mode.  The second number is the replica count, and the product is the total number of bricks in the volume, so "1 x 2 = 2" means one distribute set replicated across 2 bricks, 2 bricks in all.  The numbering was adopted because gluster makes it possible to use various combinations of Distribute and Replicate at the same time, and this notation describes the arrangement very succinctly.
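Purely to illustrate the numbering (the volume name, hostnames and brick paths here are made up), a distributed-replicated volume created like this:

gluster volume create demo replica 3 \
    server1:/bricks/demo server2:/bricks/demo server3:/bricks/demo \
    server4:/bricks/demo server5:/bricks/demo server6:/bricks/demo

would show up in 'gluster volume info demo' as "Number of Bricks: 2 x 3 = 6", meaning two distribute sets of three replicas each, six bricks in total.  A purely replicated volume like yours has only one distribute set, which is why you see the seemingly pointless "1 x N = N".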

2. As to how you add a brick to your volume so that you can add a server to your system: you use the 'gluster volume add-brick' command.  You are telling gluster: you are currently a two-brick system and I want to make you a three-brick system (each brick being on a different server) by adding 'server3'.  Typing 'gluster volume add-brick help' returns
gluster volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]
Assuming that the brick is ready and freshly formatted with xfs, the command ends up looking something like:
gluster volume add-brick upload replica 3 server3:/bricks/upload/upload
This will create the gluster structure on the brick and start the process of replicating the gluster data onto it.  The gluster mount should be usable at that point, pulling data from the other bricks for anything that has not yet been replicated onto the local brick.
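For completeness, the whole sequence for folding a new server into an existing replica-2 volume might look roughly like this (the volume name 'upload' and hostname 'server3' are just the placeholders from above, and the brick path is an assumption; adjust to your layout):

gluster peer probe server3        # add the new server to the trusted pool
gluster peer status               # confirm it shows up as connected
gluster volume add-brick upload replica 3 server3:/bricks/upload/upload
gluster volume heal upload full   # kick off replication of the existing data
gluster volume heal upload info   # check on progress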

If this server has been in service before, it is probably best to reformat the brick before putting it back into service.  Otherwise gluster will see remnants of its past use on the brick and refuse to use it, on the assumption that valuable data may already be stored there.
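A rough sketch of what that reformatting could look like; the device name and mount point below are assumptions, so double-check them against your own layout before running anything destructive:

umount /bricks/upload                 # take the old brick offline
mkfs.xfs -f -i size=512 /dev/sdb1     # fresh xfs; 512-byte inodes are commonly recommended for gluster
mount /dev/sdb1 /bricks/upload
mkdir -p /bricks/upload/upload        # recreate the brick directory inside the mount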

CAUTION: Unless they have fixed this issue, the replication process can bring a server to its knees until it is fully replicated.

3. To remove a server from your system, the process is basically reversed.  If you want to go from 5 servers to 4, you issue the command:
gluster volume remove-brick upload replica 4 server5:/bricks/upload/upload
This tells gluster to drop the brick on server5 from the volume, and confirms that you understand the volume will now carry 4 replicas instead of 5.
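Spelled out with the same placeholder names as above, that might look something like this (depending on the gluster version, the CLI may ask for confirmation or insist on 'force' when shrinking the replica count):

gluster volume remove-brick upload replica 4 server5:/bricks/upload/upload
gluster volume info upload        # should now show Number of Bricks: 1 x 4 = 4
gluster peer detach server5       # drop server5 from the pool once no volume uses its bricks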

Hope this helps.

Ted Miller
Elkhart, IN

P.S. I am not reading the list regularly, so if you need more info, it is best to copy me when sending to the list.  I just saw your request and said "I've done that, I can share some information."
