On 07/01/2014 05:13 PM, Jonathan Barber wrote:
Hello all,
I'm investigating GlusterFS+Swift for use in a "large" (starting at
~150TB) scale-out file system for storing and serving photographic images.
Currently I'm thinking of using servers with JBODs and it's clear how to
use Gluster's replication sets to give resiliency at the server level.
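As a concrete sketch (hostnames and paths below are just placeholders):
with "replica 2", every two consecutive bricks on the command line form
one replica set, so listing one brick from each server mirrors the data
across servers:

# One brick per server; consecutive bricks form a replica pair,
# giving server-level resiliency:
gluster volume create imgvol replica 2 \
    server1:/d/brick1 server2:/d/brick1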
However, I'd like to have multiple bricks per server (with a brick per
drive controller), and the replication sets then start to look more
complicated to manage. Also, when it comes to expanding the solution in
the future, I reckon I will be adding bricks of different sizes, with
different numbers of bricks per server - further complicating management.
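For example, ordering the brick list so that each consecutive pair
spans both servers keeps every replica set cross-server, and expansion
means adding further cross-server pairs (names are again illustrative):

# Two bricks per server, ordered so each replica pair spans servers:
gluster volume create imgvol replica 2 \
    server1:/d/brick1 server2:/d/brick1 \
    server1:/d/brick2 server2:/d/brick2
# Later expansion adds bricks in cross-server pairs as well:
gluster volume add-brick imgvol server3:/d/brick1 server4:/d/brick1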
So, I was wondering if there is support for (or plans for) failure
domains (like Oracle ASM's failure groups) which would allow you to
describe groups of bricks within which replicas can't be co-located?
(E.g. all bricks from the same server are placed in the same failure
domain, meaning that no two replicas of the same data are allowed
within that group of bricks.)
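Purely as a hypothetical illustration of the kind of thing I mean (this
syntax does not exist in GlusterFS):

# HYPOTHETICAL syntax, for discussion only -- not a real gluster option:
# tag each brick with a failure domain so that the volume refuses to
# place two replicas of the same file inside one domain.
gluster volume create imgvol replica 2 \
    fd1:server1:/d/brick1 fd2:server2:/d/brick1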
A warning is already displayed in the CLI when bricks from the same
server are specified as part of the same replica set:
[root@deepthought lo]# gluster volume create myvol replica 2
deepthought:/d/brick1 deepthought:/d/brick2
Multiple bricks of a replicate volume are present on the same server.
This setup is not optimal.
Do you still want to continue creating the volume? (y/n) n
Volume create failed
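The warning is advisory (answering "y" still creates the volume), and
it is not raised when the bricks in a replica set come from different
servers, e.g. (the second hostname is made up):

gluster volume create myvol replica 2 \
    deepthought:/d/brick1 deepthought2:/d/brick1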
What other policies would you be interested in associating with failure
domains?
Regards,
Vijay