3.1.x Setup: distributed replicated volumes across 6 servers, each with 24 drives


 



If I may ask, is there a reason you're not putting the 24 drives on each
server into RAID 5/6 arrays and doing distribute+replicate over just:

clustr-01:/mnt/data clustr-02:/mnt/data clustr-03:/mnt/data
clustr-04:/mnt/data clustr-05:/mnt/data clustr-06:/mnt/data
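A full create command for that simpler layout might look like this (a sketch, not tested on your cluster; it assumes the RAID arrays are already built and mounted at /mnt/data on each server, and that with replica 2 gluster pairs the bricks in the order they are listed):

```shell
# Sketch: one RAID-backed brick per server, replicated in listed-order pairs
gluster volume create dist-datastore replica 2 transport tcp \
    clustr-01:/mnt/data clustr-02:/mnt/data \
    clustr-03:/mnt/data clustr-04:/mnt/data \
    clustr-05:/mnt/data clustr-06:/mnt/data
gluster volume start dist-datastore
```

That's 6 bricks instead of 144, which keeps volume management and self-heal much simpler, at the cost of letting the RAID controller handle drive failures instead of gluster.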

But to answer your question: I think your config looks good, and to mount
you would just point the native client at any one of the servers:

mount -t glusterfs clustr-01:/dist-datastore /<mountpoint>
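For a persistent mount in 3.1.x the native client also works from /etc/fstab; a sketch, assuming the volume is named dist-datastore and /mnt/cluster already exists:

```
clustr-01:/dist-datastore  /mnt/cluster  glusterfs  defaults,_netdev  0  0
```

Then `mount -a` (or `mount /mnt/cluster`) picks it up; the client fetches the volume file from clustr-01 and connects to all six servers itself, so the named server is only needed at mount time.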


On 12/22/2010 07:38 AM, phil cryer wrote:
> I have 6 servers, each with 24 drives, and I'm upgrading to 3.1.x and
> want to redo my configuration from scratch. Really interested in some
> of the new options and configurations in 3.1, so now I want to get it
> setup right from the start. From this
> page: http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Configuring_Distributed_Replicated_Volumes
> I see this distributed, replicated, 6 server example:
> # gluster volume create test-volume replica 2 transport tcp
> server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5
> server6:/exp6
>
> Then from there I see an example more in line with what I'm trying to
> do here, but using 4 nodes:
> http://gluster.org/pipermail/gluster-users/2010-December/006001.html
> # gluster volume create vol1 replica 2 transport tcp
> server1:/mnt/array1 server2:/mnt/array1 server3:/mnt/array1
> server4:/mnt/array1 server1:/mnt/array2 server2:/mnt/array2
> server3:/mnt/array2 server4:/mnt/array2 server1:/mnt/array3
> server2:/mnt/array3 server3:/mnt/array3 server4:/mnt/array3
> server1:/mnt/array4
>
> So, if I have the following servers:
> clustr-01
> clustr-02
> clustr-03
> clustr-04
> clustr-05
> clustr-06
>
> and all of my drives mounted under:
> /mnt/data01
> /mnt/data02
> /mnt/data03
> /mnt/data04
> /mnt/data05
> [...]
> /mnt/data24
>
> Should I issue a command like this to set it up:
>
> gluster volume create dist-datastore replica 2 transport tcp \
> clustr-01:/mnt/data01 clustr-02:/mnt/data01 clustr-03:/mnt/data01 \
> clustr-04:/mnt/data01 clustr-05:/mnt/data01 clustr-06:/mnt/data01 \
> clustr-01:/mnt/data02 clustr-02:/mnt/data02 clustr-03:/mnt/data02 \
> clustr-04:/mnt/data02 clustr-05:/mnt/data02 clustr-06:/mnt/data02 \
> clustr-01:/mnt/data03 clustr-02:/mnt/data03 clustr-03:/mnt/data03 \
> clustr-04:/mnt/data03 clustr-05:/mnt/data03 clustr-06:/mnt/data03 \
> [...]
> clustr-01:/mnt/data24 clustr-02:/mnt/data24 clustr-03:/mnt/data24 \
> clustr-04:/mnt/data24 clustr-05:/mnt/data24 clustr-06:/mnt/data24
>
> So that each /mnt/dataxx is replicated and distributed across all 6 nodes?
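One thing worth noting here (my reading of the 3.1 docs, not something stated in this thread): with replica 2, gluster does not replicate each brick across all 6 nodes. It takes the bricks two at a time, in the order given, so each line above produces three replica pairs (01+02, 03+04, 05+06), and the pairs are then distributed over. A quick shell sketch of how one such line pairs up:

```shell
# Group a 6-brick line into replica-2 sets, in command-line order.
# (Illustration only; brick names taken from the example above.)
set -- clustr-01 clustr-02 clustr-03 clustr-04 clustr-05 clustr-06
pairs=""
while [ "$#" -ge 2 ]; do
    pairs="$pairs$1+$2 "    # consecutive bricks form one replica set
    echo "replica set: $1:/mnt/data01 <-> $2:/mnt/data01"
    shift 2
done
```

So if you want copies to land on different servers, make sure no two consecutive bricks in the list sit on the same host.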
>
> Then, once this is completed successfully, how do I map all
> /mnt/data01-24 to one mount point, say /mnt/cluster for example?
> Before I would have added this to /etc/fstab and done `mount -a`
> /etc/glusterfs/glusterfs.vol  /mnt/cluster  glusterfs  defaults  0  0
>
> Is there a better way in 3.1.x; should I use mount.glusterfs, or something else?
>
> Thanks
>
> P
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


