Dear all, could someone please let me know the sha1 in the git repository on which release 3.0.5 is based? Is there something special one needs to do to use the git version? Thanks!

... Matt

Background: somehow the builds I compiled from git all fail with my .vol file, though 3.0.5 works. However, patching 3.0.5 (instead of a git version) with Jeff Darcy's nufa patch also did not work.

On Jul 29, 2010, at 12:48 , Burnash, James wrote:

> Thank you very much sir!
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Raghavendra G
> Sent: Thursday, July 29, 2010 1:54 AM
> To: Gluster General Discussion List
> Subject: Re: Mirror volumes with odd number of servers
>
> It seems there is some problem in sending attachments. Please find the configuration below.
>
> server.vol:
> ===========
>
> # **** server1 spec file ****
>
> ### Export the contents of the "/data/export/1" directory.
> volume posix1
>   type storage/posix                          # POSIX FS translator
>   option directory /data/export/1             # Export this directory
> end-volume
>
> ### Add POSIX record locking support to the storage brick
> volume brick1
>   type features/posix-locks
>   option mandatory on                         # enables mandatory locking on all files
>   subvolumes posix1
> end-volume
>
> ### Add network serving capability to the above brick.
> volume server-1
>   type protocol/server
>   option transport-type tcp                   # For TCP/IP transport
>   option transport.socket.listen-port 6996    # Default is 6996
>   # option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes brick1
>   option auth.addr.brick1.allow *             # Allow access to "brick1" volume
> end-volume
>
> #=========================================================================
>
> # **** server2 spec file ****
>
> volume posix2
>   type storage/posix                          # POSIX FS translator
>   option directory /data/export/2             # Export this directory
> end-volume
>
> ### Add POSIX record locking support to the storage brick
> volume brick2
>   type features/posix-locks
>   option mandatory on                         # enables mandatory locking on all files
>   subvolumes posix2
> end-volume
>
> ### Add network serving capability to the above brick.
> volume server-2
>   type protocol/server
>   option transport-type tcp                   # For TCP/IP transport
>   option transport.socket.listen-port 6997    # Default is 6996
>   subvolumes brick2
>   option auth.addr.brick2.allow *             # Allow access to "brick2" volume
> end-volume
>
> #=========================================================================
>
> # **** server3 spec file ****
>
> volume posix3
>   type storage/posix                          # POSIX FS translator
>   option directory /data/export/3             # Export this directory
> end-volume
>
> ### Add POSIX record locking support to the storage brick
> volume brick3
>   type features/posix-locks
>   option mandatory on                         # enables mandatory locking on all files
>   subvolumes posix3
> end-volume
>
> ### Add network serving capability to the above brick.
> volume server-3
>   type protocol/server
>   option transport-type tcp                   # For TCP/IP transport
>   option transport.socket.listen-port 6998    # Default is 6996
>   subvolumes brick3
>   option auth.addr.brick3.allow *             # Allow access to "brick3" volume
> end-volume
>
> #=========================================================================
>
> # **** server4 spec file ****
>
> ### Export the contents of the "/data/export/4" directory.
> volume posix4
>   type storage/posix                          # POSIX FS translator
>   option directory /data/export/4             # Export this directory
> end-volume
>
> ### Add POSIX record locking support to the storage brick
> volume brick4
>   type features/posix-locks
>   option mandatory on                         # enables mandatory locking on all files
>   subvolumes posix4
> end-volume
>
> ### Add network serving capability to the above brick.
> volume server-4
>   type protocol/server
>   option transport-type tcp                   # For TCP/IP transport
>   option transport.socket.listen-port 6999    # Default is 6996
>   # option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes brick4
>   option auth.addr.brick4.allow *             # Allow access to "brick4" volume
> end-volume
>
> #=========================================================================
>
> # **** server5 spec file ****
>
> ### Export the contents of the "/data/export/5" directory.
> volume posix5
>   type storage/posix                          # POSIX FS translator
>   option directory /data/export/5             # Export this directory
> end-volume
>
> ### Add POSIX record locking support to the storage brick
> volume brick5
>   type features/posix-locks
>   option mandatory on                         # enables mandatory locking on all files
>   subvolumes posix5
> end-volume
>
> ### Add network serving capability to the above brick.
> volume server-5
>   type protocol/server
>   option transport-type tcp                   # For TCP/IP transport
>   option transport.socket.listen-port 7000    # Default is 6996
>   # option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes brick5
>   option auth.addr.brick5.allow *             # Allow access to "brick5" volume
> end-volume
>
> #=========================================================================
>
> distributed-replicate.vol:
> ==========================
>
> # **** Clustered Client config file ****
>
> ### Add client feature and attach to remote subvolume of server1
> volume client1
>   type protocol/client
>   option transport-type tcp                   # for TCP/IP transport
>   option remote-host 127.0.0.1                # IP address of the remote brick
>   option transport.socket.remote-port 6996    # default server port is 6996
>   option remote-subvolume brick1              # name of the remote volume
> end-volume
>
> ### Add client feature and attach to remote subvolume of server2
> volume client2
>   type protocol/client
>   option transport-type tcp                   # for TCP/IP transport
>   option remote-host 127.0.0.1                # IP address of the remote brick
>   option transport.socket.remote-port 6997    # default server port is 6996
>   option remote-subvolume brick2              # name of the remote volume
> end-volume
>
> volume client3
>   type protocol/client
>   option transport-type tcp                   # for TCP/IP transport
>   option remote-host 127.0.0.1                # IP address of the remote brick
>   option transport.socket.remote-port 6998    # default server port is 6996
>   option remote-subvolume brick3              # name of the remote volume
> end-volume
>
> volume client4
>   type protocol/client
>   option transport-type tcp                   # for TCP/IP transport
>   option remote-host 127.0.0.1                # IP address of the remote brick
>   option transport.socket.remote-port 6999    # default server port is 6996
>   option remote-subvolume brick4              # name of the remote volume
> end-volume
>
> volume client5
>   type protocol/client
>   option transport-type tcp                   # for TCP/IP transport
>   option remote-host 127.0.0.1                # IP address of the remote brick
>   option transport.socket.remote-port 7000    # default server port is 6996
>   option remote-subvolume brick5              # name of the remote volume
> end-volume
>
> ## Add replicate feature.
> volume replicate-1
>   type cluster/replicate
>   subvolumes client1 client2 client3
> end-volume
>
> ## Add replicate feature.
> volume replicate-2
>   type cluster/replicate
>   subvolumes client4 client5
> end-volume
>
> volume distribute
>   type cluster/distribute
>   subvolumes replicate-1 replicate-2
> end-volume
>
> regards,
>
> ----- Original Message -----
> From: "Raghavendra G" <raghavendra at gluster.com>
> To: "Gluster General Discussion List" <gluster-users at gluster.org>
> Sent: Thursday, July 29, 2010 9:36:42 AM
> Subject: Re: Mirror volumes with odd number of servers
>
> The previously sent volume specification files were for a replicated setup of 3 servers. The ones attached to this mail are an example of a distributed-replicated setup of 5 servers.
>
> regards,
>
> ----- Original Message -----
> From: "Raghavendra G" <raghavendra at gluster.com>
> To: "Gluster General Discussion List" <gluster-users at gluster.org>
> Sent: Thursday, July 29, 2010 9:28:33 AM
> Subject: Re: Mirror volumes with odd number of servers
>
> Hi James,
>
> Please find example volume specification files attached to this mail.
>
> regards,
>
> ----- Original Message -----
> From: "James Burnash" <jburnash at knight.com>
> To: "Gluster General Discussion List" <gluster-users at gluster.org>
> Sent: Wednesday, July 28, 2010 8:41:08 PM
> Subject: Re: Mirror volumes with odd number of servers
>
> Thanks Tejas.
>
> If an actual example of the glusterfs.vol showing this setup were available, it would be a valuable sanity check against what I will build.
>
> James Burnash, Unix Engineering
> T. 201-239-2248
> jburnash at knight.com | www.knight.com
>
> 545 Washington Ave. | Jersey City, NJ
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Tejas N. Bhise
> Sent: Wednesday, July 28, 2010 12:26 PM
> To: Gluster General Discussion List
> Subject: Re: Mirror volumes with odd number of servers
>
> James,
>
> You can do that, but you will have to hand-craft the volume file; volgen looks for even numbers, as you have already noticed.
>
> With hand-crafted volume files you can even have a replicate of distributes, e.g. 2 copies of each file, where each replicate node of the graph has a differing number of distribute servers under it.
>
> Something like this can be done to get a replica count of two with 5 servers:
>
>            | R1 |----------R1D1
> mount------|    |----------R1D2
>            |
>            | R2 |----------R2D1
>            |    |----------R2D2
>                 |----------R2D3
>
> The design of translators is so modular that they can be used in almost any combination. This, however, used to lead to confusion, and hence we developed volgen, which produces an easy-to-use, default best-fit configuration.
>
> From the perspective of "official" support, we typically only support configs that volgen produces. For other configs we do fix bugs, but how fast depends on the config and the translator seeing the bug.
>
> Let me know if you have more questions about this.
>
> Regards,
> Tejas Bhise.
>
> ----- Original Message -----
> From: "James Burnash" <jburnash at knight.com>
> To: "Gluster General Discussion List" <gluster-users at gluster.org>
> Sent: Wednesday, July 28, 2010 6:41:02 PM
> Subject: Re: Mirror volumes with odd number of servers
>
> Carl - could you possibly provide an example of a configuration using an odd number of servers? glusterfs-volgen is unhappy when you don't give it an even number.
>
> Thanks!
>
> James Burnash, Unix Engineering
> T. 201-239-2248
> jburnash at knight.com | www.knight.com
>
> 545 Washington Ave.
> | Jersey City, NJ
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Craig Carl
> Sent: Wednesday, July 28, 2010 12:16 AM
> To: Gluster General Discussion List
> Subject: Re: Mirror volumes with odd number of servers
>
> Brock -
> It is completely possible using the Gluster File System. We haven't exposed that option via the Gluster Storage Platform GUI yet.
>
> Craig
>
> --
> Craig Carl
> Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Gtalk - craig.carl at gmail.com
>
> From: "brock brown" <brownbm at u.washington.edu>
> To: gluster-users at gluster.org
> Sent: Tuesday, July 27, 2010 12:51:19 PM
> Subject: Mirror volumes with odd number of servers
>
> All the documentation refers to using multiples of 2 when setting up mirrored volumes, and the Gluster Storage Platform will not allow anything else. Is this really impossible, and if so, why?
>
> Thanks,
> Brock
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

Matt
langelino at gmx.net
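
Tejas's replicate-over-distribute graph quoted above (replica count of two across 5 servers) could be hand-crafted along the lines of the sketch below. This is only an illustrative guess in the style of the distributed-replicate.vol earlier in the thread, not a tested configuration: the volumes r1d1 through r2d3 are hypothetical protocol/client volumes, assumed to be defined like client1 through client5 above.

```
## Hypothetical client-side graph for Tejas's diagram.
## Assumes r1d1, r1d2, r2d1, r2d2, r2d3 are protocol/client
## volumes defined like client1..client5 above.

## First copy: distribute across two bricks
volume r1
  type cluster/distribute
  subvolumes r1d1 r1d2
end-volume

## Second copy: distribute across three bricks
volume r2
  type cluster/distribute
  subvolumes r2d1 r2d2 r2d3
end-volume

## The mount point replicates the two distribute legs, so every
## file exists once in r1 and once in r2 (replica count 2)
volume mount
  type cluster/replicate
  subvolumes r1 r2
end-volume
```

Since each distribute leg holds a full copy of the data, usable capacity is bounded by the smaller leg.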