Running glusterfsd and glusterfs on the same node is definitely not bad practice. The nufa scheduler (to be used with the unify translator) was written with exactly such a scenario in mind, where a group of HPC machines each export a 'piece' of the storage and all of them get a combined shared mount. To build both glusterfsd and glusterfs you just have to ./configure and make install; both are installed by default.

avati
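To make that concrete, a per-node client spec for such a unify + nufa setup could look roughly like the sketch below, assuming each node also runs glusterfsd with a server spec that exports its posix volume as 'brick'. The hostnames, volume names and the /home/export path are only placeholders, and option names have shifted between releases, so treat this as an outline rather than a drop-in config:

  # this node's local piece of the storage
  volume local
    type storage/posix
    option directory /home/export
  end-volume

  # the pieces exported by the other nodes (placeholder hostnames)
  volume node2
    type protocol/client
    option transport-type tcp/client
    option remote-host node2.example
    option remote-subvolume brick
  end-volume

  volume node3
    type protocol/client
    option transport-type tcp/client
    option remote-host node3.example
    option remote-subvolume brick
  end-volume

  # combined shared view; nufa prefers the local piece for new files
  volume unify0
    type cluster/unify
    subvolumes local node2 node3
    option scheduler nufa
    option nufa.local-volume-name local
    # recent 1.3 builds may also expect a namespace volume for unify
  end-volume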
2007/6/4, Brandon Lamb <brandonlamb@xxxxxxxxx>:

On 6/3/07, James Porter <jameslporter@xxxxxxxxx> wrote:
> That is a good question, and how would you compile glusterfs and glusterfsd?
>
> On 6/3/07, Brandon Lamb <brandonlamb@xxxxxxxxx> wrote:
> >
> > I was wondering if there was any input on best practices for setting up
> > a 2 or 3 server cluster.
> >
> > My question has to do with where to run glusterfsd (server) and where
> > to run glusterfs (mounting as a client).
> >
> > Should I have the servers that are actually handling the drives and
> > exporting the glusterfs ONLY act as servers?
> >
> > i.e. should I run glusterfsd on 2 or 3 servers and then need another 1+
> > client machines that mount?
> >
> > Or can I safely have the server and client running on the same machines?
> >
> > Does that make sense? Not sure how else to ask it haha

I have a 2 server test setup and I just did a ./configure in the glusterfs-1.3.0-pre4 directory.

Both run glusterfsd exporting /home/export, and both connect as clients to themselves and to the other, using the AFR translator with *:2.

It seems to work, but I don't know if this is good practice or if I am asking for trouble or what.

Also, does this list have a top or bottom post preference? Top is more natural but some lists are picky. =P

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
-- Anand V. Avati
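For the two-box AFR setup described above (each box running glusterfsd and also mounting as a client), the specs usually end up looking something like the following sketch. The IP addresses are placeholders and the option names are from the 1.3-era examples, so double-check them against the version you are actually running.

  # server spec (glusterfsd), identical on both boxes
  volume brick
    type storage/posix
    option directory /home/export
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    subvolumes brick
    option auth.ip.brick.allow *   # wide open; restrict to your subnet in practice
  end-volume

  # client spec (glusterfs mount), also identical on both boxes
  volume box1
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.1   # placeholder address of box 1
    option remote-subvolume brick
  end-volume

  volume box2
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.2   # placeholder address of box 2
    option remote-subvolume brick
  end-volume

  volume afr0
    type cluster/afr
    subvolumes box1 box2
    option replicate *:2             # keep two copies of every file
  end-volume

Each box would then start the server with something like glusterfsd -f server.vol and mount with glusterfs -f client.vol /mnt/glusterfs (file names and mount point are again placeholders).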