Newbie Question: GlusterFS on Compute Cluster?

On Fri, 2013-05-10 at 18:56 -0400, Michael Brown wrote:
> OK, if you want to be cool, use an IPv6 multicast group :)
Hmmm... I'm embarrassed to say I haven't done enough IPv6 to know about
this sort of cool trick. Does this really work? Do you have any more
info?

James
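
For the curious: this can work. VRRPv3 (RFC 5798) covers IPv6, and its
advertisements go to the link-local multicast group ff02::12, which is
presumably the multicast group meant above. keepalived can carry an IPv6
virtual address the same way it does an IPv4 one. A minimal sketch, with
hypothetical interface and address values:

    # /etc/keepalived/keepalived.conf -- illustrative values only
    vrrp_instance gluster_vip6 {
        state MASTER              # peers would be configured as BACKUP
        interface eth0            # interface facing the clients
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            fd00:10::100/64       # IPv6 VIP clients would mount from
        }
    }

Whichever node holds MASTER answers on the VIP, and a BACKUP node takes
the address over on failure. (Whether the Gluster client side is happy
over IPv6 depends on the build in use.)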

> 
> 
> M.
> -- 
> Michael Brown | `One of the main causes of the fall of
> Systems Consultant | the Roman Empire was that, lacking zero,
> Net Direct Inc. | they had no way to indicate successful
> Phone: +1 519 883 1172 x5106 | termination of their C programs.' - Firth
> 
> 
> From: James
> Sent: Friday, May 10, 2013 18:54
> To: mozes at ksu.edu
> Reply To: James
> Cc: gluster-users at gluster.org
> Subject: Re: Newbie Question: GlusterFS on Compute Cluster?
> 
> 
> On Fri, May 10, 2013 at 6:45 PM, Adam Tygart <mozes at k-state.edu> wrote:
> > Randy,
> >
> > On my compute cluster we use round-robin DNS (for HA of the volume
> > definition) and mount the GlusterFS volume via the FUSE (native)
> > client. All of the I/O would go directly to the nodes, rather than
> > through an intermediary (NFS) server.
> I've mentioned this once before, but in my opinion, using something
> like VRRP (e.g. keepalived) is better than using round-robin DNS. Also,
> it's cooler.
> 
> James
> 
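
Whichever mechanism hands out the stable name or address (round-robin
DNS or VRRP), the native-client mount Adam describes comes down to
something like the following, using a hypothetical round-robin hostname
gluster-rr.example.com:

    # one-off mount on a compute node
    mount -t glusterfs gluster-rr.example.com:/gv1 /mnt/gv1

    # or persistently, in /etc/fstab
    gluster-rr.example.com:/gv1  /mnt/gv1  glusterfs  defaults,_netdev  0 0

The hostname is only consulted at mount time to fetch the volume
definition; after that the FUSE client opens connections to every brick
and does I/O directly, which is exactly why no intermediary NFS server
sits in the data path.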
> >
> > --
> > Adam Tygart
> > Beocat Sysadmin
> > www.beocat.cis.ksu.edu
> >
> >
> > On Fri, May 10, 2013 at 5:38 PM, Bradley, Randy
> > <Randy.Bradley at ars.usda.gov> wrote:
> >>
> >>
> >> I've got a 24-node compute cluster. Each node has one extra terabyte
> >> drive. It seemed reasonable to install Gluster on each of the compute
> >> nodes and the head node. I created a volume from the head node:
> >>
> >> gluster volume create gv1 rep 2 transport tcp \
> >>     compute000:/export/brick1 compute001:/export/brick1 \
> >>     compute002:/export/brick1 compute003:/export/brick1 \
> >>     compute004:/export/brick1 compute005:/export/brick1 \
> >>     compute006:/export/brick1 compute007:/export/brick1 \
> >>     compute008:/export/brick1 compute009:/export/brick1 \
> >>     compute010:/export/brick1 compute011:/export/brick1 \
> >>     compute012:/export/brick1 compute013:/export/brick1 \
> >>     compute014:/export/brick1 compute015:/export/brick1 \
> >>     compute016:/export/brick1 compute017:/export/brick1 \
> >>     compute018:/export/brick1 compute019:/export/brick1 \
> >>     compute020:/export/brick1 compute021:/export/brick1 \
> >>     compute022:/export/brick1 compute023:/export/brick1
> >>
> >> And then I mounted the volume on the head node. So far, so good.
> >> Approx. 10 TB available (24 x 1 TB bricks, halved by replica 2, less
> >> filesystem overhead).
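
As an aside, the standard CLI checks for a volume like the one above
(the volume name gv1 is taken from the message; everything else is stock
gluster):

    gluster volume start gv1    # required once before any client can mount
    gluster volume info gv1     # should show Type: Distributed-Replicate,
                                # Number of Bricks: 12 x 2 = 24
    gluster volume status gv1   # confirms every brick process is online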
> >>
> >> Now I would like each compute node to be able to access files on this
> >> volume. Would this be done by NFS mount from the head node to the
> >> compute nodes, or is there a better way?
> >>
> >>
> >> Thanks,
> >>
> >> Randy
> >>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
