Newbie Question: GlusterFS on Compute Cluster?


 



Pointing the node to mount from itself will certainly work. I used rrdns
because I've seen race conditions where glusterd has 'started' but is not
yet ready to accept mount requests by the time the netmount service runs.
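
One way to mitigate that race is to give the FUSE client fallback volfile
servers, so the mount can still succeed if the first server it asks isn't
ready yet. A sketch of an fstab entry, assuming a round-robin DNS name
`gluster-rr`, hypothetical fallback hosts `compute001`/`compute002`, and a
client new enough to support the `backup-volfile-servers` mount option:

```
# /etc/fstab -- hostnames and mount point are illustrative
# _netdev delays the mount until networking is up;
# backup-volfile-servers lists fallback hosts for fetching the volfile
gluster-rr:/gv1  /mnt/gv1  glusterfs  defaults,_netdev,backup-volfile-servers=compute001:compute002  0 0
```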

--
Adam


On Sun, May 12, 2013 at 11:14 AM, <elvinas.piliponis at barclays.com> wrote:

> Hello Adam,
>
> > On my compute cluster we use round-robin dns (for HA of the volume
> > definition) and mount the GlusterFS volume via the FUSE (native) client
>
> Can you please give more detail on this? I have a somewhat similar
> compute-storage setup and have pointed each compute node to itself as the
> GlusterFS volume server via a FUSE mount. Would there be any difference?
>
> Thank you
>
> *From:* gluster-users-bounces at gluster.org [mailto:
> gluster-users-bounces at gluster.org] *On Behalf Of* Adam Tygart
> *Sent:* 11 May 2013 01:46
> *To:* gluster-users at gluster.org
>
> *Subject:* Re: Newbie Question: GlusterFS on Compute Cluster?
>
>
> Randy,
>
> On my compute cluster we use round-robin dns (for HA of the volume
> definition) and mount the GlusterFS volume via the FUSE (native) client.
> All of the I/O would go directly to the nodes, rather than through an
> intermediary (NFS) server.
>
>
> --
> Adam Tygart
> Beocat Sysadmin
> www.beocat.cis.ksu.edu
>
>
> On Fri, May 10, 2013 at 5:38 PM, Bradley, Randy <
> Randy.Bradley at ars.usda.gov> wrote:
>
>
> I've got a 24 node compute cluster. Each node has one extra terabyte
> drive. It seemed reasonable to install Gluster on each of the compute
> nodes and the head node. I created a volume from the head node:
>
>
> gluster volume create gv1 rep 2 transport tcp compute000:/export/brick1
> compute001:/export/brick1 compute002:/export/brick1
> compute003:/export/brick1 compute004:/export/brick1
> compute005:/export/brick1 compute006:/export/brick1
> compute007:/export/brick1 compute008:/export/brick1
> compute009:/export/brick1 compute010:/export/brick1
> compute011:/export/brick1 compute012:/export/brick1
> compute013:/export/brick1 compute014:/export/brick1
> compute015:/export/brick1 compute016:/export/brick1
> compute017:/export/brick1 compute018:/export/brick1
> compute019:/export/brick1 compute020:/export/brick1
> compute021:/export/brick1 compute022:/export/brick1
> compute023:/export/brick1
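
[A 24-brick command like the one above is tedious to type out; the brick
list can be generated with a short shell loop. A sketch, which only prints
the resulting command so it can be reviewed before being run by hand:]

```shell
# Build the 24 brick arguments compute000..compute023 programmatically
bricks=""
for i in $(seq -w 0 23); do          # -w zero-pads: 00..23
  bricks="$bricks compute0$i:/export/brick1"
done
# Print the command for review rather than executing it directly
echo gluster volume create gv1 replica 2 transport tcp $bricks
```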
>
>
> And then I mounted the volume on the head node. So far, so good. Approx.
> 10 TB available.
>
>
> Now I would like each compute node to be able to access files on this
> volume. Would this be done by NFS mount from the head node to the compute
> nodes, or is there a better way?
>
>
> Thanks,
>
> Randy
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>

