Mounting Gluster volume on multiple clients.

No. I write to the same volume from many clients at the same time all day.

You just can't write to the same file in a volume at the same time (without
using POSIX locks).
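The POSIX-lock caveat above can be sketched in Python with the standard `fcntl` module. This is a minimal illustration, not Gluster-specific code; in practice the file would live on the mounted volume (e.g. a hypothetical /mnt/gluster/shared.log), but a temp file is used here so the sketch runs anywhere:

```python
import fcntl
import os
import tempfile

# In production this would be a file on the mounted Gluster volume,
# e.g. /mnt/gluster/shared.log (hypothetical path); a temp file is
# used here so the example is self-contained.
path = os.path.join(tempfile.gettempdir(), "shared.log")

with open(path, "a") as f:
    # Take an exclusive advisory POSIX lock. Other cooperating clients,
    # on any node that mounts the same volume, would block in lockf()
    # until this lock is released, serializing writes to the file.
    fcntl.lockf(f, fcntl.LOCK_EX)
    try:
        f.write("entry from pid %d\n" % os.getpid())
        f.flush()
    finally:
        # Release the lock so other clients can proceed.
        fcntl.lockf(f, fcntl.LOCK_UN)
```

Note these are advisory locks: they only coordinate clients that also call `lockf()`; a process that skips the lock can still write concurrently.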


On Sat, Sep 28, 2013 at 9:37 PM, Bobby Jacob <bobby.jacob at alshaya.com> wrote:

> Hi,
>
> Again, my query is: "When multiple clients write to the same volume, will
> it create any issues?"
>
> Thanks & Regards,
> Bobby Jacob
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:
> gluster-users-bounces at gluster.org] On Behalf Of Robert Hajime Lanning
> Sent: Thursday, September 26, 2013 8:16 PM
> To: gluster-users at gluster.org
> Subject: Re: Mounting Gluster volume on multiple clients.
>
> On 09/26/13 07:51, Bobby Jacob wrote:
> > We will not have a situation where we need to access the same file
> > from different clients.
> > We are looking for a web application which will be deployed on 2
> > servers. Both these servers will mount the gluster volume and this
> > mount point will act as the data directory for the applications.
> > The users will login to the load balanced application servers and the
> > application creates separate folders for each user.
> >
> > My concern is: if multiple users log in to both application servers
> > and read/write data, will this affect the synchronization of data
> > between the underlying bricks?
>
> In a healthy volume, the replication is synchronous. The client writes to
> all replicas at the same time.
>
> --
> Mr. Flibble
> King of the Potato People
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>

