replica count documentation

>
> > > Assuming I have my replica count set to 2 with my 2-brick setup, it sure
> > > isn't strict. I've waited 15 mins after the write to one brick had
> > > finished before it was actually synced up on the second.
> >
> > There is no consistency level. The response is returned to the user
> > after the slowest write is done. Writes occur synchronously to all the
> > servers in the replica set.
> Something doesn't match up here. If it can take 15 minutes for a write to
> replicate to the second brick, as the first comment says, then per the
> second comment it should take that whole 15 minutes for control to be
> returned to the user.
>

You're right, and I think it was my fault. Say you make a volume like so:

gluster volume create test-volume replica 2 192.168.0.150:/test-volume
192.168.0.151:/test-volume

A naive person like myself would see the /test-volume directory on each
server's filesystem and think they could write straight into it. I believe
this is wrong.

Someone please verify: even on the storage servers themselves, you must
mount the gluster volume and access it through that mount point (e.g.
/mnt/test-volume), and NOT through the brick directory (e.g. /test-volume)
that you declared when you created the volume.
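
If that's right, the working pattern would be something like the following,
reusing the address and volume name from the example above (the mount point
/mnt/test-volume and the file name are just placeholders; the client mount
should behave the same whether or not the machine also hosts a brick):

mkdir -p /mnt/test-volume
mount -t glusterfs 192.168.0.150:/test-volume /mnt/test-volume
# all reads and writes go through the client mount point...
cp somefile /mnt/test-volume/
# ...and never directly into the brick directory /test-volume itself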

If this is the case, I think documentation whose volume-creation examples
use brick paths like /test-volume invites exactly this confusion, because
the brick path looks like something you should be writing to.

---

> Despite that I'm trying to work out the logic of how to put an HA KVM setup
> on a 2-unit replicated Gluster system.


Same use case here.

--

As for determining how many replicas a volume was created with, here is what
I've found so far:

For a volume created without the replica parameter, gluster volume info shows:

> Volume Name: test-volume2
> *Type: Distribute*
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.0.150:/test-volume2
> Brick2: 192.168.0.151:/test-volume2


For a volume created with replica 2, gluster volume info shows the
following. Note the type difference.

> Volume Name: test-volume
> *Type: Replicate*
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.0.150:/test-volume
> Brick2: 192.168.0.151:/test-volume



Now, if a volume is replicating, how do you tell how many replicas it has?
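
My best guess so far: for a plain Replicate volume like the one above, every
brick holds a full copy, so the replica count is simply the brick count - 2
here. I don't think that shortcut holds for a mixed distributed-replicated
layout, though, where the info output alone doesn't say how the bricks are
grouped. The closest thing to a check I have is eyeballing the two fields
together:

gluster volume info test-volume | grep -E 'Type|Number of Bricks'

If anyone knows a way to read the replica count back out of gluster directly,
please share it.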


Craig Younkins


On Thu, Apr 14, 2011 at 5:51 PM, Whit Blauvelt
<whit.gluster at transpect.com> wrote:

> On Thu, Apr 14, 2011 at 02:20:02PM -0700, Mohit Anchlia wrote:
>
> > Which similar systems are you referring to?
>
> DRBD has an overlapping use case, although it's pretty much constrained to
> mirroring between two storage systems. And it has options to return to the
> user after either writing to the first system, or only after writing to the
> second - or even _before_ writing to the first if you trust that once it has
> the data it will get there.
>
> DRBD's also more typically used in primary-secondary arrangements for
> failover than in primary-primary arrangements for load sharing. But there
> are ways to do the second. I've got one pair of servers mirrored through
> DRBD providing file storage via NFS, and another pair now running Gluster,
> also providing NFS service. Both are doing well so far, although my Gluster
> use is only a few days old, whereas DRBD's been running happily for months.
>
> Gluster is easier to configure, as DRBD takes cryptic commands and
> requires combination with other daemons to get the job done. But DRBD is
> very good at what it does, and well documented - except that you've got to
> cobble together its documentation with that of the other daemons you're
> integrating it with to get your result, which can be a bit of a mental
> stretch (at least for me).
>
> DRBD also, being at the device level and in the kernel, has advantages in
> stacking with other stuff. I can put a KVM guest image in an LVM logical
> volume on DRBD - and KVM guests do well on logical volumes, both running
> efficiently on them and allowing LVM snapshotting - it's a better format
> than qcow. As I understand it I can't put an LVM on top of Gluster -
> although Gluster has no problem using ext4-formatted logical volumes as
> bricks, I doubt it can do anything with logical volumes used directly as
> KVM guest disks, with no file system in between.
>
> Despite that I'm trying to work out the logic of how to put an HA KVM setup
> on a 2-unit replicated Gluster system. It should be possible to get both
> failover and live-migration failback going, once I get the concepts and
> incantations right. While KVM is solid, its docs are terse and incomplete.
> That seems to be a trend. Book publishers no longer rush out books on the
> latest tech, and free software creators who hold back instructions improve
> their chance of consulting contracts.
>
> Whit
>
>
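
For reference, the write-acknowledgement options Whit describes above roughly
correspond to DRBD's "protocol" setting in drbd.conf. A minimal sketch of a
two-node resource, with hypothetical hostnames and backing devices (only the
addresses reuse the example IPs from earlier in the thread):

resource r0 {
    protocol C;    # C: ack only after the peer has written the data to its disk
                   # B: ack once the data has reached the peer (in memory)
                   # A: ack after the local write; the peer catches up asynchronously
    on alpha {                         # hostname is hypothetical
        device    /dev/drbd0;
        disk      /dev/sdb1;           # backing device is hypothetical
        address   192.168.0.150:7789;
        meta-disk internal;
    }
    on bravo {                         # hostname is hypothetical
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.151:7789;
        meta-disk internal;
    }
}

Protocol C gives the same "return only after the slowest write" behaviour
described for Gluster's replication earlier in the thread; A and B trade some
of that safety for lower write latency.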

