Re: GlusterFS as virtual machine storage

I remember seeing errors like "Transport endpoint is not connected" in
the client logs after a ping timeout, even with an arbiter. The arbiter
does not prevent this.
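
For reference, the timeout can be tuned per volume. A sketch, assuming
a hypothetical volume named "vmstore" (the default network.ping-timeout
is 42 seconds):

    gluster volume set vmstore network.ping-timeout 10

Lowering it shortens the hang after a node drops, at the risk of
spurious disconnects on a flaky network.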

And if you end up in a situation where the arbiter blames the only
running brick for a given file, you are doomed.
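
To check whether you are already in that state, something like the
following lists the files Gluster considers split-brain (again with a
hypothetical volume name):

    gluster volume heal vmstore info split-brain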
-ps


On Wed, Aug 23, 2017 at 9:26 PM,  <lemonnierk@xxxxxxxxx> wrote:
> Really? I can't see why. But I've never used arbiter, so you probably
> know more about this than I do.
>
> In any case, with replica 3, I've never had a problem.
>
> On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote:
>> Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
>> with the default network.ping-timeout will cause the underlying VM to
>> remount its filesystem as read-only (a device error will occur) unless
>> you tune the mount options in the VM's fstab.
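>>
>> As an illustration only (a sketch for an ext4 guest root; relaxing
>> "errors=" trades integrity for availability, so treat it as an
>> example, not a recommendation):
>>
>>     /dev/vda1  /  ext4  defaults,errors=continue  0  1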
>> -ps
>>
>>
>> On Wed, Aug 23, 2017 at 6:59 PM,  <lemonnierk@xxxxxxxxx> wrote:
>> > What he is saying is that, on a two-node volume, upgrading a node will
>> > cause the volume to go down. That's nothing weird; you really should
>> > use 3 nodes.
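>> >
>> > For example, a sketch with hypothetical host and brick paths:
>> >
>> >     gluster volume create vmstore replica 3 \
>> >         node1:/bricks/vm node2:/bricks/vm node3:/bricks/vm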
>> >
>> > On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
>> >> Il 23-08-2017 18:14 Pavel Szalbot ha scritto:
>> >> > Hi, after many VM crashes during Gluster upgrades, loss of network
>> >> > connectivity on one node, etc., I would advise running replica 2
>> >> > with an arbiter.
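>> >> >
>> >> > For example, a sketch with hypothetical names (the arbiter brick
>> >> > holds only metadata, so it needs far less space than the data
>> >> > bricks):
>> >> >
>> >> >     gluster volume create vmstore replica 3 arbiter 1 \
>> >> >         node1:/bricks/vm node2:/bricks/vm arb:/bricks/arbiter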
>> >>
>> >> Hi Pavel, this is bad news :(
>> >> So, in your case at least, Gluster was not stable? Something as simple
>> >> as an update could make it crash?
>> >>
>> >> > I once even managed to break this setup (with arbiter) due to network
>> >> > partitioning - one data node never healed and I had to restore from
>> >> > backups (it was easier, and the setup was kind of non-production). Be
>> >> > extremely careful and plan for failure.
>> >>
>> >> I would use VM locking via sanlock or virtlockd, so a split brain
>> >> should not cause simultaneous changes on both replicas. I am more
>> >> concerned about volume heal time: what will happen if the standby node
>> >> crashes/reboots? Will *all* data be re-synced from the master, or will
>> >> only the changed bits be re-synced? As stated above, I would like to
>> >> avoid using sharding...
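>> >>
>> >> (For the locking part, a minimal sketch rather than a tested recipe:
>> >> set lock_manager = "lockd" in /etc/libvirt/qemu.conf on each host
>> >> and make sure the virtlockd service is running. And to watch a heal
>> >> in progress on a hypothetical "vmstore" volume:
>> >>
>> >>     gluster volume heal vmstore info
>> >>
>> >> lists the entries still pending.)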
>> >>
>> >> Thanks.
>> >>
>> >>
>> >> --
>> >> Danti Gionatan
>> >> Supporto Tecnico
>> >> Assyoma S.r.l. - www.assyoma.it
>> >> email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
>> >> GPG public key ID: FF5F32A8
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users


