Re: Gluster 3.8.10 rebalance VMs corruption

Unfortunately, Gandalf is precisely right about the point he made on data
consistency in GlusterFS.

> If gluster isn't able to ensure data consistency when doing its
>     primary role, scaling up storage, I'm sorry but it can't be
>     considered "enterprise" ready or production ready.

In my short experience with GlusterFS, I have known it to fail PRECISELY
on data consistency (data representation consistency, to be more
precise). Namely:

a) files partially replicated, or not replicated at all, due to
b) errors such as "Transport endpoint is not connected"

occurring with more or less random frequency.

I solved all these by disabling SSL. Since I disabled SSL, the system
APPEARS to be reliable.
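
For reference, the change amounted to roughly the following (a sketch from
memory, not an exact transcript; "myvol" is a placeholder volume name, and
the exact steps may differ per setup):

  # turn off TLS on the I/O path for the volume
  gluster volume set myvol client.ssl off
  gluster volume set myvol server.ssl off

  # if management-path TLS was enabled via the touch file, remove it
  # and restart glusterd on each node
  rm -f /var/lib/glusterd/secure-access
  systemctl restart glusterd

  # then remount the clients so they pick up the new settings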

To me, a system exhibiting such behavior is not a solid system.

If it's "production ready" or not, now that's a more subjective topic
and I will leave it to the arm chair computer scientists and the
philosophers.




On 3/19/2017 12:53 AM, Krutika Dhananjay wrote:
> 
> 
> On Sat, Mar 18, 2017 at 11:15 PM, Gandalf Corvotempesta
> <gandalf.corvotempesta@xxxxxxxxx> wrote:
> 
>     Krutika, it wasn't an attack directed at you.
>     It wasn't an attack at all.
> 
> 
>     Gluster is a "SCALE-OUT" software defined storage, the folllowing is
>     wrote in the middle of the homepage:
>     "GlusterFS is a scalable network filesystem"
> 
>     So, scaling a cluster is one of the primary goals of gluster.
> 
>     A critical bug that prevents gluster from being scaled without losing
>     data was discovered 1 year ago, and took 1 year to be fixed.
> 
> 
>     If gluster isn't able to ensure data consistency when doing its
>     primary role, scaling up storage, I'm sorry but it can't be
>     considered "enterprise" ready or production ready.
> 
> 
> That's not entirely true. VM use-case is just one of the many workloads
> users use Gluster for. I think I've clarified this before. The bug was
> in dht-shard interaction.
> And shard is *only* supported in VM use-case as of today. This means
> that scaling out has been working fine on all but the VM use-case.
> That doesn't mean that Gluster is not production-ready. At least users
> who've deployed Gluster in non-VM use-cases haven't complained of
> add-brick not working in the recent past.
> 
> 
> -Krutika
>  
> 
>     Maybe SOHO for small offices or home users, but in enterprises, data
>     consistency and reliability are the most important things, and gluster
>     isn't able to guarantee this even when doing a very basic routine
>     procedure that should be considered the basis of the whole gluster
>     project (as written on gluster's homepage).
> 
> 
>     2017-03-18 14:21 GMT+01:00 Krutika Dhananjay <kdhananj@xxxxxxxxxx>:
>     >
>     >
>     > On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta
>     > <gandalf.corvotempesta@xxxxxxxxx> wrote:
>     >>
>     >> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson
>     >> <lindsay.mathieson@xxxxxxxxx>:
>     >> > Concerning, this was supposed to be fixed in 3.8.10
>     >>
>     >> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
>     >> Now let's see how much time they require to fix another CRITICAL bug.
>     >>
>     >> I'm really curious.
>     >
>     >
>     > Hey Gandalf!
>     >
>     > Let's see. There have been plenty of occasions where I've sat and
>     > worked on users' issues on weekends.
>     > And then again, I've got a life too outside of work (or at least I'm
>     > supposed to), you know.
>     > (And hey you know what! Today is Saturday and I'm sitting here and
>     > responding to your mail and collecting information on Mahdi's issue.
>     > Nobody asked me to look into it. I checked the mail and I had a
>     > choice to ignore it and not look into it until Monday.)
>     >
>     > Is there a genuine problem Mahdi is facing? Without a doubt!
>     >
>     > Got a constructive feedback to give? Please do.
>     > Do you want to give back to the community and help improve
>     > GlusterFS? There are plenty of ways to do that.
>     > One of them is testing out the releases and providing feedback.
>     > Sharding wouldn't have worked today, if not for Lindsay's timely
>     > and regular feedback in several 3.7.x releases.
>     >
>     > But this kind of criticism doesn't help.
>     >
>     > Also, spending time on users' issues is only one of the many
>     > responsibilities we have as developers.
>     > So what you see on mailing lists is just the tip of the iceberg.
>     >
>     > I have personally tried several times to recreate the add-brick bug
>     > on 3 machines I borrowed from Kaleb. I haven't had success in
>     > recreating it.
>     > Reproducing VM-related bugs, in my experience, isn't easy. I don't
>     > use Proxmox. Lindsay and Kevin did. There are myriad qemu options
>     > used when launching VMs. Different VM management projects
>     > (oVirt/Proxmox) use different defaults for these options. There are
>     > too many variables to be considered when debugging or trying to
>     > simulate the users' test.
>     >
>     > It's why I asked for Mahdi's help before 3.8.10 was out, for
>     > feedback on the fix:
>     >
>     > http://lists.gluster.org/pipermail/gluster-users/2017-February/030112.html
>     >
>     > Alright. That's all I had to say.
>     >
>     > Happy weekend to you!
>     >
>     > -Krutika
>     >
>     >> _______________________________________________
>     >> Gluster-users mailing list
>     >> Gluster-users@xxxxxxxxxxx
>     >> http://lists.gluster.org/mailman/listinfo/gluster-users
>     >
>     >
> 
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
