Hi.
I'm also migrating to BeeGFS and CephFS (depending on usage).
What I liked most about Gluster was that files were easily recoverable
from the bricks even after a disaster, and that it claimed to support RDMA.
But I soon found that RDMA was being phased out, and I keep finding
entries that do not heal after a couple of months of (not really heavy)
use, directories that cannot be removed because not all of their files
have been deleted from all the bricks, and files or directories that
become inaccessible for no apparent reason.
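To make the "log digging" concrete: these are the standard Gluster CLI commands for inspecting pending and split-brain heal entries (the volume name `gv0` and the brick path below are illustrative, not from the original report):

```shell
# List entries pending heal on each brick of the volume.
gluster volume heal gv0 info

# Per-brick summary counts of pending/split-brain entries.
gluster volume heal gv0 info summary

# Entries in split-brain, a common cause of "inaccessible" files.
gluster volume heal gv0 info split-brain

# Inspect the AFR changelog xattrs directly on a brick to see which
# replica is blamed (brick path here is purely illustrative).
getfattr -d -m . -e hex /data/brick1/gv0/path/to/stuck-file
```

When `heal info` keeps reporting the same entries across heal cycles, the xattr dump on the bricks is usually the only way to work out which copy is considered stale.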
Given that I currently have 3 nodes with 30 12TB disks each in replica 3
arbiter 1, this has become a major showstopper: I can't stop production,
back everything up and restart from scratch every 3-4 months. And there
are no tools to help with diagnosis, just log digging :( Even at version
9.6 it doesn't seem really "production ready"... more like v0.9.6 IMVHO.
And now that it's being EOLed, things are even worse.
Diego
On 27/10/2023 09:40, Zakhar Kirpichenko wrote:
Hi,
Red Hat Gluster Storage is EOL and Red Hat moved the Gluster devs to
other projects, so Gluster doesn't get much attention. In my experience
it has deteriorated since about version 9.0, and we're migrating to
alternatives.
/Z
On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén <marcus.pedersen@xxxxxx> wrote:
Hi all,
I just have a general thought about the Gluster project.
I have got the feeling that things have slowed down.
I have had a look at GitHub, and to me the project seems
to be slowing down: for Gluster version 11 there have been
no minor releases; we are still on 11.0 and I have not
found any references to 11.1.
There is a milestone called 12, but it seems to be stale.
I have hit this issue:
https://github.com/gluster/glusterfs/issues/4085
which seems to have no solution.
I noticed when version 11 was released that the OP version
could not be bumped to 11, and I reported this, but it is
still not possible.
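For anyone hitting the same thing, the cluster operating version is inspected and bumped with the commands below. The value 110000 is the op-version Gluster uses for release 11 (op-versions follow the pattern 90000, 100000, 110000); per the issue above, the final bump step is the one that does not work:

```shell
# Current cluster operating version.
gluster volume get all cluster.op-version

# Highest op-version this cluster believes it can support.
gluster volume get all cluster.max-op-version

# Attempt to bump the cluster to the release-11 op-version
# (110000); this only succeeds if all servers accept it.
gluster volume set all cluster.op-version 110000
```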
I am just wondering if I am missing something here?
We have been using Gluster in production for many years,
and I think Gluster is great! It has served us well over
the years, and we have seen some great improvements in
stability and speed.
So is there something going on, or have I got
the wrong impression (and feeling)?
Best regards
Marcus
---
E-mailing SLU will result in SLU processing your personal data. For
more information on how this is done, click here
<https://www.slu.se/en/about-slu/contact-slu/personal-data/>
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786