Re: State of Gluster project

Hm... Actually, comparing Ceph to Gluster is like comparing apples to pears.

I have played with both, and I can say that Ceph is optimised for disk images - which is a benefit for OpenStack and virtualization - but Ceph is harder to learn and lacks some features of Gluster.

For example, Gluster has:
1. On Fedora-based systems, you can use Virtual Data Optimizer (VDO) for data deduplication or compression;
on Debian-based systems, ZFS on Linux (ZoL) is an option.
2. Geo-replication - very useful for setting up disaster recovery (DR).
3. In replica and dispersed volumes, you can lose any node (sometimes more than one) without worrying about the type of the system - all nodes have the same role and purpose.
4. Gluster clients are more diverse - it is nice that Windows (CIFS/SMB), macOS (SMB), Linux (FUSE/NFS) and the BSDs (NFS) can all use the same volume.
5. Easier setup with fewer nodes - Gluster is easy to set up and, with sharding, is great even for virtualization. The client uses the distributed hashing algorithm (DHT) to locate a file without needing to query another server (see the quick sketch below).
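
For illustration, a minimal setup along those lines looks roughly like this (the volume name, hostnames and brick paths are just placeholders, and geo-replication additionally needs SSH keys and a prepared slave volume):

    # 3-way replica volume across three example nodes
    gluster volume create vmstore replica 3 \
        node1:/bricks/vmstore/brick node2:/bricks/vmstore/brick node3:/bricks/vmstore/brick
    gluster volume start vmstore

    # Enable sharding so large files (e.g. VM images) are split into smaller pieces
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB

    # Geo-replication to a remote site for DR
    gluster volume geo-replication vmstore drsite::vmstore-dr create push-pem
    gluster volume geo-replication vmstore drsite::vmstore-dr start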

Of course, Gluster has its own drawbacks, but the software is well suited for HPC and archive storage. If you know the path to your file, Gluster will retrieve it quite fast.
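
For example, from a FUSE mount you can ask for a file's location directly - a small sketch, where the mount point and file name are just examples:

    # The pathinfo virtual xattr reveals which brick(s) hold the file;
    # the client's hashing (DHT) determines the location rather than a lookup on every server
    getfattr -n trusted.glusterfs.pathinfo /mnt/gv0/archive/sample.dat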

I have to mention that in the past years (since Dec 2017, to be more precise) I have had only 2 major issues - always during upgrades. When I compare that with another environment using enterprise-class storage arrays (in cluster mode), which also failed once after an update over the same 3-year period, the issue rate is not so bad - and the price for Gluster is not in the millions :)

Best Regards,
Strahil Nikolov


On 17 June 2020 at 19:15:00 GMT+03:00, Erik Jacobson <erik.jacobson@xxxxxxx> wrote:
>> It is very hard to compare them because they are structurally very
>> different. For example, GlusterFS performance will depend *a lot* on
>> the underlying file system performance. Ceph eliminated that factor
>> by using BlueStore.
>> Ceph is very well performing for VM storage, since it is block based
>> and as such optimized for that. I haven't tested CephFS a lot (I used
>> it, but only for very small storage), so I cannot speak for its
>> performance, but I am guessing it's not ideal. For large amounts of
>> files, GlusterFS is thus still a good choice.
>
>
>Was your experience above based on using a sharded volume or a normal
>one? When we worked with virtual machine images, we followed the volume
>sharding advice. I don't have a comparison for Ceph handy. I was just
>curious. It worked so well for us (but maybe our storage is "too good")
>that we found it hard to imagine it could be improved much. This was a
>simple case though of a single VM, 3 gluster servers, a sharded volume,
>and a raw virtual machine image. Probably a simpler case than yours.
>
>Thank you for writing this and take care,
>
>Erik
>
>> 
>> One *MAJOR* advantage of Ceph over GlusterFS is tooling. Ceph's
>> self-analytics, status reporting and problem-fixing toolset is just
>> so far beyond GlusterFS that it's really hard for me to recommend
>> GlusterFS for any but the most experienced sysadmins. It does follow
>> from the type of implementation Ceph has chosen that they have to
>> have such good tooling (because honestly, poking around in binary
>> data structures really wouldn't be practical for most users), but
>> whenever I had a problem with Ceph the solution was just a couple of
>> command-line commands (even if it meant removing a storage device,
>> wiping it and adding it back), whereas with GlusterFS it means poking
>> around in the .glusterfs directory, looking up inode numbers,
>> extended attributes etc., which is a real pain if you have a
>> multi-million-file filesystem to work on. And that's not even with
>> sharding or distributed volumes.
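
To illustrate the kind of manual digging described above - a rough sketch with an example brick path and GFID; the exact attribute names vary per volume:

    # On a brick, dump the Gluster extended attributes of a suspect file
    getfattr -d -m . -e hex /bricks/gv0/brick/path/to/file

    # trusted.gfid holds the file's GFID; its hard link lives on the same brick under
    # .glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full GFID>
    ls -li /bricks/gv0/brick/.glusterfs/ab/cd/abcd1234-5678-90ab-cdef-1234567890ab

    # Non-zero trusted.afr.* attributes on the replicas indicate pending heals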
>> 
>> Also, Ceph has been a lot more stable than GlusterFS for us. The
>> amount of hand-holding GlusterFS needs is crazy. With Ceph, there is
>> this one bug (I think in certain Linux kernel versions) where it
>> sometimes reads only zeroes from disk and complains about that, and
>> then you have to restart that OSD to avoid problems - but that's one
>> "swatch" process on each machine that will do that automatically for
>> us. I have run some Ceph clusters for several years now, and only
>> once or twice have I had to deal with problems. The several GlusterFS
>> clusters we operate constantly run into trouble. We now shut down all
>> GlusterFS clients before we reboot any GlusterFS node, because it was
>> near impossible to reboot a single node without running into
>> unrecoverable trouble (heal entries that will not heal, etc.). With
>> Ceph we can achieve 100% uptime: we regularly reboot our hosts one by
>> one, and some minutes later the Ceph cluster is clean again.
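
For reference, that rolling-reboot routine usually boils down to a handful of commands like these - a sketch, with an example OSD id:

    # Before rebooting a node, stop Ceph from rebalancing while its OSDs are briefly down
    ceph osd set noout
    # ...reboot the node, then watch the cluster return to HEALTH_OK
    ceph -s
    ceph health detail
    # Restart a single misbehaving OSD daemon if needed
    systemctl restart ceph-osd@12
    # Once all nodes are back, allow rebalancing again
    ceph osd unset noout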
>> 
>> If others have more insights I'd be very happy to hear them.
>> 
>> Stefan
>> 
>> 
>> ----- Original Message -----
>> > Date: Tue, 16 Jun 2020 20:30:34 -0700
>> > From: Artem Russakovskii <archon810@xxxxxxxxx>
>> > To: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
>> > Cc: gluster-users <gluster-users@xxxxxxxxxxx>
>> > Subject: Re:  State of Gluster project
>> > Message-ID: <CAD+dzQdf_TiPBSDj57hY=t8AQ=mACrxinPX7iU4hmuxNMo+omg@xxxxxxxxxxxxxx>
>> > Content-Type: text/plain; charset="utf-8"
>> > 
>> > Has anyone tried to pit Ceph against Gluster? I'm curious what
>> > the ups and downs are.
>> > 
>> > On Tue, Jun 16, 2020, 4:32 PM Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
>> > 
>> >> Hey Mahdi,
>> >>
>> >> For me it looks like Red Hat is focusing more on Ceph than on
>> >> Gluster. I hope the project remains active, because it's very
>> >> difficult to find a software-defined storage as easy and as
>> >> scalable as Gluster.
>> >>
>> >> Best Regards,
>> >> Strahil Nikolov
>> >>
>> >> On 17 June 2020 at 0:06:33 GMT+03:00, Mahdi Adnan <mahdi@xxxxxxxxx> wrote:
>> >> >Hello,
>> >> >
>> >> >I'm wondering what the current and future plans for the Gluster
>> >> >project are overall. The project does not seem as busy as it was
>> >> >before - at least this is what I'm seeing: there are fewer blog
>> >> >posts about the roadmap or future plans of the project, Glusterd2
>> >> >has been deprecated, and even Red Hat OpenShift storage switched
>> >> >to Ceph.
>> >> >As the community of this project, do you feel the same? Is the
>> >> >deprecation of Glusterd2 concerning? Do you feel that the project
>> >> >is slowing down somehow? Do you think Red Hat is abandoning the
>> >> >project or giving it fewer resources?
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



