Re: Announcing Gluster for Container Storage (GCS)

On 2018-08-23 at 13:54 -0700, Joe Julian wrote:
> Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster").

If you are exclusively interested in gluster for kubernetes
storage, this might seem like the right approach. But I think
it is much too narrow: the standalone, non-k8s deployments
are still important and will remain so for some time.

What we have always tried to achieve (this is my personal,
very firm credo, and I think several of the other gluster
developers are on the same page) is to keep all the business
logic of *how* to manage bricks, create volumes, do a
mount, grow and shrink volumes and clusters, etc.
close to the core gluster project, so that these
features are usable irrespective of whether gluster is
used in kubernetes or not.

The kubernetes components just need to make use of these,
and so they can stay nicely small, too:

* The provisioners and CSI drivers mainly do API translation
  between k8s and gluster (heketi in the old stack) and are
  rather trivial.

* The operator would implement the logic of *when* and *why*
  to invoke the gluster operations, but should imho not
  be concerned with the *how*.
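To make the split concrete, here is a rough, purely illustrative
sketch (the class and method names are invented and do not
correspond to any actual gluster, heketi, or operator API): the
operator layer only decides *when* an action is needed and
delegates the *how* to a gluster-side service.

```python
# Hypothetical sketch of the "when/why" vs. "how" separation.
# GlusterAPI stands in for the gluster-side management service
# (glusterd2/heketi in reality); all names here are made up.

class GlusterAPI:
    """Owns the *how*: brick placement, volume creation, etc."""

    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        # In reality this would pick bricks, create the volume
        # and start it; here we just record it.
        self.volumes[name] = size_gb
        return name


class Operator:
    """Owns the *when* and *why*: watches k8s state and reacts."""

    def __init__(self, api):
        self.api = api

    def reconcile(self, pending_claims):
        # For every unbound claim, ask the gluster layer to
        # provision a matching volume -- no brick logic here.
        return [self.api.create_volume(c["name"], c["size_gb"])
                for c in pending_claims]


api = GlusterAPI()
op = Operator(api)
created = op.reconcile([{"name": "pvc-1", "size_gb": 5}])
print(created)      # -> ['pvc-1']
print(api.volumes)  # -> {'pvc-1': 5}
```

The point of the sketch: everything inside `GlusterAPI` stays
usable outside of kubernetes, while the operator remains a thin
reconciliation loop.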

What can not be implemented with that nice separation
of responsibilities?


Thinking about this a bit more, I actually feel
more and more that it would be wrong to put all of
gluster into k8s, even if we were only interested
in k8s. And I am really curious how you would want to do
that: I think you would have to rewrite larger parts
of how gluster actually works. Currently glusterd
manages (spawns) the other gluster processes, and clients
first connect to glusterd to get the volfile, then
maintain a connection to glusterd throughout the
whole lifetime of the mount, etc.
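For context, this is roughly what that client-side dependency
on glusterd looks like in practice (a sketch; the host and
volume names are made up):

```shell
# A native FUSE mount contacts glusterd on the given server to
# fetch the volfile before the client graph is even constructed:
mount -t glusterfs server1:/myvol /mnt/myvol

# mount.glusterfs essentially ends up invoking the client like
# this, pointing it at glusterd as the volfile server:
glusterfs --volfile-server=server1 --volfile-id=myvol /mnt/myvol
```

So glusterd is not just a provisioning API; it is part of the
data-path bootstrap for every mount.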

Really interested to hear your thoughts about the above!


Cheers - Michael




> I'm hoping to use this vacation I'm currently on to write up a design doc.
> 
> On August 23, 2018 12:58:03 PM PDT, Michael Adam <obnox@xxxxxxxxx> wrote:
> >On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
> >> Hi all,
> >
> >Hi Vijay,
> >
> >Thanks for announcing this to the public and making everyone
> >more aware of Gluster's focus on container storage!
> >
> >I would like to add an additional perspective to this,
> >giving some background about the history and origins:
> >
> >Integrating Gluster with kubernetes for providing
> >persistent storage for containerized applications is
> >not new. We have been working on this for more than
> >two years now, and it is used by many community users
> >and many customers (of Red Hat) in production.
> >
> >The original software stack used heketi
> >(https://github.com/heketi/heketi) as a high level service
> >interface for gluster, to facilitate easy self-service
> >provisioning of volumes in kubernetes. Heketi implemented,
> >in a separate and much more narrowly scoped project, some
> >ideas that were originally part of the glusterd2 plans;
> >that got us started with these efforts in the first place,
> >and heketi also went beyond those original ideas. These
> >features are now being merged into glusterd2, which will
> >in the future replace heketi in the container storage
> >stack.
> >
> >We were also working on kubernetes itself, writing the
> >provisioners for various forms of gluster volumes in
> >kubernetes proper (https://github.com/kubernetes/kubernetes)
> >and in the external storage repo
> >(https://github.com/kubernetes-incubator/external-storage).
> >Those provisioners will eventually be replaced by the
> >mentioned CSI drivers, and the expertise from the original
> >kubernetes development is now flowing into them.
> >
> >The gluster-containers repository was created and used
> >for this original container-storage effort already.
> >
> >The mentioned https://github.com/gluster/gluster-kubernetes
> >repository was not only the place for storing the deployment
> >artefacts and tools, but it was actually intended to be the
> >upstream home of the gluster-container-storage project.
> >
> >In this view, I see the GCS project announced here as a
> >GCS version 2. The original first version (let me call it
> >version one) was the software stack described above, homed
> >at the gluster-kubernetes repository, even though it was
> >never officially announced that widely in a formal
> >introduction like this, nor given a formal release or
> >version number. If you look at this project (and heketi),
> >you see that it has a nice level of popularity.
> >
> >I think we should make use of this traction instead of
> >ignoring the legacy, and turn gluster-kubernetes into the
> >home of GCS (v2). In my view, GCS (v2) will be about:
> >
> >* replacing some of the components with newer ones:
> >  - glusterd2 instead of the heketi and glusterd1 combo
> >  - csi drivers (the new standard) instead of the native
> >    kubernetes plugins
> >* adding the operator feature
> >  (even though we are currently also working on an operator
> >  for the current stack with heketi and traditional gluster,
> >  since that will become important in production before
> >  this v2 is ready.)
> >
> >These are my 2cents on this topic.
> >I hope someone finds them useful.
> >
> >I am very excited to (finally) see the broader gluster
> >community getting more aligned behind the idea of bringing
> >our great SDS system into the space of containers! :-)
> >
> >Cheers - Michael
> >
> >
> >
> >
> >
> >> We would like to let you know that some of us have started focusing on an
> >> initiative called ‘Gluster for Container Storage’ (in short GCS). As of
> >> now, one can already use Gluster as storage for containers by making use of
> >> different projects available in the github repositories associated with
> >> gluster <https://github.com/gluster> & Heketi
> >> <https://github.com/heketi/heketi>.
> >> The goal of the GCS initiative is to provide an easier integration of these
> >> projects so that they can be consumed together as designed. We are
> >> primarily focused on integration with Kubernetes (k8s) through this
> >> initiative.
> >> 
> >> Key projects for GCS include:
> >> 
> >> Glusterd2 (GD2)
> >> 
> >> Repo: https://github.com/gluster/glusterd2
> >> 
> >> The challenge we have with the current management layer of Gluster
> >> (glusterd) is that it is not designed for a service oriented architecture.
> >> Heketi overcame this limitation and made Gluster consumable in k8s by
> >> providing all the necessary hooks needed for supporting Persistent Volume
> >> Claims.
> >> 
> >> Glusterd2 provides a service oriented architecture for volume & cluster
> >> management. GD2 also intends to provide many of the Heketi functionalities
> >> needed by Kubernetes natively. Hence we are working on merging Heketi with
> >> gd2, and you can follow more of this action in the issues associated with
> >> the gd2 github repository.
> >> 
> >> gluster-block
> >> 
> >> Repo: https://github.com/gluster/gluster-block
> >> 
> >> This project intends to expose files in a gluster volume as block devices.
> >> Gluster-block enables supporting ReadWriteOnce (RWO) PVCs and the
> >> corresponding workloads in Kubernetes using gluster as the underlying
> >> storage technology.
> >> 
> >> Gluster-block is intended to be consumed by stateful RWO applications like
> >> databases and k8s infrastructure services like logging, metrics etc.
> >> gluster-block is preferred over file based Persistent Volumes in k8s
> >> for stateful/transactional workloads as it provides better performance &
> >> consistency guarantees.
> >> 
> >> anthill / operator
> >> 
> >> Repo: https://github.com/gluster/anthill
> >> 
> >> This project aims to add an operator for Gluster in Kubernetes. Since it
> >> is relatively new, there are areas where you can contribute to make the
> >> operator experience better (please refer to the list of issues). This
> >> project intends to make the whole Gluster experience in k8s much smoother
> >> by automatic management of operator tasks like installation, rolling
> >> upgrades etc.
> >> 
> >> gluster-csi-driver
> >> 
> >> Repo: http://github.com/gluster/gluster-csi-driver
> >> 
> >> This project will provide CSI (Container Storage Interface) compliant
> >> drivers for GlusterFS & gluster-block in k8s.
> >> 
> >> gluster-kubernetes
> >> 
> >> Repo: https://github.com/gluster/gluster-kubernetes
> >> 
> >> This project is intended to provide all the required installation and
> >> management steps for getting gluster up and running in k8s.
> >> 
> >> GlusterFS
> >> 
> >> Repo: https://github.com/gluster/glusterfs
> >> 
> >> GlusterFS is the main and core repository of Gluster. To support
> >> storage in the container world, we don’t need all the features of Gluster.
> >> Hence, we would be focusing on a stack which would be absolutely required
> >> in k8s. This would allow us to plan and execute tests well, and also
> >> provide users with a setup which works without too many options to tweak.
> >> 
> >> Notice that glusterfs default volumes would continue to work as of now,
> >> but the translator stack which is used in GCS will be much leaner and
> >> geared to work optimally in k8s.
> >> 
> >> Monitoring
> >> 
> >> Repo: https://github.com/gluster/gluster-prometheus
> >> 
> >> As the k8s ecosystem provides its own native monitoring mechanisms, we
> >> intend to have this project be the placeholder for the required monitoring
> >> plugins. The scope of this project is currently WIP and we welcome your
> >> inputs to shape the project.
> >> 
> >> More details on this can be found at:
> >> https://lists.gluster.org/pipermail/gluster-users/2018-July/034435.html
> >> 
> >> Gluster-Containers
> >> 
> >> Repo: https://github.com/gluster/gluster-containers
> >> 
> >> This repository provides container specs / Dockerfiles that can be used
> >> with a container runtime like cri-o & docker.
> >> 
> >> Note that this is not an exhaustive or final list of projects involved
> >> with GCS. We will continue to update the project list depending on the
> >> new requirements and priorities that we discover in this journey.
> >> 
> >> We welcome you to join this journey by looking up the repositories and
> >> contributing to them. As always, we are happy to hear your thoughts about
> >> this initiative, and please stay tuned as we provide periodic updates
> >> about GCS here!
> >> 
> >> Regards,
> >> 
> >> Vijay
> >> 
> >> (on behalf of Gluster maintainers @ Red Hat)


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel
