Re: Kluster for Kubernetes (was Announcing Gluster for Container Storage)

On 8/24/18 8:24 AM, Michael Adam wrote:
On 2018-08-23 at 13:54 -0700, Joe Julian wrote:
Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster").
If you are exclusively interested in gluster for kubernetes
storage, this might seem the right approach. But I think
this is much too narrow. Standalone, non-k8s deployments
are still important and will be for some time.

So what we've always tried to achieve (this is my personal
very firm credo, and I think several of the other gluster
developers are on the same page) is to keep any business
logic of *how* to manage bricks, create volumes, do a
mount, grow and shrink volumes and clusters, etc. close
to the core gluster project, so that these features are
usable irrespective of whether gluster is used in
kubernetes or not.

The kubernetes components just need to make use of these,
and so they can stay nicely small, too:

* The provisioners and CSI drivers mainly do API translation
   between k8s and gluster (heketi in the old style) and are
   rather trivial (see the sketch after this list).

* The operator would implement the logic of "when" and "why"
   to invoke the gluster operations, but should imho not
   bother with the "how".
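To illustrate how thin that translation layer can be, here
is a rough Go sketch of the CreateVolume path. All type names
are invented for illustration; the real ones live in the CSI
spec's Go bindings:

    package main

    import "fmt"

    // Trimmed-down stand-in for what k8s/CSI hands the driver.
    type CreateVolumeRequest struct {
        Name          string
        RequiredBytes int64
    }

    // Stand-in for what the gluster management plane (glusterd2,
    // or heketi in the old style) expects.
    type GlusterVolumeCreate struct {
        VolName      string
        SizeBytes    int64
        ReplicaCount int
    }

    // translateCreate is pure API translation: no business logic
    // about bricks, placement, or replication lives here. That
    // stays with gluster itself.
    func translateCreate(req CreateVolumeRequest) GlusterVolumeCreate {
        return GlusterVolumeCreate{
            VolName:      req.Name,
            SizeBytes:    req.RequiredBytes,
            ReplicaCount: 3, // policy would come from StorageClass parameters
        }
    }

    func main() {
        fmt.Printf("%+v\n", translateCreate(CreateVolumeRequest{
            Name:          "pvc-1234",
            RequiredBytes: 10 << 30, // 10 GiB
        }))
    }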

What could not be implemented with that nice separation
of responsibilities?


Thinking about this a bit more, I actually feel more
and more that it would be wrong to put all of gluster
into k8s even if we were only interested in k8s. And
I'm really curious how you want to do that: I think
you would have to rewrite major parts of how gluster
actually works. Currently glusterd manages (spawns)
other gluster processes. Clients first connect to
glusterd to get the volfile for a mount and maintain
a connection to glusterd throughout the whole lifetime
of the mount, etc.

Really interested to hear your thoughts about the above!


Cheers - Michael

To be clear, I'm not saying throw away glusterd and only do gluster for Kubernetes and nothing else. That would be silly.

On k8s, a native controller would still need to use some of what glusterd2 does as libraries, but things like spawning processes would be delegated to the scheduler. glusterfsd, glustershd, gsyncd, etc. would just be pods in the cluster (probably with affinities set for storage localization). This allows better resource and fault management, better logging, and better monitoring.
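
As a rough illustration (untested, and the image name is made
up), a brick pod pinned to the node that holds its storage
might be built like this with the client-go API types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // glusterfsdPod builds a pod for one brick of one volume,
    // pinned to the node holding the brick via a nodeSelector.
    func glusterfsdPod(volume, node string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "glusterfsd-" + volume + "-" + node,
                Labels: map[string]string{"app": "glusterfsd", "volume": volume},
            },
            Spec: corev1.PodSpec{
                // Storage localization: the brick process must land
                // on the node that actually has the brick's disk.
                NodeSelector: map[string]string{"kubernetes.io/hostname": node},
                Containers: []corev1.Container{{
                    Name:         "glusterfsd",
                    Image:        "gluster/glusterfsd:latest", // hypothetical image
                    VolumeMounts: []corev1.VolumeMount{{Name: "brick", MountPath: "/bricks"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "brick",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/bricks/" + volume},
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(glusterfsdPod("myvol", "node-1").Name)
    }

The controller would create these through the API server and let the scheduler, kubelet, and restart policies do the process supervision that glusterd does today.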

Through Kubernetes custom resource definitions (CRDs), volumes would be declarative, and the controller would be responsible for converging the declared and actual state. I admit this runs counter to what some developers in the gluster community have strong feelings about, but the industry has been moving away from human-managed resources and toward declarative state engines for good reason: it scales, is less prone to error, and allows for simpler interfaces.
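
Something like this, as a sketch of the convergence loop (all
the type names are hypothetical, and a real controller would
be driven by watches and a workqueue rather than called
directly):

    package main

    import "fmt"

    // GlusterVolumeSpec is what the user declares in the CRD.
    type GlusterVolumeSpec struct {
        Replicas int
        Bricks   int
        SizeGiB  int
    }

    // GlusterVolumeStatus is what the controller last observed.
    type GlusterVolumeStatus struct {
        ReadyBricks int
        Phase       string
    }

    // reconcile computes the actions needed to converge observed
    // state toward the declared spec. It is idempotent: running
    // it again after the actions succeed yields nothing to do.
    func reconcile(spec GlusterVolumeSpec, status GlusterVolumeStatus) []string {
        var actions []string
        for i := status.ReadyBricks; i < spec.Bricks; i++ {
            actions = append(actions, "create brick pod") // the scheduler places it
        }
        if status.ReadyBricks >= spec.Bricks && status.Phase != "Ready" {
            actions = append(actions, "mark volume Ready")
        }
        return actions
    }

    func main() {
        spec := GlusterVolumeSpec{Replicas: 3, Bricks: 3, SizeGiB: 100}
        status := GlusterVolumeStatus{ReadyBricks: 1, Phase: "Pending"}
        fmt.Println(reconcile(spec, status))
    }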

Volume definitions (volfiles, not the CRD) could be stored in ConfigMaps or Secrets. The client (both glusterfsd and glusterfs) could be made k8s-aware and retrieve these directly, or, as an easier first step, the ConfigMap/Secret could be mounted into the pod and the client could load its volfile from a file (the client would need to be altered to reload the graph when the file changes).
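
A minimal sketch of that reload path, assuming a hypothetical
reload hook on the client side (polling rather than inotify,
because kubelet updates ConfigMap mounts by swapping a symlink,
which confuses naive watches on the file itself):

    package main

    import (
        "log"
        "os"
        "time"
    )

    // watchVolfile polls a volfile mounted from a ConfigMap/Secret
    // and calls reload whenever its mtime changes.
    func watchVolfile(path string, reload func([]byte) error) {
        var lastMod time.Time
        for {
            if fi, err := os.Stat(path); err == nil && fi.ModTime() != lastMod {
                lastMod = fi.ModTime()
                if data, err := os.ReadFile(path); err == nil {
                    if err := reload(data); err != nil {
                        log.Printf("volfile reload failed: %v", err)
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
    }

    func main() {
        watchVolfile("/etc/gluster/myvol.vol", func(vol []byte) error {
            // A real client would rebuild its xlator graph here.
            log.Printf("reloading graph from %d-byte volfile", len(vol))
            return nil
        })
    }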

As an aside, the "maintained" connection to glusterd is only true as long as glusterd always lives at the same IP address. There's a long-standing bug where the client will never try to find another glusterd if the one it first connected to ever goes away.
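
Fixing that in a k8s-aware client could be as simple as trying every known glusterd endpoint instead of clinging to the first one. A sketch, assuming the endpoint list comes from a headless Service or the volfile's server list:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialAnyGlusterd walks the endpoint list until one answers,
    // instead of retrying the first address forever.
    func dialAnyGlusterd(endpoints []string) (net.Conn, error) {
        var lastErr error
        for _, ep := range endpoints {
            conn, err := net.DialTimeout("tcp", ep, 3*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
        }
        return nil, fmt.Errorf("no glusterd reachable: %v", lastErr)
    }

    func main() {
        // 24007 is glusterd's standard port.
        conn, err := dialAnyGlusterd([]string{"glusterd-0:24007", "glusterd-1:24007"})
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }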

There are still a lot of questions I don't have answers to. I think this could be done in a way that's complementary to glusterd and doesn't create a bunch of duplicated work. Most importantly, I think this is something that could get community buy-in and would fill a need in Kubernetes that's not well supported at this time.

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel


