Re: GlusterFS storage driver deprecation in Kubernetes.



Thanks for the heads-up, Humble. This will help many Gluster community users who may not be following the k8s threads actively to start planning their migrations.

For all users currently running heketi + glusterfs: starting from k8s v1.26, you CANNOT use heketi + glusterfs based storage in k8s.
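To check whether a cluster is affected, one approach is to look for PersistentVolumes whose source is the in-tree `glusterfs` volume type. Below is a minimal sketch, assuming you have dumped `kubectl get pv -o json` to a file or string; the sample JSON here is illustrative, not from a real cluster.

```python
# Sketch: find PersistentVolumes still backed by the in-tree GlusterFS
# plugin, given the JSON produced by `kubectl get pv -o json`.
# The sample data below is illustrative only.
import json

pv_dump = """
{
  "items": [
    {"metadata": {"name": "pv-gluster-1"},
     "spec": {"glusterfs": {"endpoints": "glusterfs-cluster", "path": "vol_1"}}},
    {"metadata": {"name": "pv-nfs-1"},
     "spec": {"nfs": {"server": "10.0.0.5", "path": "/exports/data"}}}
  ]
}
"""

def gluster_pvs(dump: str) -> list:
    """Return the names of PVs whose source is the in-tree 'glusterfs' type."""
    items = json.loads(dump)["items"]
    return [pv["metadata"]["name"] for pv in items if "glusterfs" in pv["spec"]]

print(gluster_pvs(pv_dump))  # ['pv-gluster-1']
```

Any PV reported here is one that stops working once the in-tree driver is removed.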

Below are my personal suggestions for users. Please treat these options as my personal opinion, not an official stance of the Gluster community.

0. Use an older (< 1.25) version of k8s and keep using the current setup :-)

1. Keep the current storage nodes as storage, but manage them separately; expose NFS from them and use an NFS CSI driver to get the data into the pods. (Note the changeover to a new PV through a CSI-based PVC, which means applications need a migration.) I haven't tested this setup, hence can't vouch for it.
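As a rough illustration of option 1 (untested, as noted above), this is roughly what the CSI side could look like with the Kubernetes csi-driver-nfs addon installed; the server address, share path, and object names are all placeholders for your own setup:

```yaml
# Sketch: an NFS-backed StorageClass via csi-driver-nfs.
# 'server' and 'share' are placeholders for an NFS export
# served from the existing storage nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: storage-node-1.example.com   # placeholder: your NFS endpoint
  share: /exports/data                 # placeholder: exported path
reclaimPolicy: Retain
---
# Applications then claim new PVs against this class
# (this is the application migration mentioned above).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-nfs
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: gluster-nfs
  resources:
    requests:
      storage: 10Gi
```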

2. Use the kadalu [6] operator to manage the currently deployed glusterfs nodes as an 'External' storage class, and use the kadalu CSI driver (which uses a native glusterfs mount in its CSI node plugin) to get PVs for your application pods. NOTE: here too, an application migration is needed to move to kadalu CSI based PVCs. Suggested for users who already have bigger PVs in their k8s setup. There is a team ready to help with this migration if you wish.
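For option 2, the shape of kadalu's 'External' setup is roughly as below. I am writing the CR from memory, so the apiVersion and field names are assumptions; please check the kadalu docs [6] for the exact spelling in the release you install:

```yaml
# Sketch only: register an already-running gluster volume with kadalu
# as External storage. Field names are from memory and may differ in
# current kadalu releases; hostnames and volume names are placeholders.
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: ext-pool
spec:
  type: External
  details:
    gluster_host: gluster1.example.com   # placeholder: existing gluster node
    gluster_volname: gvol                # placeholder: existing gluster volume
```

Once the storage pool is registered, PVCs are created against the storage class kadalu generates for it, which is where the application migration comes in.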

3. Use kadalu (or any other CSI provider) to provision new storage, and copy the data set over to it. This would be an option if the storage is smaller in size. Copying takes extra time, but you can start a pod with both the existing PV and the new PV added as mounts, so you can copy the data off quickly.
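The copy step in option 3 can be sketched as a one-shot pod that mounts both PVCs; the image, PVC names, and paths below are placeholders:

```yaml
# Sketch: a throwaway pod mounting both the old (in-tree gluster) PVC
# and the new CSI PVC, so the data can be copied inside the cluster.
# PVC names are placeholders for your own claims.
apiVersion: v1
kind: Pod
metadata:
  name: data-copy
spec:
  restartPolicy: Never
  containers:
    - name: copier
      image: busybox
      # '-a' preserves permissions and timestamps during the copy
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old-data
          mountPath: /old
        - name: new-data
          mountPath: /new
  volumes:
    - name: old-data
      persistentVolumeClaim:
        claimName: app-data-gluster   # placeholder: existing gluster PVC
    - name: new-data
      persistentVolumeClaim:
        claimName: app-data-new       # placeholder: new CSI PVC
```

After the pod completes, the application is switched over to the new PVC and the old one can be retired.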

In any case, considering you do not have much time before Kubernetes v1.26 comes out, please start your migration planning soon.

For the developers in the glusterfs community: what are your thoughts on this? I know some effort has started on keeping the glusterfs-containers repo relevant, and I see PRs coming in. Happy to open up a discussion on the same.

Amar (@amarts)

[6] -

On Thu, Aug 11, 2022 at 5:47 PM Humble Chirammal <hchiramm@xxxxxxxxxx> wrote:

Hey Gluster Community,

As you might be aware, there is an effort in the Kubernetes community to remove in-tree storage plugins, to reduce external dependencies and security concerns in core Kubernetes. Thus, we are in the process of gradually deprecating all the in-tree external storage plugins and eventually removing them from the core Kubernetes codebase. GlusterFS was one of the very first dynamic provisioners; it made it into the Kubernetes v1.4 (2016) release via . Since then, many deployments were/are making use of this driver to consume GlusterFS volumes in Kubernetes/OpenShift clusters.

As part of this effort, we are planning to deprecate the GlusterFS in-tree plugin in the 1.25 release and to take the Heketi code out of the Kubernetes code base in a subsequent release. This code removal will not follow Kubernetes' normal deprecation policy [1] and will be treated as an exception [2]. The main reason for this exception is that Heketi is in "Deep Maintenance" [3]; also see [4] for the latest push back from the Heketi team on the changes we would need to keep vendoring Heketi into kubernetes/kubernetes. We cannot keep Heketi in the Kubernetes code base as Heketi itself is literally going away. The current plan is to declare the deprecation in Kubernetes v1.25 and remove the code in v1.26.

If you are using the GlusterFS driver in your cluster setup, please reply with the below info before 16-Aug-2022 to the dev@xxxxxxxxxxxxx ML thread (Deprecation of in-tree GlusterFS driver in 1.25), or to this thread, which will help us decide when to completely remove this code from the repo.

- What version of Kubernetes are you running in your setup?

- How often do you upgrade your cluster?

- Which vendor or distro are you using? Is it a (downstream) product offering, or is the upstream GlusterFS driver used directly in your setup?

Awaiting your feedback.
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC

Gluster-devel mailing list
