Re: Upgrading from 3.5.3

Dear Matt,

We went the same route a couple of weeks ago and managed to upgrade a legacy 3.5 installation to 3.8 on Gentoo Linux. Most of the upgrade steps were pretty straightforward, but a few needed some special attention (such as the op-version and volume parameter adjustments).

First and foremost, we did the upgrade in offline mode, meaning no GlusterFS client was running during the upgrade - I would definitely recommend you do the same.

We basically performed the following steps:

Stopped all GlusterFS clients (those were QEMU/KVM VMs with FUSE and gfapi storage backends, some Linux GlusterFS FUSE clients and Windows NFS clients in our case).
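
For the plain Linux FUSE clients this essentially came down to unmounting the volumes, roughly like this (the mount point is just a placeholder):

umount /mnt/<VOL-NAME>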

Saved the existing GlusterFS quotas with the help of the pre-upgrade-script for quota:
https://github.com/gluster/glusterfs/blob/master/extras/pre-upgrade-script-for-quota.sh
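
If I remember correctly, the pre-upgrade script is simply run once from one of the server nodes while the cluster is still up (it uses the gluster CLI); check the script header for the exact usage:

./pre-upgrade-script-for-quota.sh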

Stopped all GlusterFS daemons on all involved nodes.
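
On Gentoo (OpenRC) that was roughly the following; the init script name may differ on your distribution, and you should double-check afterwards that no brick or self-heal processes are left running:

/etc/init.d/glusterd stop
killall glusterfs glusterfsd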

Created LVM snapshots of all existing bricks.
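
These were just plain LVM snapshots of the brick logical volumes, something along these lines (VG/LV names and snapshot size are made up):

lvcreate --snapshot --size 10G --name brick1_pre_upgrade /dev/vg_gluster/brick1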

Upgraded the GlusterFS software to 3.8.
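
On Gentoo this was a matter of keywording/unmasking the newer ebuild and rebuilding the package, something like the following (package atom from memory, adjust to whatever 3.8.x version is current for you):

emerge --ask --verbose '=sys-cluster/glusterfs-3.8*'

On other distributions the equivalent yum/apt package upgrade applies.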

Ran the post-upgrade step to adjust the vol-files:
glusterd --xlator-option *.upgrade=on -N
According to: https://bugzilla.redhat.com/show_bug.cgi?id=1191176

Started the GlusterFS daemon on all nodes.
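
Again on Gentoo/OpenRC this was simply:

/etc/init.d/glusterd start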

Checked the status of the cluster with the usual commands:
gluster peer status
gluster volume list
gluster volume status
gluster volume info
gluster volume heal "<VOL-NAME>" info
gluster volume heal "<VOL-NAME>" info split-brain
tail -f /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log


Updated the op-version:
gluster volume set all cluster.op-version 30800
gluster volume get "<VOL-NAME>" cluster.op-version


We then took the chance to add an arbiter node (switching to a "replica 3 arbiter 1" configuration).
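
Converting an existing replica 2 volume is a single add-brick per volume, roughly like this (the arbiter host name and brick path are placeholders):

gluster volume add-brick "<VOL-NAME>" replica 3 arbiter 1 arbiter-host:/bricks/<VOL-NAME>/brick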

Adapted the volume configuration to the new quorum and virtualisation options (for the KVM/QEMU GlusterFS backend volumes) according to https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Gluster_Storage/chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes.html
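
In practice this mostly meant applying the "virt" option group to the VM image volumes and verifying the quorum settings, roughly as follows (please double-check the individual options against the linked guide and your GlusterFS version):

gluster volume set "<VOL-NAME>" group virt
gluster volume set "<VOL-NAME>" cluster.quorum-type auto
gluster volume set "<VOL-NAME>" cluster.server-quorum-type server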

Note that we intentionally didn't activate sharding, as it wasn't considered stable at the time we started planning and testing the upgrade path. That shouldn't be the case anymore, but I don't know what the upgrade path looks like.


Restored the quota values with the help of the post-upgrade script for quota:
https://github.com/gluster/glusterfs/blob/master/extras/post-upgrade-script-for-quota.sh
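
If I recall correctly, the post-upgrade script is run once per quota-enabled volume after the op-version has been bumped; again, check the script header for the exact usage:

./post-upgrade-script-for-quota.sh "<VOL-NAME>"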


And finally upgraded all clients to 3.8 and re-enabled all services.
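
For the Linux FUSE clients that basically meant upgrading the glusterfs package and re-mounting the volumes, e.g. (server name and mount point are placeholders):

mount -t glusterfs server1:/<VOL-NAME> /mnt/<VOL-NAME>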


Important: Make sure to test the upgrade in a test environment that is as close as possible to your live environment in order to avoid risks and unpleasant surprises. As usual, make sure to have backups available and be prepared to use them.

Also check the upgrade guides available at:
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/


Cheers and good luck with the upgrade
Chris


On 20.03.2017 15:27, Matthew Kettlewell wrote:
Hello -

We have several Gluster clusters, with both clients and servers running 3.5.3.

Each cluster can have several hundred clients (so client updates will be particularly difficult).

We are looking at potential upgrade paths that would minimize our impact and give us some of the benefits of more recent versions (most notably, we've seen self-heal issues when re-adding nodes).

Is the 3.5.3 client backwards compatible with future versions, and if so, how far ahead? (i.e., will the 3.5.3 client work with a 3.7.x server? 3.8? 3.9?)

Are there any best-practices guides out there, or recommendations/suggestions for upgrading from 3.5.3 to a more recent version?

Any guidance on this matter would be greatly appreciated.

Matt
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users



