Yeah, I'll be doing a test upgrade and migration to make sure it works in the lab, but my production setup is significantly busier, so we'll see if it folds under pressure. My biggest concern is the window when I'll have one node on 3.5.2 and one node on 3.6.3 in a replica set. I don't see any major reason why they wouldn't be compatible or would cause data issues, but I thought I would check with the list first.
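For that mixed window my plan is basically to keep watching peer and heal state on both boxes the whole time. Something like the following, untested and with VOLNAME as a placeholder for my actual volume name:

    # confirm both peers still see each other while one is on 3.6.3 and the other on 3.5.2
    gluster peer status
    gluster volume status VOLNAME

    # keep an eye on pending heals before moving VMs back onto the upgraded node
    gluster volume heal VOLNAME info

    # only after BOTH nodes are on 3.6.3: bump the cluster op-version
    # (30600 should be the 3.6 op-version, if I'm reading the release notes right)
    gluster volume set all cluster.op-version 30600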
From: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
To: "Josh Boon" <gluster@xxxxxxxxxxxx>, "Gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Sent: Tuesday, May 5, 2015 12:55:19 AM
Subject: Re: Gluster 3.5.2 upgrade to Gluster 3.6.3 QEMU gfapi complications
On 05/05/2015 02:27 AM, Josh Boon wrote:
Hey folks,
I'll be doing an upgrade soon for my core hypervisors running qemu 2.0 built with Gluster 3.5.2, connecting to a replicated 3.5.2 volume. The upgrade path I'd like to take is:

1. migrate all machines to the node not being upgraded
2. prevent client heals as documented over at http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6 (rough commands for steps 2-5 are sketched below the list)
3. stop gluster server and gluster processes on the node being upgraded
4. upgrade kvm, gluster, and supporting packages to 3.6.3
5. restart the node being upgraded
6. node joins the pool again, except one node will be running 3.6.3 and the other 3.5.2
7. perform a heal to ensure data is correct
8. migrate all machines over to the newly upgraded node
9. repeat steps 3-5 for the other node
10. perform a heal to ensure data is correct
11. rebalance machines as necessary
12. upgrade complete
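Roughly what I expect steps 2 through 5 to look like on the node being upgraded - untested, and the Ubuntu-style package/service names and VOLNAME are placeholders for my actual setup:

    # step 2: turn off client-side self-heals per the 3.6 upgrade doc
    gluster volume set VOLNAME cluster.entry-self-heal off
    gluster volume set VOLNAME cluster.data-self-heal off
    gluster volume set VOLNAME cluster.metadata-self-heal off

    # step 3: stop gluster on the node being upgraded
    service glusterfs-server stop
    pkill glusterfs
    pkill glusterfsd

    # step 4: pull in 3.6.3 plus the matching qemu/gfapi build
    apt-get update
    apt-get install glusterfs-server glusterfs-client qemu-system-x86

    # step 5: restart the node
    reboot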
The obvious question with this method is whether the two nodes will behave as expected while on different major versions, with the gain being no downtime for VMs. Is this method too risky? Has anyone tried it? Would appreciate any input.
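For the no-downtime part I'm leaning on plain live migration between the two hypervisors, assuming libvirt-managed guests; something like this per guest, with GUEST and other-node as placeholders:

    # steps 1 / 8 / 11: move a running guest off the node about to be touched
    virsh migrate --live --persistent GUEST qemu+ssh://other-node/system

    # confirm it landed and is still running on the other hypervisor
    virsh -c qemu+ssh://other-node/system list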
One way to gain confidence is to perform this on a test setup, to learn more about how your workload is affected by the upgrade.

Pranith
Thanks,
Josh
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users