Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil <ajneil.tech@xxxxxxxxx> wrote:
> Just so I am clear, the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run upgrade script (new clients
> unable to connect at this point)
>
> start GD2 (necessary, or does the upgrade script do this?)
>
>
> I assume that once the cluster has been migrated to GD2, the glusterd startup
> script will be smart enough to start the correct version?
>

This should be the process, mostly.

The upgrade script needs GD2 running on all nodes before it can
begin migration.
The nodes don't need to have formed a cluster though; the script takes
care of forming the cluster.
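
As a rough sketch, the migration step on each server might look like
this (assuming systemd units named glusterd and glusterd2, and a
migration script whose name and location are not yet decided; all of
these are assumptions at this point):

    # stop GD1; bricks and connected clients keep running
    systemctl stop glusterd

    # start GD2 (it will not form a cluster on its own)
    systemctl start glusterd2

    # from any one node, run the migration script; it forms the GD2
    # cluster and imports the volume information from GD1
    /path/to/migrate-gd1-to-gd2   # hypothetical name

So yes, GD2 has to be started separately; the script only takes care
of cluster formation and migrating the configuration.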


> -Thanks
>
>
>
>
>
> On 3 November 2017 at 04:06, Kaushal M <kshlmster@xxxxxxxxx> wrote:
>>
>> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <budic@xxxxxxxxxxxxxxxx>
>> wrote:
>> > Will the various client packages (CentOS in my case) be able to
>> > automatically handle the upgrade vs new install decision, or will we be
>> > required to do something manually to determine that?
>>
>> We should be able to do this with CentOS (and other RPM-based
>> distros) which currently have well-split glusterfs packages.
>> At this moment, I don't know exactly how much can be handled
>> automatically, but I expect the amount of manual intervention to be
>> minimal.
>> At a minimum, the manual work needed would be enabling and starting
>> GD2, and then running the migration script.
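>>
>> On a CentOS node, that manual part might look roughly like this
>> (package and service names here are assumptions until the 4.0
>> packaging is finalized):
>>
>>     # pull in the 4.0 packages; GD2 gets installed alongside GD1
>>     yum update glusterfs\*
>>
>>     # when ready to migrate, enable and start GD2
>>     systemctl enable glusterd2
>>     systemctl start glusterd2
>>
>>     # then run the migration script (name not final)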
>>
>> >
>> > It’s a little unclear whether things will continue without
>> > interruption, because of the way you describe the change from GD1 to
>> > GD2; it sounds like it stops GD1.
>>
>> With the described upgrade strategy, we can ensure continuous volume
>> access for clients during the whole process (provided volumes have been
>> set up with replication or ec).
>>
>> During the migration from GD1 to GD2, any existing clients still
>> retain access, and can continue to work without interruption.
>> This is possible because gluster keeps the management (glusterds) and
>> data (bricks and clients) parts separate.
>> So it is possible to interrupt the management parts without
>> interrupting data access for existing clients.
>> Clients and the server-side brick processes need GlusterD to start up.
>> But once they're running, they can run without GlusterD. GlusterD is
>> only required again if something goes wrong.
>> Stopping GD1 during the migration process will not lead to any
>> interruptions for existing clients.
>> The brick processes continue to run, and any connected clients continue
>> to remain connected to the bricks.
>> Any new clients which try to mount the volumes during this migration
>> will fail, as no GlusterD (neither GD1 nor GD2) will be available.
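>>
>> You can see this separation on any node today (a harmless check,
>> assuming systemd manages glusterd, as on CentOS 7):
>>
>>     systemctl stop glusterd    # management goes down
>>     pgrep -a glusterfsd        # brick processes are still running
>>     # existing mounts keep working; a new mount would fail here,
>>     # as nothing is answering on the management port
>>     systemctl start glusterd   # management returns, picks the bricks back up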
>>
>> > Early days, obviously, but if you could clarify if that’s what
>> > we’re used to as a rolling upgrade or how it works, that would be
>> > appreciated.
>>
>> A Gluster rolling upgrade allows data access to volumes throughout
>> the process, while the brick processes are upgraded as well.
>> Rolling upgrades with uninterrupted access require that volumes have
>> redundancy (replicate or ec).
>> Rolling upgrades involve upgrading the servers belonging to a
>> redundancy set (replica set or ec set), one at a time.
>> For each server in turn:
>> - The server is picked from a redundancy set.
>> - All Gluster processes on the server are killed: glusterd, bricks and
>> other daemons included.
>> - Gluster is upgraded and restarted on the server.
>> - A heal is performed to heal new data onto the bricks.
>> - Once the heal finishes, we move on to the next server.
>>
>> Clients maintain uninterrupted access, because a full redundancy set
>> is never taken offline all at once.
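>>
>> For a single server, one iteration looks roughly like this (CentOS
>> style commands, with VOLNAME standing in for each of your volumes):
>>
>>     killall glusterfs glusterfsd glusterd   # stop all Gluster processes
>>     yum update glusterfs\*                  # upgrade the packages
>>     systemctl start glusterd                # restart; bricks come back up
>>     gluster volume heal VOLNAME info        # repeat until no entries remain
>>
>> Only after the heal completes do we move to the next server in the set.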
>>
>> > Also, clarification that we’ll be able to upgrade from 3.x
>> > (3.1x?) to 4.0, manually or automatically?
>>
>> Rolling upgrades from 3.1x to 4.0 are a manual process, but I believe
>> gdeploy has playbooks to automate them.
>> At the end of this you will be left with a 4.0 cluster, but still be
>> running GD1.
>> Upgrading from GD1 to GD2 in 4.0 will be a manual process. A script
>> that automates this is planned only for 4.1.
>>
>> >
>> >
>> > ________________________________
>> > From: Kaushal M <kshlmster@xxxxxxxxx>
>> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
>> > Date: November 2, 2017 at 3:56:05 AM CDT
>> > To: gluster-users@xxxxxxxxxxx; Gluster Devel
>> >
>> > We're fast approaching the time for Gluster-4.0, and we would like to
>> > set out the expected upgrade strategy and try to polish it to be as
>> > user-friendly as possible.
>> >
>> > We're getting this out now because there was quite a bit of
>> > concern and confusion regarding upgrades between 3.x and 4.0+.
>> >
>> > ---
>> > ## Background
>> >
>> > Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> > which is backwards incompatible with the GlusterD (GD1) in
>> > GlusterFS-3.1+. Since a hybrid cluster of GD1 and GD2 cannot be
>> > established, rolling upgrades across this boundary are not possible.
>> > This meant that upgrades from 3.x to 4.0 would require volume
>> > downtime, and possibly client downtime.
>> >
>> > This was a cause of concern among many during the recently concluded
>> > Gluster Summit 2017.
>> >
>> > We would like to keep the pain experienced by our users to a minimum,
>> > so we are trying to develop an upgrade strategy that avoids downtime
>> > as much as possible.
>> >
>> > ## (Expected) Upgrade strategy from 3.x to 4.0
>> >
>> > Gluster-4.0 will ship with both GD1 and GD2.
>> > For fresh installations, only GD2 will be installed and available by
>> > default.
>> > For existing installations (upgrades), GD1 will be installed and run
>> > by default. GD2 will also be installed simultaneously, but will not
>> > run automatically.
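>> >
>> > A quick way to check which daemon a node ends up running (assuming
>> > the GD2 service is named glusterd2, which is not final):
>> >
>> >     systemctl is-active glusterd glusterd2
>> >
>> > A fresh install should report only glusterd2 as active; an upgraded
>> > node only glusterd, until migration.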
>> >
>> > GD1 will allow rolling upgrades, allowing properly set up Gluster
>> > volumes to be upgraded to 4.0 binaries without downtime.
>> >
>> > Once the full pool is upgraded, and all bricks and other daemons are
>> > running 4.0 binaries, migration to GD2 can happen.
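>> >
>> > A simple pre-migration check is to confirm every server reports 4.0
>> > before touching the glusterds:
>> >
>> >     gluster --version   # run on each server; all must show 4.0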
>> >
>> > To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> > and GD2 started instead.
>> > GD2 will not automatically form a cluster. A migration script will be
>> > provided, which will form a new GD2 cluster from the existing GD1
>> > cluster information, and migrate volume information from GD1 into GD2.
>> >
>> > Once migration is complete, GD2 will pick up the running brick and
>> > other daemon processes and continue. This will only be possible if the
>> > rolling upgrade with GD1 happened successfully and all the processes
>> > are running with 4.0 binaries.
>> >
>> > During the whole migration process, the volume would still be online
>> > for existing clients, who can continue to work. New client mounts
>> > will not be possible during this time.
>> >
>> > After migration, existing clients will connect back to GD2 for
>> > updates. GD2 listens on the same port as GD1 and provides the required
>> > SunRPC programs.
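>> >
>> > One way to verify this after migration (24007 is the standard
>> > glusterd port; the ss tool comes from iproute2):
>> >
>> >     ss -tlnp | grep 24007   # should now show glusterd2 listening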
>> >
>> > Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> > versions, without volume downtime, will be possible.
>> >
>> > ### FAQ and additional info
>> >
>> > #### Both GD1 and GD2? What?
>> >
>> > While both GD1 and GD2 will be shipped, the GD1 shipped will
>> > essentially be the GD1 from the last 3.x series. It will not support
>> > any of the newer storage or management features being planned for 4.0.
>> > All new features will only be available from GD2.
>> >
>> > #### How long will GD1 be shipped/maintained for?
>> >
>> > We plan to maintain GD1 in the 4.x series for at least a couple of
>> > releases, including at least one LTM release. The current plan is to
>> > maintain it until 4.2. Beyond 4.2, users will need to first upgrade
>> > from 3.x to 4.2, and then upgrade to newer releases.
>> >
>> > #### Migration script
>> >
>> > The GD1 to GD2 migration script and the required features in GD2 are
>> > being planned only for 4.1. This would technically mean most users
>> > will only be able to migrate from 3.x to 4.1. But users can still
>> > migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> > improvements; they would only be missing the new features. Users who
>> > live on the edge should be able to do the migration manually in 4.0.
>> >
>> > ---
>> >
>> > Please note that the document above gives the expected upgrade
>> > strategy; it is neither final nor complete. More details will be added
>> > and steps will be expanded upon as we move forward.
>> >
>> > To move forward, we need your participation. Please reply to this
>> > thread with any comments you have. We will try to answer and resolve
>> > any questions or concerns. If there are good new ideas/suggestions,
>> > they will be integrated. If you just like it as is, let us know anyway.
>> >
>> > Thanks.
>> >
>> > Kaushal and Gluster Developers.
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users@xxxxxxxxxxx
>> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel



