Re: Rolling upgrade from 3.6.3 to 3.10.5

Hi Diego,

Thanks for the information. I tried setting only 'allow-insecure on', but nada.
The sentence "If you are using GlusterFS version 3.4.x or below, you can upgrade it to following" in the documentation is surely misleading.
So would you suggest creating a new 3.10 cluster from scratch and then rsync(?) the data from the old cluster to the new one?
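If so, I imagine something roughly like this, with both volumes mounted as clients so Gluster handles the replication (the new-cluster hostname, volume name, and mount points below are only placeholders):

# mount -t glusterfs gs-nfs01:/gsnfs /mnt/old          (old 3.6 volume, read from here)
# mount -t glusterfs new-node01:/gsnfs310 /mnt/new     (new 3.10 volume, write here)
# rsync -aHAX --progress /mnt/old/ /mnt/new/           (-a archive, -H hardlinks, -A ACLs, -X user xattrs)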

On Fri, Aug 25, 2017 at 7:53 PM, Diego Remolina <dijuremo@xxxxxxxxx> wrote:
You cannot do a rolling upgrade from 3.6.x to 3.10.x. You will need downtime.

Even 3.6 to 3.7 was not possible; see some references below:

https://marc.info/?l=gluster-users&m=145136214452772&w=2
https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/
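Roughly, the offline path on your CentOS 6 nodes would look something like the below. This is just an outline, not the official procedure, and the exact package names depend on your repos, so double-check against the upgrade guide first:

# gluster volume stop <volname>                      (once, from any node)
# service glusterd stop                              (on every node)
# pkill gluster                                      (make sure no stray gluster processes remain)
# yum update glusterfs glusterfs-server glusterfs-fuse
# glusterd --xlator-option "*.upgrade=on" -N         (regenerate the volfiles)
# service glusterd start                             (on every node)
# gluster volume start <volname>                     (once, from any node)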

# gluster volume set <volname> server.allow-insecure on

Edit /etc/glusterfs/glusterd.vol to contain this line:

option rpc-auth-allow-insecure on

After step 1 above, restarting the volume would be necessary:

# gluster volume stop <volname>
# gluster volume start <volname>
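
To sanity-check that the first option took effect, it should show up under 'Options Reconfigured':

# gluster volume info <volname> | grep allow-insecure

Also note that glusterd only reads glusterd.vol at startup, so after editing that file you need to restart it:

# service glusterd restart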


HTH,

Diego

On Fri, Aug 25, 2017 at 7:46 AM, Yong Tseng <yongtw123@xxxxxxxxx> wrote:
> Hi all,
>
> I'm currently in the process of upgrading a replicated cluster (1 x 4) from
> 3.6.3 to 3.10.5. The nodes run CentOS 6. However, after upgrading the first
> node, that node fails to connect to the other peers (as seen via 'gluster
> peer status'), yet somehow the other, non-upgraded peers can still see the
> upgraded peer as connected.
>
> Writes to the Gluster volume via local mounts on non-upgraded peers are
> replicated to the upgraded peer, but I can't write via the upgraded peer, as
> its local mount seems to be forced read-only.
>
> Launching heal operations from a non-upgraded peer outputs 'Commit failed
> on <upgraded peer IP>. Please check log for details'.
>
> In addition, during the upgrade process there were warning messages about my
> old vol files being renamed with the .rpmsave extension. I tried starting Gluster
> with my old vol files, but the problem persisted. I also tried generating new vol
> files with 'glusterd --xlator-option "*.upgrade=on" -N', to no avail.
>
> Also, I checked the brick log; it had several messages about "failed to get
> client opversion". I don't know if this is pertinent. Could it be that the
> upgraded node cannot connect to the older nodes but can still receive
> instructions from them?
>
> Below are command outputs; some data are masked.
> I'll provide more information if required.
> Thanks in advance.
>
> ===> 'gluster volume status' run on non-upgraded peers
>
> Status of volume: gsnfs
> Gluster process                                         Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick gs-nfs01:/ftpdata                                 49154   Y       2931
> Brick gs-nfs02:/ftpdata                                 49152   Y       29875
> Brick gs-nfs03:/ftpdata                                 49153   Y       6987
> Brick gs-nfs04:/ftpdata                                 49153   Y       24768
> Self-heal Daemon on localhost                           N/A     Y       2938
> Self-heal Daemon on gs-nfs04                            N/A     Y       24788
> Self-heal Daemon on gs-nfs03                            N/A     Y       7007
> Self-heal Daemon on <IP>                                N/A     Y       29866
>
> Task Status of Volume gsnfs
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> ===> 'gluster volume status' on upgraded peer
>
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gs-nfs02:/ftpdata                     49152     0          Y       29875
> Self-heal Daemon on localhost               N/A       N/A        Y       29866
>
> Task Status of Volume gsnfs
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> ===> 'gluster peer status' on non-upgraded peer
>
> Number of Peers: 3
>
> Hostname: gs-nfs03
> Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad
> State: Peer in Cluster (Connected)
>
> Hostname: <IP>
> Uuid: 17d554fd-9181-4b53-9521-55acf69ac35f
> State: Peer in Cluster (Connected)
> Other names:
> gs-nfs02
>
> Hostname: gs-nfs04
> Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9
> State: Peer in Cluster (Connected)
>
>
>
> ===> 'gluster peer status' on upgraded peer
>
> Number of Peers: 3
>
> Hostname: gs-nfs03
> Uuid: 4c1544e6-550d-481a-95af-2a1da32d10ad
> State: Peer in Cluster (Disconnected)
>
> Hostname: gs-nfs01
> Uuid: 90d3ed27-61ac-4ad3-93a9-3c2b68f41ecf
> State: Peer in Cluster (Disconnected)
> Other names:
> <IP>
>
> Hostname: gs-nfs04
> Uuid: c6d165e6-d222-414c-b57a-97c64f06c5e9
> State: Peer in Cluster (Disconnected)
>
>
> --
> - Yong
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://lists.gluster.org/mailman/listinfo/gluster-users



--
- Yong
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
