On 25/06/2014, at 6:21 AM, Franco Broi wrote:
> Ok, I'm going to try this tomorrow. Anyone have anything else to add??

Um, "let us know how it goes". :)

> What's the worst that can happen?

Kicks off robot armageddon?

+ Justin

> On Mon, 2014-06-23 at 20:11 +0530, Kaushal M wrote:
>> On Wed, Jun 18, 2014 at 6:58 PM, Justin Clift <justin@xxxxxxxxxxx> wrote:
>>> On 18/06/2014, at 9:36 AM, Kaushal M wrote:
>>>> You are right. Since you had installed gluster-3.5, the operating
>>>> version had changed to 3, which is the operating version of gluster
>>>> 3.5. This op-version is saved in the glusterd.info file in
>>>> /var/lib/glusterd.
>>>> The op-version of gluster-3.4 is just 2. So when you downgraded,
>>>> glusterd refused to start as it couldn't support features that could
>>>> have been enabled when you were running gluster-3.5.
>>>
>>> What's the workaround? Delete /var/lib/glusterd/glusterd.info after
>>> the downgrade, before starting the 3.4 daemons?
>>>
>> Deleting glusterd.info would be a bad idea, as it would cause the UUID
>> to be regenerated. Instead, you could just edit the glusterd.info
>> file and set operating-version to 2. This would allow glusterd to
>> start up. I would also suggest that the volfiles be regenerated as
>> well, by running
>> 'glusterd --xlator-option *.upgrade=on -N'.
>> If volfiles were generated with newer features/xlators etc. while
>> running at the higher version, this would make sure that the volfiles
>> are again compliant with 3.4. This would only work if no new features
>> have been explicitly enabled. In that case you'd need to edit the
>> volinfo files and remove any new options that were enabled, before
>> regenerating the volfiles.
>>
>> ~kaushal
>>
>>> + Justin
>>>
>>> --
>>> GlusterFS - http://www.gluster.org
>>>
>>> An open source, distributed file system scaling to several
>>> petabytes, and handling thousands of clients.
>>>
>>> My personal twitter: twitter.com/realjustinclift
>>>

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
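[Editor's note] The downgrade steps Kaushal describes could be sketched roughly as the shell snippet below. It runs against a scratch copy of glusterd.info (the UUID value is made up for illustration); on a real node the file lives in /var/lib/glusterd, glusterd should be stopped first, and the volfile regeneration command from the mail would be run afterwards.

```shell
#!/bin/sh
# Sketch only: use a scratch directory so nothing live is touched.
# On a real node GLUSTERD_DIR would be /var/lib/glusterd.
GLUSTERD_DIR=$(mktemp -d)

# Stand-in glusterd.info as gluster-3.5 would leave it (hypothetical UUID).
printf 'UUID=5a79b5eb-1111-2222-3333-444444444444\noperating-version=3\n' \
    > "$GLUSTERD_DIR/glusterd.info"

# Do NOT delete glusterd.info -- that would regenerate the peer UUID.
# Rewrite only the operating-version line back to 2 (gluster-3.4's op-version).
sed -i 's/^operating-version=.*/operating-version=2/' "$GLUSTERD_DIR/glusterd.info"

cat "$GLUSTERD_DIR/glusterd.info"

# On the real node, finish by regenerating the volfiles for 3.4,
# as per Kaushal's mail:
#   glusterd --xlator-option *.upgrade=on -N
```

Note this leaves the UUID line untouched, which is the whole point of editing the file in place rather than deleting it.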