Hi Milos,
You can do this already by changing the baseurl format to look like this. Note the 3.4 between glusterfs and LATEST.
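(The example itself did not survive in this archived copy; the following is an illustrative sketch of such a version-pinned baseurl, assuming the download.gluster.org repository layout of the time - verify the exact path against the server before using it.)

```ini
# /etc/yum.repos.d/glusterfs-epel.repo (illustrative sketch, paths unverified)
[glusterfs-epel]
name=GlusterFS repository, pinned to the 3.4 release series
# Note the "3.4" between "glusterfs" and "LATEST": LATEST now only tracks
# point releases within 3.4, never a jump to the next major series.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
gpgcheck=0
```

With a pinned baseurl like this, nightly updates can at most move between point releases of the chosen series.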
I tend not to have yum auto updates on anything production because even minor version upgrades can cause unforeseen problems.
J.
On Mon, Aug 4, 2014 at 4:01 PM, Milos Kozak <milos.kozak@xxxxxxxxx> wrote:
Let me contribute to the upgrade discussion. I ended up with the same problem, but in my case it was on a testing setup. There the problem was caused by the automatic nightly upgrade, which I have turned on on my CentOS servers. Every time you release new RPMs my servers upgrade automatically - with a minor version it is usually not a problem, but a major one..
So I would like to suggest making the directory hierarchy versioned - providing folders 3.4 / 3.5 / 3.6 / LATEST in your repository, as other projects do.
This won't resolve this kind of issue entirely, but when you release 3.6 my servers will not upgrade automatically in the middle of the night.
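In the meantime, one way to keep a nightly update run from pulling in new gluster packages at all is to exclude them in yum's configuration and apply them manually instead (a sketch for yum-based systems; the package glob may need adjusting for your install):

```ini
# /etc/yum.conf, [main] section (also honoured by yum-cron)
# Skip gluster packages during automatic updates; upgrade them by hand
# during a planned maintenance window instead.
exclude=glusterfs*
```

A deliberate upgrade can then be run with `yum update 'glusterfs*' --disableexcludes=main` when you are ready.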
Thanks, Milos
On 8/2/2014 4:37 PM, Pranith Kumar Karampuri wrote:
> On 08/03/2014 01:43 AM, Tiemen Ruiten wrote:
>> On 08/02/14 20:12, Pranith Kumar Karampuri wrote:
>>> On 08/02/2014 06:50 PM, Tiemen Ruiten wrote:
>>>> Hello,
>>>>
>>>> I'm cross-posting this from ovirt-users:
>>>>
>>>> I have an oVirt environment backed by a two-node Gluster cluster.
>>>> Yesterday I decided to upgrade from GlusterFS 3.5.1 to 3.5.2, but
>>>> that caused the gluster daemon to stop, and now I have several lines
>>>> like this in my log for the volume that hosts the VM images, called
>>>> vmimage:
>>>>
>>>> [2014-08-02 12:56:20.994767] E
>>>> [afr-self-heal-common.c:233:afr_sh_print_split_brain_log]
>>>> 0-vmimage-replicate-0: Unable to self-heal contents of
>>>> 'f09c211d-eb49-4715-8031-85a5a8f39f18' (possible split-brain). Please
>>>> delete the file from all but the preferred subvolume. - Pending
>>>> matrix: [ [ 0 408 ] [ 180 0 ] ]
>>> Did the upgrade happen while the volume was still running?
>> Yes...
> I guess we need to document the upgrade process if not already done.
>>> This is the document that talks about how to resolve split-brains in
>>> gluster:
>>> https://github.com/gluster/glusterfs/blob/master/doc/split-brain.md
>>> Let us know if you have any doubts about this document.
>> OK, I will try that.
>>>> What I would like to do is the following, since I'm not 100% happy
>>>> anyway with how the volume is set up:
>>>>
>>>> - Stop VDSM on the oVirt hosts / unmount the volume
>>>> - Stop the current vmimage volume and rename it
>>> Is this a gluster volume? Gluster volumes can't be renamed.
>> That surprises me: in the man page I find this:
>>
>>     volume rename <VOLNAME> <NEW-VOLNAME>
>>         Rename the specified volume.
> One more documentation bug :-(
>> OK, I will try to resolve it with the guide for split-brain scenarios
>> first.
>>>> - Create a new vmimage volume
>>>> - Copy the images from one of the nodes
>>> Where will these images be copied to? Onto the gluster mount? If yes,
>>> then there is no need to sync.
>>>> - Start the volume and let it sync
>>>> - Restart VDSM / mount the volume
>>>>
>>>> Is this going to work? Or is there critical metadata that will not be
>>>> transferred with these steps?
>>>>
>>>> Tiemen
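An aside on reading that log line: in AFR, entry [i][j] of the pending matrix counts operations that brick i still holds as pending against brick j. In the matrix above, [ [ 0 408 ] [ 180 0 ] ], both off-diagonal entries are non-zero - each brick accuses the other of missing writes, so neither copy can be picked as the good source. A toy illustration of that rule (my own simplification for the 2-replica case, not GlusterFS code):

```python
def is_data_split_brain(pending):
    """Toy split-brain check for a 2-replica pending matrix.

    pending[i][j] = operations brick i records as still pending on
    brick j.  If each brick blames the other (both off-diagonal
    entries non-zero), no brick is a clean source for self-heal.
    """
    return pending[0][1] > 0 and pending[1][0] > 0

# The matrix from the vmimage log above: mutual accusation, split-brain.
print(is_data_split_brain([[0, 408], [180, 0]]))  # True

# A healable case: only brick 0 blames brick 1, so brick 0 is the source.
print(is_data_split_brain([[0, 408], [0, 0]]))    # False
```

This is also why the linked split-brain document has you pick a "preferred" copy by hand: the matrix alone gives self-heal no way to choose.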
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users