Jiri,
I updated the op-version online yesterday without any problems, so I hope to migrate my old bricks to the new ones tomorrow night without hassle, using the remove-brick command once all the new bricks have been added.
My new bricks are smaller than the current ones but greater in number, so I couldn’t have used replace-brick in any case…
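For reference, this is roughly the sequence I intend to run (the volume, host and brick names below are placeholders for my real ones):

    # add the new, smaller bricks first
    gluster volume add-brick myvol new1:/data/brick1 new2:/data/brick1
    # then migrate the data off an old replica pair and drop it, as discussed below
    gluster volume remove-brick myvol replica 2 old1:/data/brick1 old2:/data/brick1 start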
thanks for the support….
Met vriendelijke groet / kind regards,
Sander Zijlstra
Hi Sander,
Sorry for not getting back to you.
I guess that when you don’t use quota, you do not need to run the scripts.
I do not have any experience changing the op-version on a running GlusterFS cluster, but looking at some threads it should be possible. I think, though, only when all clients run the same version as the server.
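From what I read, raising it cluster-wide should be a single command; 30600 is what I would expect the value to be for the 3.6 series, but please check that against the release notes first:

    gluster volume set all cluster.op-version 30600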
And good luck this weekend.
Grtz, Jiri
Jiri,
thanks, I totally missed the op-version part as it’s not mentioned in the upgrade instructions in the link you sent. Actually I did read that link, and because I do not use quota I didn’t run that script either.
Can I update the op-version while the volume is online and currently doing a rebalance, or shall I stop the rebalance, set the new op-version and then start the rebalance again?
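In other words, whether I need to do something like this (the volume name is just a placeholder):

    gluster volume rebalance myvol stop
    # ... set the new op-version here ...
    gluster volume rebalance myvol start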
many thanks for all the input….
Met vriendelijke groet / kind regards,
Sander Zijlstra
Hi Sander,
The operating-version=2 corresponds to GlusterFS version 3.4, so I guess you will still be using the old-style behaviour.
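You can check it on each server in the glusterd info file (default location shown below):

    grep operating-version /var/lib/glusterd/glusterd.info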
And don’t forget to upgrade the clients also.
Grtz, Jiri
Jiri,
thanks for the information; I just commented on a question about the op-version….
I upgraded all systems to 3.6.2; does this mean they will all use the correct op-version and not revert to the old-style behaviour?
Met vriendelijke groet / kind regards,
Sander Zijlstra
Hi Sander,
- Since version 3.6 the remove-brick command migrates the data away from the brick being removed, right?
It should :) I think this is the most complete documentation.
- When I have replicated bricks (replica 2), I also need to do “remove-brick <volume> replica 2 brick1 brick2 …”, right?
Yes, you need to remove both replicas at the same time.
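For a replica pair it would look something like this (the hostnames and brick paths are made up), with a commit once the migration has finished:

    gluster volume remove-brick myvol replica 2 srv1:/export/brick1 srv2:/export/brick1 start
    # once all data has been moved off the pair:
    gluster volume remove-brick myvol replica 2 srv1:/export/brick1 srv2:/export/brick1 commit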
Last but not least, is there any way to tell how long a “remove-brick” will take when it’s moving the data? I have dual 10Gb Ethernet between the cluster members, and the brick storage is a RAID-6 set which can read 400-600MB/sec without any problems.
That depends on the size of the disk, the number of files and the type of files. Network speed is less of an issue than the I/O on the disks/bricks. To migrate data from one disk to another (much like self-healing), GlusterFS will scan all files on the disk, which can cause high I/O on the disk.
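You can’t really predict the duration up front, but you can follow the progress with the status subcommand (same made-up names as above):

    gluster volume remove-brick myvol replica 2 srv1:/export/brick1 srv2:/export/brick1 status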
Because you also had some performance issues when you added bricks, I would expect the same with remove-brick, so do this at night if possible.
Grtz, Jiri
Dear all,
I’m planning to decommission a few servers from my cluster, so to confirm:
- Since version 3.6 the remove-brick command migrates the data away from the brick being removed, right?
- When I have replicated bricks (replica 2), I also need to do “remove-brick <volume> replica 2 brick1 brick2 …”, right?
Last but not least, is there any way to tell how long a “remove-brick” will take when it’s moving the data? I have dual 10Gb Ethernet between the cluster members, and the brick storage is a RAID-6 set which can read 400-600MB/sec without any problems.
Met vriendelijke groet / kind regards,
Sander Zijlstra
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users