No, the amount of data is still the same and the files are identical.

I'm running another rebalance now, with 3.4.2 packages that I've compiled
myself on our build server, so I'll see if that's any different.

Also, since I'm at it: I'm looking at the 'rebalance status' output (see the
command sketch at the end of this message) and I would expect it to be the
same on all the bricks (servers). However, the output is quite different.

On the 'primary' server, all 4 servers appear under 'Node': localhost and 3
hostnames. On its replica there are 3 primaries plus one hostname that changes
randomly (two hostnames from replica 2):

gluster08 - primary
Node
---------
localhost
gluster07.uat
gluster01.uat
gluster02.uat

gluster07 - replica with 08
Node
---------
localhost
gluster08.uat
gluster08.uat
gluster08.uat
gluster02.uat

On the second replica there are 4 localhosts plus one hostname that changes
randomly (two hostnames from replica 1 plus its own replica's hostname):

gluster01
Node
---------
localhost
localhost
localhost
localhost
gluster08.uat

gluster02 - replica with 01
Node
---------
localhost
localhost
localhost
localhost
gluster01.uat

Is this right?

v

On Fri 28 Feb 2014 08:45:37, Vijay Bellur wrote:
> On 02/28/2014 07:04 AM, Viktor Villafuerte wrote:
> > Also I should add here that I'm doing this on VMs. However, the rebalance
> > with 3.2.5 was done on the same VMs.
> >
> > v
>
> Load on the VMs and the hypervisors hosting the VMs could also have a
> bearing. Algorithmically we are much better in 3.4.2 than in 3.2.5, and our
> practical experience also seems to corroborate that.
>
> Has the amount of data involved in rebalancing changed since the last time
> this test was run?
>
> -Vijay
>
> > On Thu 27 Feb 2014 17:16:55, Viktor Villafuerte wrote:
> > > Hi Shylesh,
> > >
> > > yes, the log shows files being processed and eventually the rebalance
> > > completed (with skipped files), but it took much, much longer than with
> > > 3.2.5, which I tested initially.
> > >
> > > v

--
Regards

Viktor Villafuerte
Optus Internet Engineering
t: 02 808-25265

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
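
Command sketch referenced above - a minimal illustration of how the per-node
views were gathered, assuming a volume named 'uatvol' (the real volume name is
not given in this thread); the same query is run on each of gluster08,
gluster07, gluster01 and gluster02 and the 'Node' column compared:

    # per-node view of the running rebalance (the 'Node' column is what
    # differs between servers in the listings above)
    gluster volume rebalance uatvol status

    # peer view from the same server, for comparing hostnames vs 'localhost'
    gluster peer status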