Sweet! Here is the baseline:
[root@gqas001 ~]# gluster v rebalance testvol status
Node                                 Rebalanced-files     size   scanned  failures  skipped     status  run time in secs
----------------------------------   ----------------  -------  --------  --------  -------  ---------  ----------------
localhost                                     1328575   81.1GB   9402953         0        0  completed          98500.00
gqas012.sbu.lab.eng.bos.redhat.com                  0   0Bytes   8000011         0        0  completed          51982.00
gqas003.sbu.lab.eng.bos.redhat.com                  0   0Bytes   8000011         0        0  completed          51982.00
gqas004.sbu.lab.eng.bos.redhat.com            1326290   81.0GB   9708625         0        0  completed          98500.00
gqas013.sbu.lab.eng.bos.redhat.com                  0   0Bytes   8000011         0        0  completed          51982.00
gqas014.sbu.lab.eng.bos.redhat.com                  0   0Bytes   8000011         0        0  completed          51982.00
volume rebalance: testvol: success:
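
As a quick back-of-the-envelope check (my own arithmetic from the two non-zero rows above, not part of the status output), the baseline works out to roughly 27 files/sec and ~1.7 MB/sec aggregate:

# rough aggregate throughput for the baseline run; assumes only localhost
# and gqas004 (the non-zero rows) actually migrated data
awk 'BEGIN {
    files = 1328575 + 1326290;   # Rebalanced-files
    gb    = 81.1 + 81.0;         # size moved, in GB
    secs  = 98500;               # run time in secs
    printf "%.1f files/sec, %.2f MB/sec\n", files / secs, gb * 1024 / secs
}'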
I'll have a run on the patch started tomorrow.
-b
On Wed, Apr 29, 2015 at 12:51 PM, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:
> > Doh my mistake, I thought it was merged. I was just running with the
> > upstream 3.7 daily. Can I use this run as my baseline and then I can run
> > next time on the patch to show the % improvement? I'll wipe everything and
> > try on the patch, any idea when it will be merged?
>
> Yes, it would be very useful to have this run as the baseline. The patch has
> just been merged in master. It should be backported to 3.7 in a day or so.
>
> Regards,
> Nithya
> > > >
> > > > >
> > > > > On Wed, Apr 22, 2015 at 1:10 AM, Nithya Balachandran
> > > > > <nbalacha@xxxxxxxxxx>
> > > > > wrote:
> > > > >
> > > > > > That sounds great. Thanks.
> > > > > >
> > > > > > Regards,
> > > > > > Nithya
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > From: "Benjamin Turner" <bennyturns@xxxxxxxxx>
> > > > > > To: "Nithya Balachandran" <nbalacha@xxxxxxxxxx>
> > > > > > Cc: "Susant Palai" <spalai@xxxxxxxxxx>, "Gluster Devel" <
> > > > > > gluster-devel@xxxxxxxxxxx>
> > > > > > Sent: Wednesday, 22 April, 2015 12:14:14 AM
> > > > > > Subject: Re: Rebalance improvement design
> > > > > >
> > > > > > I am setting up a test env now, I'll have some feedback for you
> > > > > > this week.
> > > > > >
> > > > > > -b
> > > > > >
> > > > > > On Tue, Apr 21, 2015 at 11:36 AM, Nithya Balachandran
> > > > > > <nbalacha@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > > Hi Ben,
> > > > > > >
> > > > > > > Did you get a chance to try this out?
> > > > > > >
> > > > > > > Regards,
> > > > > > > Nithya
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: "Susant Palai" <spalai@xxxxxxxxxx>
> > > > > > > To: "Benjamin Turner" <bennyturns@xxxxxxxxx>
> > > > > > > Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> > > > > > > Sent: Monday, April 13, 2015 9:55:07 AM
> > > > > > > Subject: Re: Rebalance improvement design
> > > > > > >
> > > > > > > Hi Ben,
> > > > > > > Uploaded a new patch here: http://review.gluster.org/#/c/9657/.
> > > > > > > We can start perf test on it. :)
> > > > > > >
> > > > > > > Susant
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: "Susant Palai" <spalai@xxxxxxxxxx>
> > > > > > > To: "Benjamin Turner" <bennyturns@xxxxxxxxx>
> > > > > > > Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> > > > > > > Sent: Thursday, 9 April, 2015 3:40:09 PM
> > > > > > > Subject: Re: Rebalance improvement design
> > > > > > >
> > > > > > > Thanks Ben. An RPM is not available, and I am planning to refresh
> > > > > > > the patch in two days with some more regression fixes. I think we
> > > > > > > can run the tests after that. Any larger data set will be good
> > > > > > > (say 3 to 5 TB).
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Susant
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: "Benjamin Turner" <bennyturns@xxxxxxxxx>
> > > > > > > To: "Vijay Bellur" <vbellur@xxxxxxxxxx>
> > > > > > > Cc: "Susant Palai" <spalai@xxxxxxxxxx>, "Gluster Devel" <
> > > > > > > gluster-devel@xxxxxxxxxxx>
> > > > > > > Sent: Thursday, 9 April, 2015 2:10:30 AM
> > > > > > > Subject: Re: Rebalance improvement design
> > > > > > >
> > > > > > >
> > > > > > > I have some rebalance perf regression work in progress. Is
> > > > > > > there an RPM with these patches anywhere so that I can try it
> > > > > > > on my systems? If not I'll just build from:
> > > > > > >
> > > > > > >
> > > > > > > git fetch git://review.gluster.org/glusterfs refs/changes/57/9657/8 &&
> > > > > > > git cherry-pick FETCH_HEAD
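
For the record, one way to build after cherry-picking; this is only a sketch, assuming a standard glusterfs autotools checkout with build dependencies installed, and the configure flags are just an example:

git fetch git://review.gluster.org/glusterfs refs/changes/57/9657/8 && git cherry-pick FETCH_HEAD
./autogen.sh && ./configure --enable-debug
make -j"$(nproc)" && sudo make install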
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > I will have _at_least_ 10TB of storage, how many TBs of data
> > > > > > > should I run with?
> > > > > > >
> > > > > > >
> > > > > > > -b
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Apr 7, 2015 at 9:07 AM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 04/07/2015 03:08 PM, Susant Palai wrote:
> > > > > > >
> > > > > > >
> > > > > > > Here is one test performed on a 300GB data set; around a 100%
> > > > > > > improvement (roughly half the run time) was seen.
> > > > > > >
> > > > > > > [root@gprfs031 ~]# gluster v i
> > > > > > >
> > > > > > > Volume Name: rbperf
> > > > > > > Type: Distribute
> > > > > > > Volume ID: 35562662-337e-4923-b862-d0bbb0748003
> > > > > > > Status: Started
> > > > > > > Number of Bricks: 4
> > > > > > > Transport-type: tcp
> > > > > > > Bricks:
> > > > > > > Brick1: gprfs029-10ge:/bricks/gprfs029/brick1
> > > > > > > Brick2: gprfs030-10ge:/bricks/gprfs030/brick1
> > > > > > > Brick3: gprfs031-10ge:/bricks/gprfs031/brick1
> > > > > > > Brick4: gprfs032-10ge:/bricks/gprfs032/brick1
> > > > > > >
> > > > > > >
> > > > > > > Added server 32 and started rebalance force.
> > > > > > >
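
For anyone reproducing this, the expansion step would look roughly like the following (a sketch based on the brick layout shown above; the exact commands used in this run were not captured in the thread):

gluster volume add-brick rbperf gprfs032-10ge:/bricks/gprfs032/brick1
gluster volume rebalance rbperf start force
gluster volume rebalance rbperf status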
> > > > > > > Rebalance stat for new changes:
> > > > > > > [root@gprfs031 ~]# gluster v rebalance rbperf status
> > > > > > > Node           Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
> > > > > > > -------------  ----------------  ------  -------  --------  -------  ---------  ----------------
> > > > > > > localhost                 74639  36.1GB   297319         0        0  completed           1743.00
> > > > > > > 172.17.40.30              67512  33.5GB   269187         0        0  completed           1395.00
> > > > > > > gprfs029-10ge             79095  38.8GB   284105         0        0  completed           1559.00
> > > > > > > gprfs032-10ge                 0  0Bytes        0         0        0  completed            402.00
> > > > > > > volume rebalance: rbperf: success:
> > > > > > >
> > > > > > > Rebalance stat for old model:
> > > > > > > [root@gprfs031 ~]# gluster v rebalance rbperf status
> > > > > > > Node           Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
> > > > > > > -------------  ----------------  ------  -------  --------  -------  ---------  ----------------
> > > > > > > localhost                 86493  42.0GB   634302         0        0  completed           3329.00
> > > > > > > gprfs029-10ge             94115  46.2GB   687852         0        0  completed           3328.00
> > > > > > > gprfs030-10ge             74314  35.9GB   651943         0        0  completed           3072.00
> > > > > > > gprfs032-10ge                 0  0Bytes   594166         0        0  completed           1943.00
> > > > > > > volume rebalance: rbperf: success:
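
Comparing the longest per-node run times in the two outputs above (1743s with the patch vs 3329s with the old model; my arithmetic, not part of the original report), the patched run is roughly 1.9x faster, i.e. it finished in about half the time:

awk 'BEGIN { old = 3329; new = 1743; printf "%.2fx faster, %.0f%% of the old run time\n", old / new, 100 * new / old }'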
> > > > > > >
> > > > > > >
> > > > > > > This is interesting. Thanks for sharing & well done! Maybe we
> > > > > > > should attempt a much larger data set and see how we fare there :).
> > > > > > >
> > > > > > > Regards,
> > > > > > >
> > > > > > >
> > > > > > > Vijay
> > > > > > >
> > > > > > >
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel