Re: Rebalance is not working in single node cluster environment.


 



Sent from Samsung Galaxy S4
On 13 Jun 2015 14:11, "Raghavendra Talur" <raghavendra.talur@xxxxxxxxx> wrote:
>
>
>
> On Sat, Jun 13, 2015 at 1:36 PM, Niels de Vos <ndevos@xxxxxxxxxx> wrote:
>>
>> On Sat, Jun 13, 2015 at 01:15:04PM +0530, Raghavendra Talur wrote:
>> > On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee <atin.mukherjee83@xxxxxxxxx>
>> > wrote:
>> >
>> > > Sent from Samsung Galaxy S4
>> > > On 13 Jun 2015 12:58, "Anand Nekkunti" <anekkunt@xxxxxxxxxx> wrote:
>> > > >
>> > > > Hi All
>> > > >    Rebalance is not working in a single-node cluster environment (the
>> > > > current test framework). I am getting an error in the test below; it seems
>> > > > rebalance has not been migrated to the current cluster test framework.
>> > > Could you pinpoint which test case fails and what you see in the logs?
>> > > >
>> > > > cleanup;
>> > > > TEST launch_cluster 2;
>> > > > TEST $CLI_1 peer probe $H2;
>> > > >
>> > > > EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
>> > > >
>> > > > $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
>> > > > EXPECT 'Created' volinfo_field $V0 'Status';
>> > > >
>> > > > $CLI_1 volume start $V0
>> > > > EXPECT 'Started' volinfo_field $V0 'Status';
>> > > >
>> > > > #Mount FUSE
>> > > > TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
>> > > >
>> > > > TEST mkdir $M0/dir{1..4};
>> > > > TEST touch $M0/dir{1..4}/files{1..4};
>> > > >
>> > > > TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
>> > > >
>> > > > TEST $CLI_1 volume rebalance $V0  start
>> > > >
>> > > > EXPECT_WITHIN 60 "completed" CLI_1_rebalance_status_field $V0
>> > > >
>> > > > $CLI_2 volume status $V0
>> > > > EXPECT 'Started' volinfo_field $V0 'Status';
>> > > >
>> > > > cleanup;
>> > > >
>> > > > Regards
>> > > > Anand.N
>> > > >
>> > > >
>> > > >
>> > > > _______________________________________________
>> > > > Gluster-devel mailing list
>> > > > Gluster-devel@xxxxxxxxxxx
>> > > > http://www.gluster.org/mailman/listinfo/gluster-devel
>> > > >
>> > >
>> > >
>> > >
>> > If glusterd crashes when you run "rebalance start", it is because of
>> > FORTIFY_FAIL in libc.
>> > Here is the patch that Susant has already sent:
>> > http://review.gluster.org/#/c/11090/
>> >
>> > You can verify that it is the same crash by checking the core in gdb;
>> > a SIGABRT would be raised after the strncpy call.
>>
>> Sounds like we should use _FORTIFY_SOURCE for running our regression
>> tests? Patches for build.sh or one of the other scripts are welcome!
>>
>> You can get them here:
>>     https://github.com/gluster/glusterfs-patch-acceptance-tests/
>>
>> Thanks,
>> Niels
>
>
> Yes, Kaushal and Vijay also agreed to have our regression use this flag.
>
> I have discovered a problem, though. For glibc to detect these possible overflows,
> we need -D_FORTIFY_SOURCE at level 2 and the -O optimization flag at a minimum
> of 1, with 2 recommended.
> Read this for more info: https://gcc.gnu.org/ml/gcc-patches/2004-09/msg02055.html
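For reference, a minimal sketch of the flags involved (where exactly they would be set in build.sh or the other regression scripts is an assumption here):

```shell
# Hypothetical sketch: CFLAGS for a fortified regression build.
# -D_FORTIFY_SOURCE=2 only takes effect with optimization enabled
# (-O1 minimum, -O2 recommended); without -O the checks are silently
# disabled by glibc.
CFLAGS="-g -O2 -D_FORTIFY_SOURCE=2"
export CFLAGS

# The build would then proceed as usual, e.g.:
#   ./autogen.sh && ./configure && make
echo "$CFLAGS"
```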
>
>
> Not sure if having -O2 will make debugging other cores more difficult.
>
>
> If nobody objects to -O2, I believe I have created the pull request correctly.
> Please merge:
> https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/1
I feel we should try to maintain uniformity everywhere as far as compilation flags are concerned.
>
>
> --
> Raghavendra Talur 
>

