Re: Core generated by trash.t

I should have been clearer: the regression link is irrelevant here. Try
running this test multiple times on your local setup against mainline;
I believe you will see the crash.
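
For reference, a minimal way to loop the test locally could look like the
following (the tests/features/trash.t path and the prove runner are
assumptions about the local setup; adjust to however your tree invokes the
regression tests):

    # Run trash.t repeatedly against a mainline build; stop at the first
    # failure so the core and logs from that run are preserved.
    for i in $(seq 1 10); do
        prove -vf tests/features/trash.t || break
    done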

~Atin

On 04/20/2016 03:36 PM, Anoop C S wrote:
> On Wed, 2016-04-20 at 13:25 +0530, Anoop C S wrote:
>> On Wed, 2016-04-20 at 01:21 +0530, Atin Mukherjee wrote:
>>>
>>> Regression run [1] failed on trash.t. The report doesn't mention any
>>> core file, but when I run the test locally, both with and without my
>>> changes, it generates a core.
>> On which platform did you run trash.t, NetBSD or Linux?
>>
>>>
>>>
>>> [1] https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15971/consoleFull
>>>
>> not ok 62 , LINENUM:245
>> FAILED COMMAND: start_vol patchy /mnt/glusterfs/0 /mnt/glusterfs/0/abc
>>
>> The start_vol function basically runs a volume start command and then
>> repeatedly checks for the presence of a directory named 'abc' under the
>> root of the volume (a rough sketch of such a helper follows the log
>> excerpt below). I see the following glusterd errors in the archived
>> logs (build-install-etc-glusterfs-glusterd.vol.log):
>>
>>>
>>> [2016-04-19 15:56:48.722283] W [common-utils.c:1805:gf_string2boolean] (-->0xb9c31f47 <glusterd_op_start_volume+0x3f4> at /build/install/lib/glusterfs/3.8dev/xlator/mgmt/glusterd.so -->0xbb733b56 <gf_string2boolean+0x77> at /build/install/lib/libglusterfs.so.0 ) 0-management: argument invalid [Invalid argument]
>>> [2016-04-19 15:56:48.766453] I [MSGID: 106144] [glusterd-pmap.c:270:pmap_registry_remove] 0-pmap: removing brick /d/backends/patchy11 on port 49153
>>> [2016-04-19 15:56:48.771041] E [MSGID: 106005] [glusterd-utils.c:4689:glusterd_brick_start] 0-management: Unable to start brick nbslave75.cloud.gluster.org:/d/backends/patchy1
>>> [2016-04-19 15:56:48.771132] E [MSGID: 106123] [glusterd-mgmt.c:306:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
>>> [2016-04-19 15:56:48.771161] E [MSGID: 106123] [glusterd-mgmt.c:1423:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
>>> [2016-04-19 15:56:48.771188] E [MSGID: 106123] [glusterd-mgmt.c:2014:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed
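>>
>> For context, here is a rough sketch of what a start_vol-style helper
>> amounts to, assuming the usual start-then-poll pattern; the real helper
>> lives in the test, so the names and timeout below are placeholders:
>>
>>     # Hypothetical sketch: start the volume, then poll for the expected
>>     # directory under the mount root, mirroring 'start_vol <vol> <mount> <dir>'.
>>     start_vol () {
>>         local vol=$1 mount=$2 dir=$3   # mount kept only to mirror the failed command
>>         gluster --mode=script volume start "$vol" || return 1
>>         for i in $(seq 1 20); do       # placeholder timeout of ~20 seconds
>>             test -d "$dir" && return 0
>>             sleep 1
>>         done
>>         return 1
>>     }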
>> Brick errors from bricks/d-backends-patchy1.log:
>>
>>>
>>> [2016-04-19 15:56:48.763066] I [rpcsvc.c:2218:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
>>> [2016-04-19 15:56:48.763127] W [MSGID: 101002] [options.c:954:xl_opt_validate] 0-patchy-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
>>> [2016-04-19 15:56:48.763273] E [socket.c:765:__socket_server_bind] 0-tcp.patchy-server: binding to  failed: Address already in use
>>> [2016-04-19 15:56:48.763293] E [socket.c:768:__socket_server_bind] 0-tcp.patchy-server: Port is already in use
>>> [2016-04-19 15:56:48.763314] W [rpcsvc.c:1600:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
>>> [2016-04-19 15:56:48.763332] W [MSGID: 115045] [server.c:1061:init] 0-patchy-server: creation of listener failed
>>> [2016-04-19 15:56:48.763351] E [MSGID: 101019] [xlator.c:430:xlator_init] 0-patchy-server: Initialization of volume 'patchy-server' failed, review your volfile again
>>> [2016-04-19 15:56:48.763368] E [MSGID: 101066] [graph.c:324:glusterfs_graph_init] 0-patchy-server: initializing translator failed
>>> [2016-04-19 15:56:48.763383] E [MSGID: 101176] [graph.c:670:glusterfs_graph_activate] 0-graph: init failed
>>> [2016-04-19 15:56:48.766235] W [glusterfsd.c:1265:cleanup_and_exit] (-->0x8050b83 <glusterfs_process_volfp+0x1a3> at /build/install/sbin/glusterfsd -->0x804e8e7 <cleanup_and_exit+0x8d> at /build/install/sbin/glusterfsd ) 0-: received signum (0), shutting down
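>>
>> Given the 'Address already in use' errors above, a quick check for a
>> stale brick process still holding the port might help (the port number
>> below is only an assumption; take the real one from the brick log or
>> volfile):
>>
>>     gluster volume status patchy    # port each brick is expected to use
>>     ps auxww | grep glusterfsd      # any leftover brick process?
>>     netstat -an | grep 49152        # is the port still in LISTEN?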
> 
> I found the following BZ with exactly the same brick error messages. Can
> you please confirm whether they are related?
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1322805
> 
>> Is this related to the patch (http://review.gluster.org/#/c/10785/)
>> against which this regression was run? I ask because I don't expect
>> volume start to fail.
>>
>> If you have the core dump, can you please share the backtrace for
>> further analysis?
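>>
>> Something along these lines usually pulls a full backtrace out of the
>> core (the binary and core paths are assumptions; point gdb at whichever
>> binary actually dumped):
>>
>>     # Assumed paths -- substitute the crashing binary and the core from your run.
>>     gdb -batch -ex 'thread apply all bt full' \
>>         /build/install/sbin/glusterfsd /path/to/core > backtrace.txt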
>>
>> Thanks,
>> --Anoop C S. 
>>
>>>
>>> ~Atin
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


