---------- Forwarded message ----------
From: Kaushal M <kshlmster@xxxxxxxxx>
Date: Thu, Jul 5, 2012 at 12:46 PM
Subject: Re: Fwd: Bug#679767: glusterfs-server: Crash when creating new volume with 'gluster volume create'
To: Louis Zuckerman <glusterdevel@xxxxxxxxxxxxxxxxxx>
Hi guys,
This looks like it's caused by the optimizations done by gcc 4.7. It occurs when gluster is compiled with the default -O2 optimization; -O0 doesn't cause it. Can you confirm?
- Kaushal
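(For anyone who wants to check this themselves: a rough sketch of rebuilding the Debian package without optimization. It assumes an unpacked glusterfs source tree with a debian/ directory; "noopt" and "nostrip" are the standard DEB_BUILD_OPTIONS values that yield -O0 binaries with debug symbols. Not from the original mail, just a suggested test.)

```shell
#!/bin/sh
# Rebuild the package at -O0 to test the gcc-4.7 optimization hypothesis.
opts="noopt nostrip"   # Debian Policy: noopt -> -O0, nostrip -> keep symbols

# Guarded: only runs inside an unpacked Debian source tree.
if command -v dpkg-buildpackage >/dev/null 2>&1 && [ -f debian/rules ]; then
    DEB_BUILD_OPTIONS="$opts" dpkg-buildpackage -us -uc -b
fi
```

If the crash disappears in the -O0 build but returns at -O2, that points at code with undefined behavior that the optimizer exposes, rather than a compiler bug per se.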
On Wed, Jul 4, 2012 at 6:31 PM, Louis Zuckerman <glusterdevel@xxxxxxxxxxxxxxxxxx> wrote:
Hi J.J.B,
Thanks for reporting the bug.
I can confirm this is very easy to reproduce...
Install the 3.2.7 package from wheezy/sid and try to create a volume
(with a single brick on the local machine): glusterd crashes.
Restart glusterd and you can start the volume, but glusterd crashes
again. Restart glusterd once more and you can stop the volume, but it
crashes yet again. Then restarting glusterd one more time lets you
delete the volume, with still another crash after that.
I'll check the glusterfs bugzilla for related issues & open a bug if
there's not one already. Will follow up later today with the link.
Also, want to clear this up:
> > - if not, is the created volume working?
> The volume is created and working, but I cannot stop its process with
> the init-script (/etc/init.d/glusterfs-server stop). The init-script
> will only stop the management-daemon and I have to kill the volume
> manually.
That is expected behavior. The glusterfs-server initscript only
controls glusterd, the management daemon. Stopping and starting the
glusterfsd brick export daemons for a volume is done with the
"gluster volume stop/start" commands in the gluster CLI, which act
on all bricks in the volume across all servers.
HTH
-louis
On Tue, Jul 3, 2012 at 1:11 PM, Patrick Matthäi <pmatthaei@xxxxxxxxxx> wrote:
> Hello gluster guys,
>
> we have found a bug where glusterd crashes every time a volume is
> created or deleted. Full information and backtraces here:
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=679767
>
> Any idea?
>
>
> Package: glusterfs-server
> Version: 3.2.7-1
> Severity: normal
>
> Dear Maintainer,
>
> After installing glusterfs-server, when I try to create a volume
> "wheezy" the glusterd-daemon crashes. It seems that something goes wrong
> in the communication between "gluster" and "glusterd". A request is
> sent, but no reply arrives (checked this with wireshark).
>
> Command executed:
> # gluster volume create wheezy wheezy:/tmp
>
> Trace of glusterd:
> # gdb --args /usr/sbin/glusterd --debug -p /var/run/glusterd.pid
> --volfile=/etc/glusterfs/glusterd.vol
> GNU gdb (GDB) 7.4.1-debian
> Copyright (C) 2012 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later
> <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law. Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>...
> Reading symbols from /usr/sbin/glusterd...Reading symbols from
> /usr/lib/debug/usr/sbin/glusterfsd...done.
> done.
> (gdb) run
> Starting program: /usr/sbin/glusterd --debug -p /var/run/glusterd.pid
> --volfile=/etc/glusterfs/glusterd.vol
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> [2012-07-01 14:47:13.420700] I [glusterfsd.c:1493:main]
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.2.7
> [2012-07-01 14:47:13.420888] D
> [glusterfsd.c:1235:glusterfs_pidfile_update] 0-glusterfsd: pidfile
> /var/run/glusterd.pid updated with pid 2511
> [New Thread 0x7ffff6194700 (LWP 2514)]
> [2012-07-01 14:47:13.422436] D [glusterfsd.c:374:get_volfp]
> 0-glusterfsd: loading volume file /etc/glusterfs/glusterd.vol
> [2012-07-01 14:47:13.459256] D [xlator.c:1302:xlator_dynload] 0-xlator:
> dlsym(reconfigure) on /usr/lib/glusterfs/3.2.7/xlator/mgmt/glusterd.so:
> undefined symbol: reconfigure -- neglecting
> [2012-07-01 14:47:13.459350] D [xlator.c:1308:xlator_dynload] 0-xlator:
> dlsym(validate_options) on
> /usr/lib/glusterfs/3.2.7/xlator/mgmt/glusterd.so: undefined symbol:
> validate_options -- neglecting
> [2012-07-01 14:47:13.459561] I [glusterd.c:550:init] 0-management: Using
> /etc/glusterd as working directory
> [2012-07-01 14:47:13.459641] D
> [glusterd.c:242:glusterd_rpcsvc_options_build] 0-: listen-backlog value: 128
> [2012-07-01 14:47:13.460080] D [rpcsvc.c:1771:rpcsvc_init]
> 0-rpc-service: RPC service inited.
> [2012-07-01 14:47:13.460136] D [rpcsvc.c:1568:rpcsvc_program_register]
> 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1,
> Port: 0
> [2012-07-01 14:47:13.460223] D [rpc-transport.c:673:rpc_transport_load]
> 0-rpc-transport: attempt to load file
> /usr/lib/glusterfs/3.2.7/rpc-transport/socket.so
> [2012-07-01 14:47:13.466313] D
> [rpc-transport.c:97:__volume_option_value_validate] 0-socket.management:
> no range check required for 'option transport.socket.listen-backlog 128'
> [2012-07-01 14:47:13.466582] D
> [rpc-transport.c:97:__volume_option_value_validate] 0-socket.management:
> no range check required for 'option transport.socket.keepalive-interval 2'
> [2012-07-01 14:47:13.466777] D
> [rpc-transport.c:97:__volume_option_value_validate] 0-socket.management:
> no range check required for 'option transport.socket.keepalive-time 10'
> [2012-07-01 14:47:13.466976] D [name.c:552:server_fill_address_family]
> 0-socket.management: option address-family not specified, defaulting to
> inet/inet6
> [2012-07-01 14:47:13.467371] D [rpc-transport.c:673:rpc_transport_load]
> 0-rpc-transport: attempt to load file
> /usr/lib/glusterfs/3.2.7/rpc-transport/rdma.so
> [2012-07-01 14:47:13.475817] C [rdma.c:3934:rdma_init]
> 0-rpc-transport/rdma: Failed to get IB devices
> [2012-07-01 14:47:13.476685] E [rdma.c:4813:init] 0-rdma.management:
> Failed to initialize IB Device
> [2012-07-01 14:47:13.477039] E [rpc-transport.c:742:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2012-07-01 14:47:13.477339] W [rpcsvc.c:1288:rpcsvc_transport_create]
> 0-rpc-service: cannot create listener, initing the transport failed
> [2012-07-01 14:47:13.477721] D [rpcsvc.c:1568:rpcsvc_program_register]
> 0-rpc-service: New program registered: GlusterD0.0.1, Num: 1298433, Ver:
> 1, Port: 0
> [2012-07-01 14:47:13.478053] D [rpcsvc.c:1568:rpcsvc_program_register]
> 0-rpc-service: New program registered: GlusterD svc cli, Num: 1238463,
> Ver: 1, Port: 0
> [2012-07-01 14:47:13.478378] D [rpcsvc.c:1568:rpcsvc_program_register]
> 0-rpc-service: New program registered: GlusterD svc mgmt, Num: 1238433,
> Ver: 1, Port: 0
> [2012-07-01 14:47:13.478742] D [rpcsvc.c:1568:rpcsvc_program_register]
> 0-rpc-service: New program registered: Gluster Portmap, Num: 34123456,
> Ver: 1, Port: 0
> [2012-07-01 14:47:13.479008] D [rpcsvc.c:1568:rpcsvc_program_register]
> 0-rpc-service: New program registered: GlusterFS Handshake, Num:
> 14398633, Ver: 1, Port: 0
> [2012-07-01 14:47:13.479308] D
> [glusterd-utils.c:3136:glusterd_sm_tr_log_init] 0-: returning 0
> [2012-07-01 14:47:13.479654] D
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0
> [2012-07-01 14:47:13.479960] D
> [glusterd-store.c:1155:glusterd_store_handle_retrieve] 0-: Returning 0
> [2012-07-01 14:47:13.480298] D
> [glusterd-store.c:1038:glusterd_store_retrieve_value] 0-: key UUID read
> [2012-07-01 14:47:13.480589] D
> [glusterd-store.c:1041:glusterd_store_retrieve_value] 0-: key UUID found
> [2012-07-01 14:47:13.480898] D
> [glusterd-store.c:1272:glusterd_retrieve_uuid] 0-: Returning 0
> [2012-07-01 14:47:13.481186] I [glusterd.c:88:glusterd_uuid_init]
> 0-glusterd: retrieved UUID: 46aa3f36-9c98-4668-aff4-7234ef2b217e
> [2012-07-01 14:47:13.534063] D
> [glusterd.c:302:glusterd_check_gsync_present] 0-: Returning 0
> [2012-07-01 14:47:13.534159] D
> [glusterd.c:361:glusterd_crt_georep_folders] 0-: Returning 0
> [2012-07-01 14:47:14.678812] D
> [glusterd-store.c:1914:glusterd_store_retrieve_volumes] 0-: Returning with 0
> [2012-07-01 14:47:14.678939] D
> [glusterd-store.c:2262:glusterd_store_retrieve_peers] 0-: Returning with 0
> [2012-07-01 14:47:14.678967] D
> [glusterd-store.c:2292:glusterd_resolve_all_bricks] 0-: Returning with 0
> [2012-07-01 14:47:14.678994] D [glusterd-store.c:2319:glusterd_restore]
> 0-: Returning 0
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume management
> 2: type mgmt/glusterd
> 3: option working-directory /etc/glusterd
> 4: option transport-type socket,rdma
> 5: option transport.socket.keepalive-time 10
> 6: option transport.socket.keepalive-interval 2
> 7: end-volume
> 8:
>
> +------------------------------------------------------------------------------+
> [2012-07-01 14:47:18.301475] D
> [glusterd-op-sm.c:8544:glusterd_op_set_cli_op] 0-: Returning 0
> [2012-07-01 14:47:18.301537] I
> [glusterd-handler.c:900:glusterd_handle_create_volume] 0-glusterd:
> Received create volume req
> [2012-07-01 14:47:18.301607] D
> [glusterd-utils.c:493:glusterd_check_volume_exists] 0-: Volume wheezy
> does not exist.stat failed with errno : 2 on path: /etc/glusterd/vols/wheezy
> [2012-07-01 14:47:18.301687] D
> [glusterd-utils.c:630:glusterd_brickinfo_new] 0-: Returning 0
> [2012-07-01 14:47:18.301700] D
> [glusterd-utils.c:687:glusterd_brickinfo_from_brick] 0-: Returning 0
> [2012-07-01 14:47:18.302509] D
> [glusterd-utils.c:2755:glusterd_friend_find_by_hostname] 0-glusterd:
> Unable to find friend: wheezy
> [2012-07-01 14:47:18.302631] D
> [glusterd-utils.c:211:glusterd_is_local_addr] 0-glusterd: wheezy is local
> [2012-07-01 14:47:18.302652] D
> [glusterd-utils.c:2789:glusterd_hostname_to_uuid] 0-: returning 0
> [2012-07-01 14:47:18.302661] D
> [glusterd-utils.c:642:glusterd_resolve_brick] 0-: Returning 0
> [2012-07-01 14:47:18.302673] D
> [glusterd-utils.c:2927:glusterd_new_brick_validate] 0-: returning 0
> [2012-07-01 14:47:18.302682] D
> [glusterd-utils.c:760:glusterd_volume_brickinfo_get] 0-: Returning -1
> [2012-07-01 14:47:18.302699] I [glusterd-utils.c:243:glusterd_lock]
> 0-glusterd: Cluster lock held by 46aa3f36-9c98-4668-aff4-7234ef2b217e
> [2012-07-01 14:47:18.302709] I
> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired
> local lock
> [2012-07-01 14:47:18.302722] D
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:
> Enqueuing event: 'GD_OP_EVENT_START_LOCK'
> [2012-07-01 14:47:18.302731] D
> [glusterd-handler.c:424:glusterd_op_txn_begin] 0-glusterd: Returning 0
> [2012-07-01 14:47:18.302756] D
> [glusterd-utils.c:577:glusterd_volume_brickinfos_delete] 0-: Returning 0
> [2012-07-01 14:47:18.302769] D [glusterd-op-sm.c:8449:glusterd_op_sm]
> 0-: Dequeued event of type: 'GD_OP_EVENT_START_LOCK'
> [2012-07-01 14:47:18.302779] D
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.302787] D
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0
> [2012-07-01 14:47:18.302797] D
> [glusterd-op-sm.c:6462:glusterd_op_ac_send_lock] 0-: Returning with 0
> [2012-07-01 14:47:18.302806] D
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Default' to 'Lock sent' due to event
> 'GD_OP_EVENT_START_LOCK'
> [2012-07-01 14:47:18.302815] D
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-07-01 14:47:18.302823] D [glusterd-op-sm.c:8449:glusterd_op_sm]
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.302849] D
> [glusterd-utils.c:493:glusterd_check_volume_exists] 0-: Volume wheezy
> does not exist.stat failed with errno : 2 on path: /etc/glusterd/vols/wheezy
> [2012-07-01 14:47:18.302867] D
> [glusterd-utils.c:630:glusterd_brickinfo_new] 0-: Returning 0
> [2012-07-01 14:47:18.302878] D
> [glusterd-utils.c:687:glusterd_brickinfo_from_brick] 0-: Returning 0
> [2012-07-01 14:47:18.302918] D
> [glusterd-utils.c:2755:glusterd_friend_find_by_hostname] 0-glusterd:
> Unable to find friend: wheezy
> [2012-07-01 14:47:18.302955] D
> [glusterd-utils.c:211:glusterd_is_local_addr] 0-glusterd: wheezy is local
> [2012-07-01 14:47:18.302965] D
> [glusterd-utils.c:2789:glusterd_hostname_to_uuid] 0-: returning 0
> [2012-07-01 14:47:18.302973] D
> [glusterd-utils.c:642:glusterd_resolve_brick] 0-: Returning 0
> [2012-07-01 14:47:18.302984] D
> [glusterd-utils.c:3013:glusterd_brick_create_path] 0-: returning 0
> [2012-07-01 14:47:18.302993] D
> [glusterd-op-sm.c:386:glusterd_op_stage_create_volume] 0-: Returning 0
> [2012-07-01 14:47:18.303001] D
> [glusterd-op-sm.c:7584:glusterd_op_stage_validate] 0-: Returning 0
> [2012-07-01 14:47:18.303014] I
> [glusterd-op-sm.c:6737:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op
> req to 0 peers
> [2012-07-01 14:47:18.303027] D
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.303039] D
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0
> [2012-07-01 14:47:18.303047] D
> [glusterd-op-sm.c:6742:glusterd_op_ac_send_stage_op] 0-: Returning with 0
> [2012-07-01 14:47:18.303055] D
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Lock sent' to 'Stage op sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.303064] D
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-07-01 14:47:18.303077] D [glusterd-op-sm.c:8449:glusterd_op_sm]
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.303099] D
> [glusterd-op-sm.c:8092:glusterd_op_bricks_select] 0-: Returning 0
> [2012-07-01 14:47:18.303112] D
> [glusterd-rpc-ops.c:1903:glusterd3_1_brick_op] 0-glusterd: Sent op req
> to 0 bricks
> [2012-07-01 14:47:18.303120] D
> [glusterd-rpc-ops.c:1911:glusterd3_1_brick_op] 0-glusterd: Returning 0
> [2012-07-01 14:47:18.303129] D
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACK'
> [2012-07-01 14:47:18.303137] D
> [glusterd-op-sm.c:8007:glusterd_op_ac_send_brick_op] 0-: Returning with 0
> [2012-07-01 14:47:18.303145] D
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Stage op sent' to 'Brick op sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.303154] D
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-07-01 14:47:18.303161] D [glusterd-op-sm.c:8449:glusterd_op_sm]
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACK'
> [2012-07-01 14:47:18.303179] D
> [glusterd-utils.c:538:glusterd_volinfo_new] 0-: Returning 0
> [2012-07-01 14:47:18.303198] D
> [glusterd-utils.c:630:glusterd_brickinfo_new] 0-: Returning 0
> [2012-07-01 14:47:18.303208] D
> [glusterd-utils.c:687:glusterd_brickinfo_from_brick] 0-: Returning 0
> [2012-07-01 14:47:18.303244] D
> [glusterd-utils.c:2755:glusterd_friend_find_by_hostname] 0-glusterd:
> Unable to find friend: wheezy
> [2012-07-01 14:47:18.303278] D
> [glusterd-utils.c:211:glusterd_is_local_addr] 0-glusterd: wheezy is local
> [2012-07-01 14:47:18.303289] D
> [glusterd-utils.c:2789:glusterd_hostname_to_uuid] 0-: returning 0
> [2012-07-01 14:47:18.303297] D
> [glusterd-utils.c:642:glusterd_resolve_brick] 0-: Returning 0
> [2012-07-01 14:47:18.303382] D
> [glusterd-store.c:608:glusterd_store_create_volume_dir] 0-: Returning with 0
> [2012-07-01 14:47:18.303424] D
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0
> [2012-07-01 14:47:18.303448] D
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0
> [2012-07-01 14:47:18.303492] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303561] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303581] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303596] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303610] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303625] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303643] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303651] D
> [glusterd-store.c:632:glusterd_store_volinfo_write] 0-: Returning 0
> [2012-07-01 14:47:18.303674] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303733] D
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0
> [2012-07-01 14:47:18.303771] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303788] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303803] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303817] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.303825] D
> [glusterd-store.c:292:glusterd_store_brickinfo_write] 0-: Returning 0
> [2012-07-01 14:47:18.304055] D
> [glusterd-store.c:319:glusterd_store_perform_brick_store] 0-: Returning 0
> [2012-07-01 14:47:18.304075] D
> [glusterd-store.c:349:glusterd_store_brickinfo] 0-: Returning with 0
> [2012-07-01 14:47:18.304086] D
> [glusterd-store.c:710:glusterd_store_brickinfos] 0-: Returning 0
> [2012-07-01 14:47:18.304153] D
> [glusterd-store.c:808:glusterd_store_perform_volume_store] 0-: Returning 0
> [2012-07-01 14:47:18.304201] D
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0
> [2012-07-01 14:47:18.304211] D
> [glusterd-store.c:749:glusterd_store_rbstate_write] 0-management:
> Returning 0
> [2012-07-01 14:47:18.304254] D
> [glusterd-store.c:777:glusterd_store_perform_rbstate_store] 0-: Returning 0
> [2012-07-01 14:47:18.308029] D
> [glusterd-utils.c:1348:glusterd_volume_compute_cksum] 0-: Returning with 0
> [2012-07-01 14:47:18.308124] D
> [glusterd-store.c:860:glusterd_store_volinfo] 0-: Returning 0
> [2012-07-01 14:47:18.308180] D
> [glusterd-volgen.c:2342:generate_brick_volfiles] 0-: Found a brick -
> wheezy:/tmp
> [2012-07-01 14:47:18.308263] D
> [glusterd-volgen.c:1311:server_check_marker_off] 0-: Returning 0
> [2012-07-01 14:47:18.308409] D
> [glusterd-volgen.c:2353:generate_brick_volfiles] 0-: Returning 0
> [2012-07-01 14:47:18.312437] D
> [glusterd-utils.c:1348:glusterd_volume_compute_cksum] 0-: Returning with 0
> [2012-07-01 14:47:18.312526] D
> [glusterd-op-sm.c:7664:glusterd_op_commit_perform] 0-: Returning 0
> [2012-07-01 14:47:18.312557] I
> [glusterd-op-sm.c:6854:glusterd_op_ac_send_commit_op] 0-glusterd: Sent
> op req to 0 peers
> [2012-07-01 14:47:18.312589] D
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.312611] D
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0
> [2012-07-01 14:47:18.312629] D
> [glusterd-op-sm.c:6875:glusterd_op_ac_send_commit_op] 0-: Returning with 0
> [2012-07-01 14:47:18.312655] D
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Brick op sent' to 'Commit op sent' due to event
> 'GD_OP_EVENT_ALL_ACK'
> [2012-07-01 14:47:18.312679] D
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-07-01 14:47:18.312711] D [glusterd-op-sm.c:8449:glusterd_op_sm]
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.312733] D
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.312752] D
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0
> [2012-07-01 14:47:18.312770] D
> [glusterd-op-sm.c:6509:glusterd_op_ac_send_unlock] 0-: Returning with 0
> [2012-07-01 14:47:18.312788] D
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Commit op sent' to 'Unlock sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.312809] D
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-07-01 14:47:18.312827] D [glusterd-op-sm.c:8449:glusterd_op_sm]
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-07-01 14:47:18.312848] I
> [glusterd-op-sm.c:7250:glusterd_op_txn_complete] 0-glusterd: Cleared
> local lock
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x00007ffff7021bf1 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) bt
> #0 0x00007ffff7021bf1 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #1 0x00007ffff70a4357 in xdr_string () from /lib/x86_64-linux-gnu/libc.so.6
> #2 0x00007ffff7756061 in xdr_gf1_cli_create_vol_rsp
> (xdrs=xdrs@entry=0x7fffffffd150, objp=objp@entry=0x7fffffffd2b0) at
> cli1-xdr.c:279
> #3 0x00007ffff796ff11 in xdr_serialize_generic (outmsg=...,
> res=0x7fffffffd2b0, proc=0x7ffff7756010 <xdr_gf1_cli_create_vol_rsp>) at
> rpc-common.c:36
> #4 0x00007ffff5751906 in glusterd_serialize_reply
> (req=req@entry=0x7ffff7f37024, arg=0x7fffffffd2b0, sfunc=0x7ffff7757250
> <gf_xdr_serialize_cli_create_vol_rsp>,
> outmsg=outmsg@entry=0x7fffffffd1e0) at glusterd-utils.c:402
> #5 0x00007ffff5751a25 in glusterd_submit_reply
> (req=req@entry=0x7ffff7f37024, arg=<optimized out>,
> payload=payload@entry=0x0, payloadcount=payloadcount@entry=0,
> iobref=0x5555557916f0, iobref@entry=0x0, sfunc=<optimized out>)
> at glusterd-utils.c:444
> #6 0x00007ffff576027f in glusterd_op_send_cli_response
> (op=op@entry=GD_OP_CREATE_VOLUME, op_ret=op_ret@entry=0,
> op_errno=op_errno@entry=0, req=req@entry=0x7ffff7f37024,
> op_ctx=op_ctx@entry=0x555555789f40, op_errstr=op_errstr@entry=0x0)
> at glusterd-rpc-ops.c:414
> #7 0x00007ffff574f84c in glusterd_op_txn_complete () at
> glusterd-op-sm.c:7278
> #8 0x00007ffff574fb39 in glusterd_op_ac_unlocked_all (event=<optimized
> out>, ctx=<optimized out>) at glusterd-op-sm.c:7304
> #9 0x00007ffff57489d2 in glusterd_op_sm () at glusterd-op-sm.c:8458
> #10 0x00007ffff5730f60 in glusterd_handle_create_volume
> (req=0x7ffff7f37024) at glusterd-handler.c:1047
> #11 0x00007ffff79671ff in rpcsvc_handle_rpc_call (svc=0x555555789c50,
> trans=trans@entry=0x55555578e370, msg=msg@entry=0x555555784270) at
> rpcsvc.c:480
> #12 0x00007ffff796778b in rpcsvc_notify (trans=0x55555578e370,
> mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at
> rpcsvc.c:576
> #13 0x00007ffff796af13 in rpc_transport_notify
> (this=this@entry=0x55555578e370,
> event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=<optimized out>) at
> rpc-transport.c:919
> #14 0x00007ffff54fb224 in socket_event_poll_in
> (this=this@entry=0x55555578e370) at socket.c:1647
> #15 0x00007ffff54fb565 in socket_event_handler (fd=<optimized out>,
> idx=<optimized out>, data=<optimized out>, poll_in=1, poll_out=0,
> poll_err=0) at socket.c:1762
> #16 0x00007ffff7bb64c8 in event_dispatch_epoll_handler (i=<optimized
> out>, events=0x55555578d750, event_pool=0x5555557823a0) at event.c:794
> #17 event_dispatch_epoll (event_pool=0x5555557823a0) at event.c:856
> #18 0x0000555555558a6b in main (argc=5, argv=0x7fffffffe648) at
> glusterfsd.c:1509
> (gdb)
>
> -- System Information:
> Debian Release: wheezy/sid
> APT prefers testing
> APT policy: (500, 'testing'), (400, 'unstable')
> Architecture: amd64 (x86_64)
>
> Kernel: Linux 3.2.0-2-amd64 (SMP w/3 CPU cores)
> Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
> Shell: /bin/sh linked to /bin/dash
>
> Versions of packages glusterfs-server depends on:
> ii glusterfs-client 3.2.7-1
> ii glusterfs-common 3.2.7-1
> ii libc6 2.13-33
> ii libncurses5 5.9-9
> ii libreadline6 6.2-8
> ii libtinfo5 5.9-9
> ii lsb-base 4.1+Debian7
>
> glusterfs-server recommends no packages.
>
> Versions of packages glusterfs-server suggests:
> ii glusterfs-examples 3.2.7-1
> ii nfs-common 1:1.2.6-2
>
> -- Configuration Files:
> /etc/glusterfs/glusterd.vol unchanged
>
> -- no debconf information
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
https://lists.nongnu.org/mailman/listinfo/gluster-devel