Re: Q'apla brick does not come online with gluster 5.0, even with fresh install


On Thu, Nov 1, 2018 at 10:08 AM Computerisms Corporation <bob@xxxxxxxxxxxxxxx> wrote:
My troubleshooting took me to confirming that all my package versions
were lined up, and I came to realize that I had gotten version 5.0 from
the Debian repos instead of the repo at download.gluster.org.  I
downgraded everything to 4.1.5-1 from gluster.org, rebooted, messed
around a bit, and my gluster is back online.
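
To double-check where each package came from, something along these lines
works (apt output formats vary a bit between releases):

root@sand1lian:~# dpkg -l | grep gluster
root@sand1lian:~# apt-cache policy glusterfs-server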

Are you running CentOS or another distro apart from Debian? If so, why don't you retry going to 5.0 with the correct base?
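
If you do retry, it might help to point apt at the upstream repo first so the
same mix-up can't recur. Roughly like the below; this is untested and the exact
URL layout for your Debian release is an assumption, so verify it under
https://download.gluster.org/pub/gluster/glusterfs/5/ before using it:

wget -O - https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub | apt-key add -
echo "deb https://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Debian/stretch/amd64/apt stretch main" > /etc/apt/sources.list.d/gluster.list
apt-get update && apt-get install glusterfs-server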




On 2018-10-31 10:32 a.m., Computerisms Corporation wrote:
> forgot to add output of glusterd console when starting the volume:
>
> [2018-10-31 17:31:33.887923] D [MSGID: 0]
> [glusterd-volume-ops.c:572:__glusterd_handle_cli_start_volume]
> 0-management: Received start vol req for volume moogle-gluster
> [2018-10-31 17:31:33.887976] D [MSGID: 0]
> [glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to
> acquire lock of vol moogle-gluster for
> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
> [2018-10-31 17:31:33.888171] D [MSGID: 0]
> [glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for vol
> moogle-gluster successfully held by bb8c61eb-f321-4485-8a8d-ddc369ac2203
> [2018-10-31 17:31:33.888189] D [MSGID: 0]
> [glusterd-locks.c:519:glusterd_multiple_mgmt_v3_lock] 0-management:
> Returning 0
> [2018-10-31 17:31:33.888204] D [MSGID: 0]
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
> moogle-gluster found
> [2018-10-31 17:31:33.888213] D [MSGID: 0]
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
> [2018-10-31 17:31:33.888229] D [MSGID: 0]
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
> moogle-gluster found
> [2018-10-31 17:31:33.888237] D [MSGID: 0]
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
> [2018-10-31 17:31:33.888247] D [MSGID: 0]
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
> moogle-gluster found
> [2018-10-31 17:31:33.888256] D [MSGID: 0]
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
> [2018-10-31 17:31:33.888269] D [MSGID: 0]
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
> moogle-gluster found
> [2018-10-31 17:31:33.888277] D [MSGID: 0]
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
> [2018-10-31 17:31:33.888294] D [MSGID: 0]
> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0
> [2018-10-31 17:31:33.888318] D [MSGID: 0]
> [glusterd-mgmt.c:223:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5.
> Returning 0
> [2018-10-31 17:31:33.888668] D [MSGID: 0]
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
> moogle-gluster found
> [2018-10-31 17:31:33.888682] D [MSGID: 0]
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
> [2018-10-31 17:31:33.888719] E [MSGID: 101012]
> [common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile:
> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
>
> [2018-10-31 17:31:33.888757] I
> [glusterd-utils.c:6300:glusterd_brick_start] 0-management: starting a
> fresh brick process for brick /var/GlusterBrick/moogle-gluster
> [2018-10-31 17:31:33.898943] D [logging.c:1998:_gf_msg_internal]
> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
> About to flush least recently used log message to disk
> [2018-10-31 17:31:33.888780] E [MSGID: 101012]
> [common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile:
> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
>
> [2018-10-31 17:31:33.898942] E [MSGID: 106005]
> [glusterd-utils.c:6305:glusterd_brick_start] 0-management: Unable to
> start brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
> [2018-10-31 17:31:33.899068] D [MSGID: 0]
> [glusterd-utils.c:6315:glusterd_brick_start] 0-management: returning -107
> [2018-10-31 17:31:33.899088] E [MSGID: 106122]
> [glusterd-mgmt.c:308:gd_mgmt_v3_commit_fn] 0-management: Volume start
> commit failed.
> [2018-10-31 17:31:33.899100] D [MSGID: 0]
> [glusterd-mgmt.c:392:gd_mgmt_v3_commit_fn] 0-management: OP = 5.
> Returning -107
> [2018-10-31 17:31:33.899114] E [MSGID: 106122]
> [glusterd-mgmt.c:1557:glusterd_mgmt_v3_commit] 0-management: Commit
> failed for operation Start on local node
> [2018-10-31 17:31:33.899128] D [MSGID: 0]
> [glusterd-op-sm.c:5109:glusterd_op_modify_op_ctx] 0-management: op_ctx
> modification not required
> [2018-10-31 17:31:33.899140] E [MSGID: 106122]
> [glusterd-mgmt.c:2160:glusterd_mgmt_v3_initiate_all_phases]
> 0-management: Commit Op Failed
> [2018-10-31 17:31:33.899168] D [MSGID: 0]
> [glusterd-locks.c:785:glusterd_mgmt_v3_unlock] 0-management: Trying to
> release lock of vol moogle-gluster for
> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
> [2018-10-31 17:31:33.899195] D [MSGID: 0]
> [glusterd-locks.c:834:glusterd_mgmt_v3_unlock] 0-management: Lock for
> vol moogle-gluster successfully released
> [2018-10-31 17:31:33.899211] D [MSGID: 0]
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
> moogle-gluster found
> [2018-10-31 17:31:33.899221] D [MSGID: 0]
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
> [2018-10-31 17:31:33.899232] D [MSGID: 0]
> [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management:
> Returning 0
> [2018-10-31 17:31:33.899314] D [MSGID: 0]
> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management:
> Returning 0
> [2018-10-31 17:31:33.900750] D [socket.c:2927:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting (sock:7) (non-SSL)
> [2018-10-31 17:31:33.900809] E [MSGID: 101191]
> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to
> dispatch handler
>
>
> On 2018-10-31 10:19 a.m., Computerisms Corporation wrote:
>> Hi,
>>
>> it occurs to me that the previous email was maybe too many words and
>> not enough data, so I will try to present the issue differently.
>>
>> gluster created (single brick volume following advice from
>> https://lists.gluster.org/pipermail/gluster-users/2016-October/028821.html):
>>
>>
>> root@sand1lian:~# gluster volume create moogle-gluster
>> sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
>>
>> Gluster was started from the cli with --debug; the invocation (roughly)
>> and the console output during creation of the volume follow:
>>
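>> From memory, the foreground invocation was along these lines (--debug
>> keeps glusterd attached to the console and raises the log level to DEBUG):
>>
>> root@sand1lian:~# systemctl stop glusterd
>> root@sand1lian:~# /usr/sbin/glusterd --debug
>>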
>> [2018-10-31 17:00:51.555918] D [MSGID: 0]
>> [glusterd-volume-ops.c:328:__glusterd_handle_create_volume]
>> 0-management: Received create volume req
>> [2018-10-31 17:00:51.555963] D [MSGID: 0]
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1
>> [2018-10-31 17:00:51.556072] D [MSGID: 0]
>> [glusterd-op-sm.c:209:glusterd_generate_txn_id] 0-management:
>> Transaction_id = 3f5d14c9-ee08-493c-afac-d04d53c12aad
>> [2018-10-31 17:00:51.556090] D [MSGID: 0]
>> [glusterd-op-sm.c:302:glusterd_set_txn_opinfo] 0-management:
>> Successfully set opinfo for transaction ID :
>> 3f5d14c9-ee08-493c-afac-d04d53c12aad
>> [2018-10-31 17:00:51.556099] D [MSGID: 0]
>> [glusterd-op-sm.c:309:glusterd_set_txn_opinfo] 0-management: Returning 0
>> [2018-10-31 17:00:51.556108] D [MSGID: 0]
>> [glusterd-syncop.c:1809:gd_sync_task_begin] 0-management: Transaction
>> ID : 3f5d14c9-ee08-493c-afac-d04d53c12aad
>> [2018-10-31 17:00:51.556127] D [MSGID: 0]
>> [glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to
>> acquire lock of vol moogle-gluster for
>> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
>> [2018-10-31 17:00:51.556293] D [MSGID: 0]
>> [glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for
>> vol moogle-gluster successfully held by
>> bb8c61eb-f321-4485-8a8d-ddc369ac2203
>> [2018-10-31 17:00:51.556333] D [MSGID: 0]
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1
>> [2018-10-31 17:00:51.556368] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> [2018-10-31 17:00:51.556345] D [MSGID: 0]
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1
>> [2018-10-31 17:00:51.556368] D [MSGID: 0]
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.556608] D [MSGID: 0]
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.556656] D [MSGID: 0]
>> [glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.556669] D [MSGID: 0]
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0
>> [2018-10-31 17:00:51.556681] D [MSGID: 0]
>> [glusterd-utils.c:990:glusterd_volume_brickinfos_delete] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.556690] D [MSGID: 0]
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0
>> [2018-10-31 17:00:51.556699] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [store.c:473:gf_store_handle_destroy] 0-:
>> Returning 0" repeated 3 times between [2018-10-31 17:00:51.556690] and
>> [2018-10-31 17:00:51.556698]
>> [2018-10-31 17:00:51.556699] D [MSGID: 0]
>> [glusterd-utils.c:1042:glusterd_volinfo_delete] 0-management: Returning 0
>> [2018-10-31 17:00:51.556728] D [MSGID: 0]
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.556738] D [MSGID: 0]
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.556752] D [MSGID: 0]
>> [glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.556764] D [MSGID: 0]
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0
>> [2018-10-31 17:00:51.556772] D [MSGID: 0]
>> [glusterd-utils.c:990:glusterd_volume_brickinfos_delete] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.556781] D [MSGID: 0]
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0
>> [2018-10-31 17:00:51.556791] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [store.c:473:gf_store_handle_destroy] 0-:
>> Returning 0" repeated 3 times between [2018-10-31 17:00:51.556781] and
>> [2018-10-31 17:00:51.556790]
>> [2018-10-31 17:00:51.556791] D [MSGID: 0]
>> [glusterd-utils.c:1042:glusterd_volinfo_delete] 0-management: Returning 0
>> [2018-10-31 17:00:51.556818] D [MSGID: 0]
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.556955] D [MSGID: 0]
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname]
>> 0-management: Unable to find friend: sand1lian.computerisms.ca
>> [2018-10-31 17:00:51.557033] D [MSGID: 0]
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
>> [2018-10-31 17:00:51.557140] D [MSGID: 0]
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52
>> is local address at interface eno1
>> [2018-10-31 17:00:51.557154] D [MSGID: 0]
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.557172] D [MSGID: 0]
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.557183] D [MSGID: 0]
>> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0
>> [2018-10-31 17:00:51.557198] D [MSGID: 0]
>> [glusterd-utils.c:7558:glusterd_new_brick_validate] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.557207] D [MSGID: 0]
>> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0
>> [2018-10-31 17:00:51.557392] D [MSGID: 0]
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname]
>> 0-management: Unable to find friend: sand1lian.computerisms.ca
>> [2018-10-31 17:00:51.557468] D [MSGID: 0]
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
>> [2018-10-31 17:00:51.557542] D [MSGID: 0]
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52
>> is local address at interface eno1
>> [2018-10-31 17:00:51.557554] D [MSGID: 0]
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.557573] D [MSGID: 0]
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0
>> [2018-10-31 17:00:51.557586] D [MSGID: 0]
>> [glusterd-volume-ops.c:1467:glusterd_op_stage_create_volume]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.557595] D [MSGID: 0]
>> [glusterd-op-sm.c:6014:glusterd_op_stage_validate] 0-management: OP =
>> 1. Returning 0
>> [2018-10-31 17:00:51.557610] D [MSGID: 0]
>> [glusterd-op-sm.c:7659:glusterd_op_bricks_select] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.557620] D [MSGID: 0]
>> [glusterd-syncop.c:1751:gd_brick_op_phase] 0-management: Sent op req
>> to 0 bricks
>> [2018-10-31 17:00:51.557663] D [MSGID: 0]
>> [glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.557693] D [MSGID: 0]
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
>> [2018-10-31 17:00:51.557771] D [MSGID: 0]
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname]
>> 0-management: Unable to find friend: sand1lian.computerisms.ca
>> [2018-10-31 17:00:51.557844] D [MSGID: 0]
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
>> [2018-10-31 17:00:51.557917] D [MSGID: 0]
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52
>> is local address at interface eno1
>> [2018-10-31 17:00:51.557931] D [MSGID: 0]
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.557947] D [MSGID: 0]
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.557957] D [MSGID: 0]
>> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0
>> [2018-10-31 17:00:51.558393] D [MSGID: 0]
>> [xlator.c:218:xlator_volopt_dynload] 0-xlator: Returning 0
>> [2018-10-31 17:00:51.558409] D [MSGID: 0]
>> [glusterd-volgen.c:3140:_get_xlator_opt_key_from_vme] 0-glusterd:
>> Returning 0
>> [2018-10-31 17:00:51.558495] W [MSGID: 101095]
>> [xlator.c:180:xlator_volopt_dynload] 0-xlator:
>> /usr/lib/x86_64-linux-gnu/glusterfs/5.0/xlator/nfs/server.so: cannot
>> open shared object file: No such file or directory
>> [2018-10-31 17:00:51.558509] D [MSGID: 0]
>> [xlator.c:218:xlator_volopt_dynload] 0-xlator: Returning -1
>> [2018-10-31 17:00:51.558566] D [MSGID: 0]
>> [glusterd-store.c:1107:glusterd_store_create_volume_dir] 0-management:
>> Returning with 0
>> [2018-10-31 17:00:51.558593] D [MSGID: 0]
>> [glusterd-store.c:1125:glusterd_store_create_volume_run_dir]
>> 0-management: Returning with 0
>> [2018-10-31 17:00:51.899586] D [MSGID: 0]
>> [store.c:432:gf_store_handle_new] 0-: Returning 0
>> [2018-10-31 17:00:51.930562] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> [2018-10-31 17:00:51.930485] D [MSGID: 0]
>> [store.c:432:gf_store_handle_new] 0-: Returning 0
>> [2018-10-31 17:00:51.930561] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.932563] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [store.c:386:gf_store_save_value]
>> 0-management: returning: 0" repeated 19 times between [2018-10-31
>> 17:00:51.930561] and [2018-10-31 17:00:51.930794]
>> [2018-10-31 17:00:51.932562] D [MSGID: 0]
>> [store.c:432:gf_store_handle_new] 0-: Returning 0
>> [2018-10-31 17:00:51.932688] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.932709] D [MSGID: 0]
>> [glusterd-store.c:457:glusterd_store_snapd_write] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.935196] D [MSGID: 0]
>> [glusterd-store.c:521:glusterd_store_perform_snapd_store]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.935226] D [MSGID: 0]
>> [glusterd-store.c:585:glusterd_store_snapd_info] 0-management:
>> Returning with 0
>> [2018-10-31 17:00:51.935251] D [MSGID: 0]
>> [glusterd-store.c:788:_storeopts] 0-management: Storing in
>> volinfo:key= transport.address-family, val=inet
>> [2018-10-31 17:00:51.935290] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.935314] D [MSGID: 0]
>> [glusterd-store.c:788:_storeopts] 0-management: Storing in
>> volinfo:key= nfs.disable, val=on
>> [2018-10-31 17:00:51.935344] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.935360] D [MSGID: 0]
>> [glusterd-store.c:1174:glusterd_store_volinfo_write] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.935382] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.936584] D [MSGID: 0]
>> [store.c:432:gf_store_handle_new] 0-: Returning 0
>> [2018-10-31 17:00:51.936685] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.936807] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [store.c:386:gf_store_save_value]
>> 0-management: returning: 0" repeated 10 times between [2018-10-31
>> 17:00:51.936685] and [2018-10-31 17:00:51.936806]
>> [2018-10-31 17:00:51.936807] D [MSGID: 0]
>> [glusterd-store.c:430:glusterd_store_brickinfo_write] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.936833] D [MSGID: 0]
>> [glusterd-store.c:481:glusterd_store_perform_brick_store]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.936841] D [MSGID: 0]
>> [glusterd-store.c:550:glusterd_store_brickinfo] 0-management:
>> Returning with 0
>> [2018-10-31 17:00:51.936848] D [MSGID: 0]
>> [glusterd-store.c:1394:glusterd_store_brickinfos] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.936856] D [MSGID: 0]
>> [glusterd-store.c:1620:glusterd_store_perform_volume_store]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.958353] D [MSGID: 0]
>> [store.c:386:gf_store_save_value] 0-management: returning: 0
>> [2018-10-31 17:00:51.958494] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [store.c:386:gf_store_save_value]
>> 0-management: returning: 0" repeated 9 times between [2018-10-31
>> 17:00:51.958353] and [2018-10-31 17:00:51.958493]
>> [2018-10-31 17:00:51.958493] D [MSGID: 0]
>> [glusterd-store.c:1558:glusterd_store_node_state_write] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.960449] D [MSGID: 0]
>> [glusterd-store.c:1592:glusterd_store_perform_node_state_store]
>> 0-management: Returning 0
>> [2018-10-31 17:00:51.960683] D [MSGID: 0]
>> [glusterd-utils.c:2840:glusterd_volume_compute_cksum] 0-management:
>> Returning with 0
>> [2018-10-31 17:00:51.960699] D [MSGID: 0]
>> [glusterd-store.c:1832:glusterd_store_volinfo] 0-management: Returning 0
>> [2018-10-31 17:00:51.960797] D [MSGID: 0]
>> [glusterd-utils.c:181:_brick_for_each] 0-management: Found a brick -
>> sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
>> [2018-10-31 17:00:51.961200] D [MSGID: 0]
>> [glusterd-volgen.c:1309:server_check_marker_off] 0-glusterd: Returning 0
>> [2018-10-31 17:00:51.961529] D [MSGID: 0]
>> [glusterd-volgen.c:5816:generate_brick_volfiles] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.961681] D [MSGID: 0]
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname]
>> 0-management: Unable to find friend: sand1lian.computerisms.ca
>> [2018-10-31 17:00:51.961756] D [MSGID: 0]
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
>> [2018-10-31 17:00:51.961832] D [MSGID: 0]
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52
>> is local address at interface eno1
>> [2018-10-31 17:00:51.961846] D [MSGID: 0]
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.961855] D [MSGID: 0]
>> [glusterd-utils.c:1668:glusterd_volume_brickinfo_get] 0-management:
>> Found brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
>> in volume moogle-gluster
>> [2018-10-31 17:00:51.961864] D [MSGID: 0]
>> [glusterd-utils.c:1677:glusterd_volume_brickinfo_get] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.963126] D [MSGID: 0]
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname]
>> 0-management: Unable to find friend: sand1lian.computerisms.ca
>> [2018-10-31 17:00:51.963203] D [MSGID: 0]
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
>> [2018-10-31 17:00:51.963280] D [MSGID: 0]
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52
>> is local address at interface eno1
>> [2018-10-31 17:00:51.963298] D [MSGID: 0]
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.963308] D [MSGID: 0]
>> [glusterd-utils.c:1668:glusterd_volume_brickinfo_get] 0-management:
>> Found brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
>> in volume moogle-gluster
>> [2018-10-31 17:00:51.963316] D [MSGID: 0]
>> [glusterd-utils.c:1677:glusterd_volume_brickinfo_get] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.964038] D [MSGID: 0]
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname]
>> 0-management: Unable to find friend: sand1lian.computerisms.ca
>> [2018-10-31 17:00:51.964112] D [MSGID: 0]
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
>> [2018-10-31 17:00:51.964186] D [MSGID: 0]
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52
>> is local address at interface eno1
>> [2018-10-31 17:00:51.964200] D [MSGID: 0]
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management:
>> returning 0
>> [2018-10-31 17:00:51.964211] D [MSGID: 0]
>> [glusterd-utils.c:1668:glusterd_volume_brickinfo_get] 0-management:
>> Found brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
>> in volume moogle-gluster
>> [2018-10-31 17:00:51.964226] D [MSGID: 0]
>> [glusterd-utils.c:1677:glusterd_volume_brickinfo_get] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.965159] D [MSGID: 0]
>> [glusterd-op-sm.c:6150:glusterd_op_commit_perform] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.965177] D [MSGID: 0]
>> [glusterd-utils.c:9664:glusterd_aggr_brick_mount_dirs] 0-management:
>> No brick_count present
>> [2018-10-31 17:00:51.965193] D [MSGID: 0]
>> [glusterd-op-sm.c:5109:glusterd_op_modify_op_ctx] 0-management: op_ctx
>> modification not required
>> [2018-10-31 17:00:51.965219] D [MSGID: 0]
>> [glusterd-locks.c:785:glusterd_mgmt_v3_unlock] 0-management: Trying to
>> release lock of vol moogle-gluster for
>> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
>> [2018-10-31 17:00:51.966350] D [MSGID: 0]
>> [glusterd-locks.c:834:glusterd_mgmt_v3_unlock] 0-management: Lock for
>> vol moogle-gluster successfully released
>> [2018-10-31 17:00:51.966462] D [MSGID: 0]
>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>> moogle-gluster found
>> [2018-10-31 17:00:51.966479] D [MSGID: 0]
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
>> [2018-10-31 17:00:51.966509] D [MSGID: 0]
>> [glusterd-op-sm.c:248:glusterd_get_txn_opinfo] 0-management:
>> Successfully got opinfo for transaction ID :
>> 3f5d14c9-ee08-493c-afac-d04d53c12aad
>> [2018-10-31 17:00:51.966532] D [MSGID: 0]
>> [glusterd-op-sm.c:252:glusterd_get_txn_opinfo] 0-management: Returning 0
>> [2018-10-31 17:00:51.966551] D [MSGID: 0]
>> [glusterd-op-sm.c:352:glusterd_clear_txn_opinfo] 0-management:
>> Successfully cleared opinfo for transaction ID :
>> 3f5d14c9-ee08-493c-afac-d04d53c12aad
>> [2018-10-31 17:00:51.966668] D [logging.c:1998:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> [2018-10-31 17:00:51.966561] D [MSGID: 0]
>> [glusterd-op-sm.c:356:glusterd_clear_txn_opinfo] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.966667] D [MSGID: 0]
>> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management:
>> Returning 0
>> [2018-10-31 17:00:51.968134] D [socket.c:2927:socket_event_handler]
>> 0-transport: EPOLLERR - disconnecting (sock:7) (non-SSL)
>> [2018-10-31 17:00:51.968183] E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to
>> dispatch handler
>> grep: /var/lib/glusterd/vols/moogle-gluster/bricks/*: No such file or
>> directory
>> [2018-10-31 17:00:51.975661] I [run.c:242:runner_log]
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.0/xlator/mgmt/glusterd.so(+0xe0dbe)
>> [0x7f3f248dbdbe]
>> -->/usr/lib/x86_64-linux-gnu/glusterfs/5.0/xlator/mgmt/glusterd.so(+0xe07fe)
>> [0x7f3f248db7fe]
>> -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(runner_log+0x105)
>> [0x7f3f28ac35a5] ) 0-management: Ran script:
>> /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh
>> --volname=moogle-gluster
>> [2018-10-31 17:01:12.466614] D
>> [logging.c:1871:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer
>> timed out. About to flush outstanding messages if present
>> [2018-10-31 17:01:12.466667] D
>> [logging.c:1833:__gf_log_inject_timer_event] 0-logging-infra: Starting
>> timer now. Timeout = 120, current buf size = 5
>> [2018-10-31 17:03:12.492414] D
>> [logging.c:1871:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer
>> timed out. About to flush outstanding messages if present
>> [2018-10-31 17:03:12.492447] D
>> [logging.c:1833:__gf_log_inject_timer_event] 0-logging-infra: Starting
>> timer now. Timeout = 120, current buf size = 5
>>
>> Not sure about the "Unable to find friend" message:
>>
>> root@sand1lian:~# dig +short sand1lian.computerisms.ca
>> 192.168.25.52
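>>
>> dig only queries DNS directly, while glusterd resolves names through
>> getaddrinfo, so the /etc/hosts view can differ; a cross-check, for what
>> it's worth:
>>
>> root@sand1lian:~# getent hosts sand1lian.computerisms.ca
>> root@sand1lian:~# getent ahosts sand1lian.computerisms.ca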
>>
>> start the volume:
>>
>> root@sand1lian:~# gluster v start moogle-gluster
>> volume start: moogle-gluster: failed: Commit failed on localhost.
>> Please check log file for details.
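>>
>> For completeness, the force variant fails the same way (as noted in the
>> earlier mail quoted below):
>>
>> root@sand1lian:~# gluster v start moogle-gluster force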
>>
>> output of cli.log while issuing start command:
>>
>> [2018-10-31 17:08:49.019079] I [cli.c:764:main] 0-cli: Started running
>> gluster with version 5.0
>> [2018-10-31 17:08:49.021694] W [socket.c:3365:socket_connect]
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Operation not
>> supported"
>> [2018-10-31 17:08:49.021924] W [socket.c:3365:socket_connect]
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Operation not
>> supported"
>> [2018-10-31 17:08:49.101120] I [MSGID: 101190]
>> [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 1
>> [2018-10-31 17:08:49.101231] E [MSGID: 101191]
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to
>> dispatch handler
>> [2018-10-31 17:08:49.113485] I
>> [cli-rpc-ops.c:1419:gf_cli_start_volume_cbk] 0-cli: Received resp to
>> start volume
>> [2018-10-31 17:08:49.113626] I [input.c:31:cli_batch] 0-: Exiting
>> with: -1
>>
>> and output of brick log while starting volume:
>>
>> [2018-10-31 17:08:49.107966] I [MSGID: 100030]
>> [glusterfsd.c:2691:main] 0-/usr/sbin/glusterfsd: Started running
>> /usr/sbin/glusterfsd version 5.0 (args: /usr/sbin/glusterfsd -s
>> sand1lian.computerisms.ca --volfile-id
>> moogle-gluster.sand1lian.computerisms.ca.var-GlusterBrick-moogle-gluster
>> -p
>> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
>> -S /var/run/gluster/f41bfcfaf40deb7d.socket --brick-name
>> /var/GlusterBrick/moogle-gluster -l
>> /var/log/glusterfs/bricks/var-GlusterBrick-moogle-gluster.log
>> --xlator-option
>> *-posix.glusterd-uuid=bb8c61eb-f321-4485-8a8d-ddc369ac2203
>> --process-name brick --brick-port 49157 --xlator-option
>> moogle-gluster-server.listen-port=49157)
>> [2018-10-31 17:08:49.112123] E [socket.c:3466:socket_connect]
>> 0-glusterfs: connection attempt on  failed, (Invalid argument)
>> [2018-10-31 17:08:49.112293] I [MSGID: 101190]
>> [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 1
>> [2018-10-31 17:08:49.112374] I
>> [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt:
>> disconnected from remote-host: sand1lian.computerisms.ca
>> [2018-10-31 17:08:49.112399] I
>> [glusterfsd-mgmt.c:2444:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted
>> all volfile servers
>> [2018-10-31 17:08:49.112656] W [glusterfsd.c:1481:cleanup_and_exit]
>> (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xf023) [0x7f3466c12023]
>> -->/usr/sbin/glusterfsd(+0x1273e) [0x557f4ea6373e]
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x54) [0x557f4ea5be94] ) 0-:
>> received signum (1), shutting down
>> [2018-10-31 17:08:49.112973] E [socket.c:3466:socket_connect]
>> 0-glusterfs: connection attempt on  failed, (Invalid argument)
>> [2018-10-31 17:08:49.112996] W [rpc-clnt.c:1683:rpc_clnt_submit]
>> 0-glusterfs: error returned while attempting to connect to
>> host:(null), port:0
>> [2018-10-31 17:08:49.113007] I
>> [socket.c:3710:socket_submit_outgoing_msg] 0-glusterfs: not connected
>> (priv->connected = 0)
>> [2018-10-31 17:08:49.113016] W [rpc-clnt.c:1695:rpc_clnt_submit]
>> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2
>> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport
>> (glusterfs)
>>
>>
>> Still seeing the empty pid file and the "connection attempt on  failed,
>> (Invalid argument)" message as the most likely culprits, but I have read
>> everything of relevance I have found on google and not discovered a
>> solution yet...
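>>
>> After each failed start attempt the picture is the same, roughly: no
>> brick process, and a pid file that exists but is empty:
>>
>> root@sand1lian:~# pgrep -af glusterfsd
>> root@sand1lian:~# wc -c /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
>> 0 /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid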
>>
>> On 2018-10-30 9:15 p.m., Computerisms Corporation wrote:
>>> Hi,
>>>
>>> Fortunately I am playing in a sandbox right now, but I am good and
>>> stuck and hoping someone can point me in the right direction.
>>>
>>> I have been playing for about 3 months with a gluster that currently
>>> has one brick.  The idea is that I have a server with data, I need to
>>> migrate that server onto the new gluster-capable server, then I can
>>> use the original server to make a 2nd brick, then I will be able to
>>> make some room on a 3rd server for an arbiter brick.  So I am
>>> building and testing to be sure it all works before I try it in
>>> production.
>>>
>>> Yesterday morning I was plugging away at figuring out how to make
>>> stuff work on the new gluster server when I ran into an issue trying
>>> to rm -rf a directory: it told me the directory wasn't empty when
>>> ls -al showed that it was.  This has happened to me before, and what
>>> fixed it then was to unmount the glusterfs, go into the brick, delete
>>> the files, and remount the glusterfs.  I did that and it appeared to
>>> mount fine, but when I tried to access the gluster mount, it gave me
>>> an error that there were too many levels of symlinks.
>>>
>>> I spent my day yesterday trying pretty much everything I could find
>>> on google and a few things I couldn't.  In the past when stuff has
>>> gone funny with gluster on this box, I have always shut everything
>>> down and checked if there was a new version of gluster, and indeed
>>> there was version 5.0 available.  So I did the upgrade quite early in
>>> the day.  Sadly it didn't fix my problem, but it did give me an error
>>> that led me to modify my hosts file so the hostname resolves over ipv6.
>>> Also, after that the only time the gluster would mount was at reboot,
>>> always with the symlinks error; mount didn't show it as actually
>>> mounted, yet the directory could still be unmounted.
>>>
>>> Having struck out completely yesterday, today I decided to start with
>>> a new machine.  I kept a history of the commands I had used to build
>>> the gluster a few months back and pasted them all in.  Found that the
>>> 5.0 package does not enable the glusterd systemd unit, found that I
>>> needed the ipv6 entries in the hosts file again, and also hit the same
>>> problem: the glusterfs would not mount, the symlinks error appeared at
>>> reboot, and the logs showed the same entries.
>>>
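>>> Presumably the systemd bit just needs the unit enabled by hand, assuming
>>> the package ships the unit file without enabling it:
>>>
>>> root@sand1lian:~# systemctl enable --now glusterd
>>>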
>>> I am still pretty new with gluster, so my best may not be that good,
>>> but as best as I can tell the issue is that the brick will not start,
>>> even with the force option.  I think the problem boils down to one or
>>> both of two lines in the logs.  In the glusterd.log I have a line:
>>>
>>> 0-: Unable to read pidfile:
>>> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
>>>
>>>
>>> The file exists, and I can't see anything wrong with permissions on
>>> the file or the file tree leading to it, but it is a zero-byte file,
>>> so I am thinking the problem is not the file itself, but that gluster
>>> can't read the contents of the file because there aren't any.
>>>
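>>> The permission check was along these lines (namei -l walks the whole
>>> path and prints each component's mode):
>>>
>>> root@sand1lian:~# namei -l /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
>>>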
>>> The other log entry is in the brick log:
>>>
>>> 0-glusterfs: connection attempt on  failed, (Invalid argument)
>>>
>>> When I looked this up, it seems in my case there should be an attempt
>>> to connect on 127.0.0.1, but given the double space I am thinking the
>>> host argument is null, hence the invalid argument.  It occurs to me
>>> that maybe I still need some other entry in my hosts file to satisfy
>>> this, but I can't think what it would be.  I have created DNS entries;
>>> dig works, and both the hostname and the FQDN resolve.
>>>
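>>> For the record, resolution checks along these lines all come back clean,
>>> which is why I suspect the null host argument rather than name resolution:
>>>
>>> root@sand1lian:~# getent hosts sand1lian.computerisms.ca
>>> root@sand1lian:~# getent hosts sand1lian
>>> root@sand1lian:~# grep -i sand1lian /etc/hosts
>>>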
>>> I have tried to change a lot of things today, so things are probably
>>> buggered up beyond hope right now, and even if I do find the solution
>>> it may not work.  I will wipe the new machine and start over again
>>> tomorrow.
>>>
>>> I realize the post is kinda long, sorry for that, but I want to make
>>> sure I get everything important.  In fairness, though, I could
>>> easily double the length of this post with possibly relevant things
>>> (if you are interested).  If you are still reading, thank you so
>>> much; I would appreciate anything, even a wild guess, as to how to
>>> move forward on this.
>>>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
