what I've just noticed - the brick in question does show up as:
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-GROUP-WORK    N/A    N/A    N    N/A
for one particular vol. Status for the other vols (so far)
shows this brick as OK.
Would this be a volume problem or a brick problem, or both?
And most importantly, how do I troubleshoot it?
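As a first pass, would something along these lines be the
right way to go? (just a rough sketch, assuming the brick
process is simply not running on 10.5.6.32; $_vol stands for
the affected volume)

$ gluster vol status $_vol                      # does the 10.5.6.32 brick show Online N and Pid N/A?
$ ssh 10.5.6.32 'ps ax | grep "[g]lusterfsd"'   # is a brick process for that volume running at all?
$ gluster vol start $_vol force                 # should respawn only bricks that are not running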
many thanks, L.
On 02/08/17 02:19, Atin Mukherjee wrote:
This means the shd client is not able to establish a
connection with the brick on port 49155. This can happen
if glusterd has ended up handing back a stale port which
is not the one the brick is actually listening on. If you
killed any brick process with SIGKILL instead of SIGTERM,
this is expected: in that case glusterd never receives the
portmap_signout, so the old portmap entry is never wiped off.
Please restart the glusterd service. This should fix the problem.
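If you want to confirm the stale port before restarting,
something along these lines should do (a rough sketch,
assuming systemd on CentOS 7; $_vol is the affected volume):

$ gluster vol status $_vol                    # the TCP port glusterd is advertising for the brick
$ ps ax | grep '[g]lusterfsd' | grep "$_vol"  # the --brick-port argument the brick was started with
$ ss -tlnp | grep glusterfsd                  # the ports the brick processes are actually listening on
$ systemctl restart glusterd                  # restarts only the management daemon; bricks keep running

If the advertised port and the listening port differ, that
confirms the stale portmap entry.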
On Tue, 1 Aug 2017 at 23:03, peljasz <peljasz@xxxxxxxxxxx> wrote:
how critical is the above?
I get plenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing
weird behavior.
I see: $ gluster vol status $_vol detail; takes a long
time and mostly times out.
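Should I be looking at the default cli/glusterd logs on the
node where I ran the command (assuming the standard log
locations)?

$ tail -n 100 /var/log/glusterfs/cli.log
$ tail -n 100 /var/log/glusterfs/glusterd.log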
I do:
$ gluster vol heal $_vol info
and I see:
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number of entries: -

Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Connected
Number of entries: 0

Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number of entries: -
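Also, to rule out basic reachability, would something like
this be a sensible check? (a rough sketch; $_port stands for
the TCP port that 'gluster vol status $_vol' reports for
each brick)

$ gluster vol status $_vol                                            # note each brick's TCP port
$ timeout 3 bash -c "</dev/tcp/10.5.6.32/$_port" && echo open || echo closed    # does the brick port accept TCP connections?
$ timeout 3 bash -c "</dev/tcp/10.5.6.100/$_port" && echo open || echo closed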
I begin to worry that 3.10 @ CentOS 7.3 might not have been
a good idea.
many thanks.
L.
--
- Atin (atinm)
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users