Re: consistency between nodes?

Sent from one plus one
On Jun 20, 2015 6:03 PM, "David Roundy" <roundyd@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> Here are the gluster peer status outputs from the four nodes.  The fifth peer is my laptop (10.214.70.62), which I have peer probed but not done anything else with.  The other news is that I have (since emailing yesterday) rebooted most of the nodes, which fixed the "missing files" problem.  Before doing this, I noticed that not all peers agreed in their peer status.  Shouldn't an all-connected peer status on one node mean that they are all connected on every node?
All nodes should have shown as connected. I would need the glusterd and cmd history log files from all the nodes to analyze this.
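
For what it's worth, on a default install those usually live under /var/log/glusterfs/ on each node; something along these lines should gather them (exact paths can vary by distribution and version):

    # run on each node; adjust the paths if your install logs elsewhere
    tar czf $(hostname)-glusterd-logs.tar.gz \
        /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
        /var/log/glusterfs/cmd_history.log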
>
> wentworth:~# gluster peer status
> Number of Peers: 4
>
> Hostname: bennet.physics.oregonstate.edu
> Uuid: 15dcda25-8043-4243-bbdc-168a73c91bcc
> State: Peer in Cluster (Connected)
> Other names:
> 128.193.96.83
>
> Hostname: knightley.physics.oregonstate.edu
> Uuid: 9f61673a-7215-4f07-9e75-4ed3dbeaff7e
> State: Peer in Cluster (Connected)
>
> Hostname: 10.214.70.62
> Uuid: 7bdfc78b-82a9-45a3-94fd-a1f14c04a8ff
> State: Peer in Cluster (Disconnected)
>
> Hostname: 128.193.96.93
> Uuid: 1b436ed4-44c9-414b-9cc6-74a38d97d66e
> State: Peer in Cluster (Connected)
> Other names:
> elliot.physics.oregonstate.edu
>
> elliot:~# gluster peer status
> Number of Peers: 4
>
> Hostname: bennet.physics.oregonstate.edu
> Uuid: 15dcda25-8043-4243-bbdc-168a73c91bcc
> State: Peer in Cluster (Connected)
> Other names:
> 128.193.96.83
>
> Hostname: knightley.physics.oregonstate.edu
> Uuid: 9f61673a-7215-4f07-9e75-4ed3dbeaff7e
> State: Peer in Cluster (Connected)
>
> Hostname: 10.214.70.62
> Uuid: 7bdfc78b-82a9-45a3-94fd-a1f14c04a8ff
> State: Peer in Cluster (Disconnected)
>
> Hostname: wentworth.physics.oregonstate.edu
> Uuid: 6f463d9d-c32e-4973-bbc5-06b782678ee7
> State: Peer in Cluster (Connected)
>
> knightley:~# gluster peer status
> Number of Peers: 4
>
> Hostname: 10.214.70.62
> Uuid: 7bdfc78b-82a9-45a3-94fd-a1f14c04a8ff
> State: Peer in Cluster (Disconnected)
>
> Hostname: wentworth.physics.oregonstate.edu
> Uuid: 6f463d9d-c32e-4973-bbc5-06b782678ee7
> State: Peer in Cluster (Connected)
>
> Hostname: 128.193.96.93
> Uuid: 1b436ed4-44c9-414b-9cc6-74a38d97d66e
> State: Peer in Cluster (Connected)
> Other names:
> elliot.physics.oregonstate.edu
>
> Hostname: bennet.physics.oregonstate.edu
> Uuid: 15dcda25-8043-4243-bbdc-168a73c91bcc
> State: Peer in Cluster (Connected)
> Other names:
> 128.193.96.83
>
>
> bennet:~# gluster peer status
> Number of Peers: 4
>
> Hostname: knightley.physics.oregonstate.edu
> Uuid: 9f61673a-7215-4f07-9e75-4ed3dbeaff7e
> State: Peer in Cluster (Connected)
>
> Hostname: 128.193.96.93
> Uuid: 1b436ed4-44c9-414b-9cc6-74a38d97d66e
> State: Peer in Cluster (Connected)
> Other names:
> elliot.physics.oregonstate.edu
>
> Hostname: wentworth.physics.oregonstate.edu
> Uuid: 6f463d9d-c32e-4973-bbc5-06b782678ee7
> State: Peer in Cluster (Connected)
>
> Hostname: 10.214.70.62
> Uuid: 7bdfc78b-82a9-45a3-94fd-a1f14c04a8ff
> State: Peer in Cluster (Disconnected)
>
> wentworth:~# gluster volume status
> Status of volume: austen
> Gluster process                                         Port   Online Pid
> ------------------------------------------------------------------------------
> Brick elliot.physics.oregonstate.edu:/srv/brick         49152  Y      2272
> Brick wentworth.physics.oregonstate.edu:/srv/brick      49152  Y      1566
> Brick bennet:/srv/brick                                 49152  Y      1516
> Brick knightley:/srv/brick                              49152  Y      2153
> NFS Server on localhost                                 2049   Y      1568
> Self-heal Daemon on localhost                           N/A    Y      1570
> NFS Server on bennet.physics.oregonstate.edu            2049   Y      1523
> Self-heal Daemon on bennet.physics.oregonstate.edu      N/A    Y      1528
> NFS Server on 128.193.96.93                             2049   Y      2279
> Self-heal Daemon on 128.193.96.93                       N/A    Y      2286
> NFS Server on knightley.physics.oregonstate.edu         2049   Y      2160
> Self-heal Daemon on knightley.physics.oregonstate.edu   N/A    Y      2165
>  
> Task Status of Volume austen
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> wentworth:~# gluster --version
> glusterfs 3.6.3 built on May 19 2015 19:27:14
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
>
>
>
> On Fri, Jun 19, 2015 at 8:53 PM Atin Mukherjee <atin.mukherjee83@xxxxxxxxx> wrote:
>>
>> Could you paste the output of:
>>
>> gluster peer status from all the nodes?
>>
>> gluster volume status
>>
>> gluster --version
>>
>> Atin
>>
>> Sent from one plus one
>>
>>
>> On Jun 20, 2015 4:12 AM, "David Roundy" <roundyd@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> >
>> > Hi all,
>> >
>> > I'm having some trouble where different nodes in my volume have differing ideas as to the state of the system.  This is a 4-node (one brick per node) cluster, with replica 2.
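
(From the volume status output above, the brick order is elliot, wentworth, bennet, knightley, so with replica 2 the replica pairs would be elliot/wentworth and bennet/knightley. Presumably the volume was built roughly along these lines; the commands are inferred from that output, not taken from the original mail:

    gluster volume create austen replica 2 elliot:/srv/brick wentworth:/srv/brick
    gluster volume add-brick austen bennet:/srv/brick knightley:/srv/brick
)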
>> >
>> > I have identified two distinct symptoms which, I am hoping, stem from the same underlying problem.
>> >
>> > The first symptom is that two of the four nodes don't show files in some directories of the volume that are present on the other two nodes.
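
(A quick check here, using the volume name austen from the status output above, might be to ask the self-heal daemon what it thinks is pending:

    gluster volume heal austen info
    gluster volume heal austen info split-brain

Entries listed there would point at pending replication rather than a layout problem.)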
>> >
>> > The second symptom is that some compile commands (but not all) fail with "No space left on device" even though there is plenty of space on every brick.
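
(For the "No space left on device" symptom it may be worth comparing free space, free inodes, and what gluster itself reports for each brick, e.g.:

    df -h /srv/brick
    df -i /srv/brick
    gluster volume status austen detail

An exhausted inode table, or the cluster.min-free-disk threshold being hit, can both produce that error even when df -h shows plenty of room.)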
>> >
>> > The second symptom seems harder to track down than the first.  Presumably my nodes are not properly synchronized in some way.  How would I track something like this down?
>> >
>> > A side note:  I have had rebalance fix-layout seemingly hang with no evidence of progress, and a plain rebalance took so long that I eventually stopped it.  I created the cluster two nodes at a time, with the understanding that I would be able to add pairs of nodes without problems, but so far it doesn't seem to be working out that way.
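
(If you retry fix-layout, polling

    gluster volume rebalance austen status

should show whether it is still scanning directories or genuinely stuck; on a large directory tree a fix-layout can take a long time while appearing to do nothing.)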
>> >
>> > Any suggestions would be appreciated! As with my last question, I'm not sure what further information would be helpful, but I'll be happy to provide whatever is needed!
>> >
>> > David
>> >
>>

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
