Unable To Heal On 3.7 Branch With Arbiter

Hello,

 

I have a rather odd situation I'm hoping someone can help me out with. I have a two-node Gluster replica with an arbiter, running on the 3.7 branch. I don't appear to have a split-brain, yet I am unable to heal the cluster after a power failure. Perhaps someone can tell me how to fix this? I don't see much in the logs that helps. Here is some command output that might be useful:

 

gluster volume info gv0:

Volume Name: gv0
Type: Replicate
Volume ID: 14e7bb9c-aa5e-4386-8dd2-83a88d93dc54
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/export/brick1
Brick2: server2:/export/brick1
Brick3: kvm:/export/brick1
Options Reconfigured:
nfs.acl: off
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable


--- 

gluster volume status gv0 detail:

Status of volume: gv0
------------------------------------------------------------------------------
Brick                : Brick server1:/export/brick1
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 4409
File System          : ext3
Device               : /dev/sdb1
Mount Options        : rw
Inode Size           : 128
Disk Space Free      : 1.7TB
Total Disk Space     : 1.8TB
Inode Count          : 244203520
Free Inodes          : 244203413
------------------------------------------------------------------------------
Brick                : Brick server2:/export/brick1
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 4535
File System          : ext3
Device               : /dev/sdb1
Mount Options        : rw
Inode Size           : 128
Disk Space Free      : 1.7TB
Total Disk Space     : 1.8TB
Inode Count          : 244203520
Free Inodes          : 244203405
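
What stands out to me is that the arbiter brick on kvm does not appear in the detail output at all, so I'm guessing its brick process never came back after the power failure. Is something along these lines the right way to check and bring it back? (This is just my assumption of what to try, not something I've confirmed.)

# On kvm: is glusterd itself running? (service vs. systemctl depends on the distro)
service glusterd status

# From any node: respawn only the brick processes that are not running;
# bricks that are already online are left alone
gluster volume start gv0 force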


--- 

Why doesn’t this accomplish anything?

gluster volume heal gv0:

Launching heal operation to perform index self heal on volume gv0 has been successful
Use heal info commands to check status


--- 

Or this?

gluster volume heal gv0 full:

Launching heal operation to perform full self heal on volume gv0 has been successful
Use heal info commands to check status
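
After launching either heal, is polling something like the following the right way to tell whether the self-heal daemon is actually making progress? (I'm assuming the pending count should shrink if the heal is doing anything.)

# Count of entries still pending heal, per brick
gluster volume heal gv0 statistics heal-count

# Full list of pending entries (same output as further down)
gluster volume heal gv0 info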


--- 

gluster volume heal gv0 info split-brain:

Brick server1:/export/brick1
Number of entries in split-brain: 0

Brick server2:/export/brick1
Number of entries in split-brain: 0

Brick kvm:/export/brick1
Status: Transport endpoint is not connected
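
The "Transport endpoint is not connected" for the kvm brick is the part I don't understand. Are these the right things to check for basic connectivity? (The brick port below is just a guess on my part, based on the ports the other two bricks are using.)

# From server1 or server2: is the kvm peer shown as Connected?
gluster peer status

# Is glusterd on kvm reachable on its management port (24007)?
nc -zv kvm 24007

# And the brick port itself, assuming kvm's brick also got 49152
nc -zv kvm 49152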

 

--- 

And these are the entries I can't seem to get to heal:

gluster volume heal gv0 info:

Brick server1:/export/brick1
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/0b16f938-e859-41e3-bb33-fefba749a578.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/715ddb6c-67af-4047-9fa0-728019b49d63.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/d2873b74-f6be-43a9-bdf1-276761e3e228.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/asdf
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/940ee016-8288-4369-9fb8-9c64cb3af256.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/930196aa-0b85-4482-97ab-3d05e9928884.vhd
Number of entries: 12

Brick server2:/export/brick1
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/d2873b74-f6be-43a9-bdf1-276761e3e228.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/715ddb6c-67af-4047-9fa0-728019b49d63.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/930196aa-0b85-4482-97ab-3d05e9928884.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/asdf
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/940ee016-8288-4369-9fb8-9c64cb3af256.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/0b16f938-e859-41e3-bb33-fefba749a578.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd
/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd
Number of entries: 12

Brick kvm:/export/brick1
Status: Transport endpoint is not connected
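
In case the changelog extended attributes tell someone more than they tell me, this is what I believe I can run directly on the bricks (on server1 and server2, against the brick path rather than the mount) for one of the pending entries listed above:

# dump all trusted.* xattrs, including the AFR changelog, in hex
getfattr -d -m . -e hex /export/brick1/4b37411d-97cd-0d4c-f898-a3b93cfe1b34/asdf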


---

Thank you.
