Re: op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)


On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi <gianluca.cecchi@xxxxxxxxx> wrote:
On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi <gianluca.cecchi@xxxxxxxxx> wrote:

If needed, I can destroy and recreate this "export" volume with the old names (ovirt0N.localdomain.local) if you give me the sequence of commands, then enable debug logging and retry the reset-brick command.
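For reference, the destroy-and-recreate sequence being asked about might look like the following. This is only a sketch based on the volume layout shown later in this thread (1 x (2 + 1) arbiter replica, bricks at /gluster/brick3/export); double-check names and paths before running, as wiping brick directories destroys their data.

```shell
# Stop and delete the existing volume (run on any node in the pool)
gluster volume stop export
gluster volume delete export

# On EACH node: clear the old brick directory so stale xattrs/data
# don't block re-creation (destructive!)
rm -rf /gluster/brick3/export
mkdir -p /gluster/brick3/export

# Recreate as a replica-3 volume with one arbiter, using the old
# ovirt0N hostnames instead of the IPs
gluster volume create export replica 3 arbiter 1 \
    ovirt01.localdomain.local:/gluster/brick3/export \
    ovirt02.localdomain.local:/gluster/brick3/export \
    ovirt03.localdomain.local:/gluster/brick3/export
gluster volume start export
```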

Gianluca


So it seems I was able to destroy and re-create the volume.
Now I see that volume creation uses the new IPs by default, so I swapped the hostname roles in the commands after putting glusterd in debug mode on the host where I run the reset-brick command (do I have to enable debug on the other nodes too?)

You have to set the log level to debug for the glusterd instance on the node where the commit fails, and share the glusterd log of that particular node.
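One way to do that on a RHEL/CentOS-style node (an assumption; adjust for your distro) is via the sysconfig file that glusterd.service reads, so the daemon restarts at debug level:

```shell
# Assumption: glusterd.service sources /etc/sysconfig/glusterd and
# passes LOG_LEVEL to glusterd via --log-level
sed -i 's/^#\?LOG_LEVEL=.*/LOG_LEVEL=DEBUG/' /etc/sysconfig/glusterd
systemctl restart glusterd

# Alternatively, run the daemon with the log level on the command line:
#   glusterd --log-level DEBUG
# Then reproduce the failure and inspect /var/log/glusterfs/glusterd.log
```

Remember to revert the level afterwards, since debug logging is verbose.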
 


[root@ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful

[root@ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export ovirt01.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on ovirt02.localdomain.local. Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for details.
[root@ovirt01 ~]#

Attached is the glusterd.log in zip format:

The reset-brick operation appears in the logfile at 2017-07-06 11:42.
(BTW: can the log timestamps be in local time instead of UTC? My system uses CEST.)
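On the timezone question: Gluster logs in UTC by default; newer GlusterFS releases add a localtime logging option, which may not exist in the version used here. A hedged sketch, assuming a release that supports it:

```shell
# Assumption: a GlusterFS release that supports the
# cluster.localtime-logging cluster-wide option (not present in all
# versions; check `gluster volume set help` first)
gluster volume set all cluster.localtime-logging enable
```

If the option is not available, converting the UTC timestamps when reading the log is the only recourse.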

I see a difference, because the brick doesn't seem isolated as before...

[root@ovirt01 glusterfs]# gluster volume info export
 
Volume Name: export
Type: Replicate
Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: 10.10.2.103:/gluster/brick3/export
Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)

[root@ovirt02 ~]# gluster volume info export
 
Volume Name: export
Type: Replicate
Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: 10.10.2.103:/gluster/brick3/export
Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)

And in oVirt I also see all 3 bricks online.
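To cross-check that from the Gluster side rather than the oVirt UI, the usual commands are:

```shell
# Per-brick online state, PIDs and ports as each node sees them
gluster volume status export

# Confirm all peers are connected (a disconnected peer can explain
# a commit failing on specific nodes)
gluster peer status

# Any entries pending self-heal after the reset-brick
gluster volume heal export info
```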

Gianluca


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

