Re: op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
OK, so the log just hints at the following:

[2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Reset Brick on local node
[2017-07-05 15:04:07.178214] E [MSGID: 106123] [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases] 0-management: Commit Op Failed

While going through the code, glusterd_op_reset_brick () failed, resulting in these logs. Now I don't see any error logs generated from glusterd_op_reset_brick (), which makes me think we may have failed at a place where the failure is logged only in debug mode. Would you be able to restart the glusterd service in debug log mode, rerun this test, and share the log?


What's the best way to set glusterd in debug mode?
Can I set it for this volume, and work on it even if it is now compromised?
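
For example, would running glusterd by hand with debug logging be acceptable? A rough sketch, assuming glusterd honors the standard --log-level daemon option:

systemctl stop glusterd
glusterd --log-level DEBUG    # start glusterd with verbose logging (writes to the usual glusterd log under /var/log/glusterfs/)
# ... rerun the reset-brick test and capture the log ...
pkill glusterd                # stop the hand-started daemon
systemctl start glusterd      # return to the normal service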

I ask because I have tried this:

[root@ovirt01 ~]# gluster volume get export diagnostics.brick-log-level
Option                                  Value                                  
------                                  -----                                  
diagnostics.brick-log-level             INFO           


[root@ovirt01 ~]# gluster volume set export diagnostics.brick-log-level DEBUG
volume set: failed: Error, Validation Failed
[root@ovirt01 ~]#

While on another volume that is in a good state, I can run

[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level DEBUG
volume set: success
[root@ovirt01 ~]#

[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value                                  
------                                  -----                                  
diagnostics.brick-log-level             DEBUG        
                         
[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level INFO
volume set: success
[root@ovirt01 ~]#

[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value                                  
------                                  -----                                  
diagnostics.brick-log-level             INFO                                   
[root@ovirt01 ~]#

Do you mean to run the reset-brick command for another volume or for the same one? Can I run it against this "now broken" volume?
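
(For reference, the general form of the command, as I understand it from the documentation, with hypothetical host and brick path placeholders:

gluster volume reset-brick VOLNAME HOST:/path/to/brick start
gluster volume reset-brick VOLNAME HOST:/path/to/brick HOST:/path/to/brick commit force
)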

Or perhaps I could modify /usr/lib/systemd/system/glusterd.service and change, in the [Service] section,

from
Environment="LOG_LEVEL=INFO"

to
Environment="LOG_LEVEL=DEBUG"

and then
systemctl daemon-reload
systemctl restart glusterd
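
Or, instead of editing the packaged unit file, maybe a systemd drop-in would be cleaner. A sketch, assuming the unit really does read LOG_LEVEL as above:

mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/debug.conf <<'EOF'
[Service]
Environment="LOG_LEVEL=DEBUG"
EOF
systemctl daemon-reload
systemctl restart glusterd

# and to revert afterwards:
rm /etc/systemd/system/glusterd.service.d/debug.conf
systemctl daemon-reload
systemctl restart glusterd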

I think it would be better to keep gluster in debug mode for as little time as possible, as there are other volumes active right now, and I want to prevent the log files from filling their file system.
It would be best to put only some components in debug mode, if possible, as in the example commands above.

Let me know,
thanks

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
