On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
OK, so the log just hints to the following:

[2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Reset Brick on local node
[2017-07-05 15:04:07.178214] E [MSGID: 106123] [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases] 0-management: Commit Op Failed

While going through the code, glusterd_op_reset_brick () failed, resulting in these logs. Now, I don't see any error logs generated from glusterd_op_reset_brick (), which makes me think that we have failed from a place where we log the failure at debug level. Would you be able to restart the glusterd service in debug log mode, rerun this test, and share the log?
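If it helps, a one-off foreground run should give the same information; if I remember the option correctly, glusterd's --debug flag runs it in the foreground with the log level forced to DEBUG and output sent to the console:

systemctl stop glusterd
glusterd --debug
# reproduce the reset-brick failure, then Ctrl+C and bring the service back:
systemctl start glusterd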
What's the best way to set glusterd in debug mode?
Can I set debug mode on this volume, and work on it even if it is now compromised?
I ask because I have tried this:
[root@ovirt01 ~]# gluster volume get export diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             INFO
[root@ovirt01 ~]# gluster volume set export diagnostics.brick-log-level DEBUG
volume set: failed: Error, Validation Failed
[root@ovirt01 ~]#
While on another volume that is in a good state, I can run:
[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level DEBUG
volume set: success
[root@ovirt01 ~]#
[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             DEBUG
[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level INFO
volume set: success
[root@ovirt01 ~]#
[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             INFO
[root@ovirt01 ~]#
Do you mean to run the reset-brick command for another volume, or for the same one? Can I run it against this "now broken" volume?
Or perhaps can I modify /usr/lib/systemd/system/glusterd.service and, in the [Service] section, change
from
Environment="LOG_LEVEL=INFO"
to
Environment="LOG_LEVEL=DEBUG"
Environment="LOG_LEVEL=DEBUG"
and then
systemctl daemon-reload
systemctl restart glusterd
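A drop-in override might be cleaner than editing the packaged unit under /usr/lib directly, since it would survive package updates; if I understand systemctl edit correctly, it creates the override under /etc/systemd/system/glusterd.service.d/ and reloads units afterwards:

systemctl edit glusterd
# in the editor that opens, add:
#   [Service]
#   Environment="LOG_LEVEL=DEBUG"
systemctl restart glusterd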
I think it would be better to keep gluster in debug mode for as little time as possible, as there are other volumes active right now and I want to avoid filling up the file system that holds the log files.
It would be best to put only some components in debug mode if possible, as in the example commands above.
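For instance, restricted to the volume under test (a sketch; <volname> stands for whichever volume the test runs against, and diagnostics.client-log-level is the client-side counterpart of the brick-side option used earlier):

gluster volume set <volname> diagnostics.brick-log-level DEBUG
gluster volume set <volname> diagnostics.client-log-level DEBUG
# ... rerun the test, then drop back to the default:
gluster volume set <volname> diagnostics.brick-log-level INFO
gluster volume set <volname> diagnostics.client-log-level INFO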
Let me know,
thanks