Re: Zabbix module Octopus 15.2.3

I'm going to resurrect this thread and throw my hat in the ring, as I'm having this issue as well.

I just moved to 15.2.4 on Ubuntu 18.04/bionic, and Zabbix is 5.0.2.
$ ceph zabbix config-show
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1167, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/zabbix/module.py", line 407, in handle_command
    return 0, json.dumps(self.config, index=4, sort_keys=True), ''
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
TypeError: __init__() got an unexpected keyword argument 'index'

That looks to be exactly the same as your error.
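
For what it's worth, the traceback points at what looks like a one-character typo in the module: json.dumps() takes an 'indent' keyword, not 'index' (the stray keyword falls through to JSONEncoder.__init__, hence the TypeError). A minimal reproduction, with placeholder config values:

import json

config = {"zabbix_host": "zabbix.example.com", "zabbix_identifier": "ceph-cluster"}

# json.dumps(config, index=4, sort_keys=True) raises exactly the
# TypeError above, because 'index' is passed through to
# json.JSONEncoder.__init__(), which doesn't accept it.
print(json.dumps(config, indent=4, sort_keys=True))  # 'indent' is the intended keyword

So presumably the fix is just s/index/indent/ at line 407 of /usr/share/ceph/mgr/zabbix/module.py.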

I also appear to be seeing a weird issue with the Zabbix template as it pertains to discovery and the daemon/pool stats.

I made sure that I pulled the latest XML from the ceph repo here: https://github.com/ceph/ceph/blob/master/src/pybind/mgr/zabbix/zabbix_template.xml

When I run the discovery command, it looks like it runs ok?
$ ceph zabbix discovery
Sending discovery data to Zabbix

And if I pass the --verbose flag, I get this at the end:

better match: 1.5 > 0.5: zabbix config-set <key> <value>
better match: 1.5 > 1.5: zabbix config-show
better match: 1.5 > 1.5: zabbix send
better match: 2.5 > 1.5: zabbix discovery
bestcmds_sorted:
[{'flags': 8,
  'help': 'Discovering Zabbix data',
  'module': 'mgr',
  'perm': 'r',
  'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=zabbix),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=discovery)]}]
Submitting command:  {'prefix': 'zabbix discovery', 'target': ('mon-mgr', '')}
submit ['{"prefix": "zabbix discovery", "target": ["mon-mgr", ""]}'] to mon-mgr
Sending discovery data to Zabbix

The pools get discovered correctly; however, the pool % used doesn't work, and the R/W IOPS/bandwidth figures are off by a factor of something like 10^9, I think.

Item                                 Last check              Last value   Change
[fs-metadata] Pool Percent Used      07/27/2020 06:17:22 PM  0%
[fs-metadata] Pool RAW Used          07/28/2020 05:07:22 PM  180 Gb       +2.52 Mb
[fs-metadata] Pool Read bandwidth    07/28/2020 05:07:22 PM  4.3 Tbytes   +560.13 Kbytes
[fs-metadata] Pool Read operations   07/28/2020 05:07:22 PM  460.18 Mops  +170 ops
[fs-metadata] Pool Used              07/28/2020 05:07:22 PM  180 Gb       +2.52 Mb
[fs-metadata] Pool Write bandwidth   07/28/2020 05:07:22 PM  1.43 Tbytes  +337.92 Kbytes
[fs-metadata] Pool Write operations  07/28/2020 05:07:22 PM  62.04 Mops   +99 ops

However, the actual client I/O on this pool typically looks like this:
pool fs-metadata id 16
  client io 3.9 KiB/s rd, 3.3 KiB/s wr, 2 op/s rd, 0 op/s wr
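
My guess (and it is just a guess) is that the module is sending cumulative pool counters (total bytes/ops since the pool was created) while the template presents them as instantaneous bandwidth/IOPS, which would explain a 4.3 Tbyte "bandwidth" on a pool doing a few KiB/s. If so, either a "Change per second" preprocessing step on the Zabbix side or deriving the rate before sending would reconcile the numbers. A minimal sketch of the latter; the helper and sample numbers are mine, not the module's:

# Hypothetical helper, not the module's actual code: convert two samples
# of a monotonically increasing counter into a per-second rate.
def per_second(prev_value, cur_value, interval_seconds):
    return (cur_value - prev_value) / interval_seconds

# e.g. the +560.13 Kbytes delta above, if the item updates once a minute:
print(per_second(0, 560.13 * 1024, 60))  # ~9560 bytes/s, i.e. a few KiB/s

That lands in the same ballpark as the client io figures above, which is what makes me suspect counters-vs-rates.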

And Zabbix shows this error for the "Pool Percent Used" item:
Value of type "string" is not suitable for value type "Numeric (unsigned)". Value "0.014200450852513313"
So it looks like that item's value type should be Numeric (float) rather than Numeric (unsigned), and that's a pretty easy change.
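
If anyone wants to patch the template wholesale rather than click through the UI, something like this one-off should do it. I'm assuming the template is still in the older numeric export format (value_type 3 = unsigned, 0 = float) and that the affected item keys contain 'percent_used'; adjust to taste:

import xml.etree.ElementTree as ET

# One-off fixup: flip 'Pool Percent Used'-style items from
# Numeric (unsigned) to Numeric (float) so fractional values are accepted.
tree = ET.parse('zabbix_template.xml')
for item in tree.iter('item'):
    key = item.findtext('key') or ''
    value_type = item.find('value_type')
    if 'percent_used' in key and value_type is not None and value_type.text == '3':
        value_type.text = '0'  # 3 = unsigned, 0 = float in the XML export encoding
tree.write('zabbix_template_fixed.xml')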

The other, much bigger, issue I am seeing is with the discovery for OSDs.

It appears that it's descending the CRUSH tree and selecting the CRUSH failure domains (buckets) instead of the OSDs themselves.

For one tree the failure domain is chassis; for another it is host.
And the OSD values it is showing, "[osd.-##]", correspond directly to the (negative) CRUSH bucket IDs.
Table of the items below; after the table is a sketch of what I think discovery should be doing instead.

All 18 items share: history 90d, trends 365d, type "Zabbix trapper", status Enabled.

Name                                              Key                                Triggers  Application
Ceph OSD discovery: [osd.-18] OSD fill            ceph.[osd.-18,osd_fill]            2         Ceph CRUSH [ssd]
Ceph OSD discovery: [osd.-18] OSD in              ceph.[osd.-18,in]                  -         Ceph CRUSH [ssd]
Ceph OSD discovery: [osd.-18] OSD latency apply   ceph.[osd.-18,osd_latency_apply]   -         Ceph CRUSH [ssd]
Ceph OSD discovery: [osd.-18] OSD latency commit  ceph.[osd.-18,osd_latency_commit]  -         Ceph CRUSH [ssd]
Ceph OSD discovery: [osd.-18] OSD PGs             ceph.[osd.-18,num_pgs]             -         Ceph CRUSH [ssd]
Ceph OSD discovery: [osd.-18] OSD up              ceph.[osd.-18,up]                  1         Ceph CRUSH [ssd]
Ceph OSD discovery: [osd.-55] OSD fill            ceph.[osd.-55,osd_fill]            2         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-55] OSD in              ceph.[osd.-55,in]                  -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-55] OSD latency apply   ceph.[osd.-55,osd_latency_apply]   -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-55] OSD latency commit  ceph.[osd.-55,osd_latency_commit]  -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-55] OSD PGs             ceph.[osd.-55,num_pgs]             -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-55] OSD up              ceph.[osd.-55,up]                  1         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-56] OSD fill            ceph.[osd.-56,osd_fill]            2         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-56] OSD in              ceph.[osd.-56,in]                  -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-56] OSD latency apply   ceph.[osd.-56,osd_latency_apply]   -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-56] OSD latency commit  ceph.[osd.-56,osd_latency_commit]  -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-56] OSD PGs             ceph.[osd.-56,num_pgs]             -         Ceph CRUSH [default]
Ceph OSD discovery: [osd.-56] OSD up              ceph.[osd.-56,up]                  1         Ceph CRUSH [default]
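
For reference, here's what I think discovery should be doing, sketched from the outside (I haven't read the module's actual code; the node structure below just mirrors the shape of 'ceph osd tree --format json'): only leaf nodes of type 'osd' should become discovery entries, while anything with a negative ID is a CRUSH bucket (root/chassis/host) that should be descended into, never reported as an OSD.

# Hypothetical sketch, not the module's actual code.
def discover_osds(nodes):
    """Yield OSD ids from a flat CRUSH node list ('ceph osd tree -f json' style)."""
    for node in nodes:
        # Real OSDs have non-negative ids; negative ids are CRUSH buckets.
        if node.get('type') == 'osd' and node['id'] >= 0:
            yield node['id']

nodes = [
    {'id': -18, 'type': 'chassis', 'children': [3, 4]},  # bucket: descend, don't report
    {'id': 3, 'type': 'osd'},                            # real OSD
    {'id': 4, 'type': 'osd'},                            # real OSD
]
print([f'osd.{i}' for i in discover_osds(nodes)])  # ['osd.3', 'osd.4']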

One last "oddity" in my environment: I'm not running containerized, just what cephadm calls a "legacy" deployment, but hopefully that isn't the root cause of any of this.

Reed 

On Jun 6, 2020, at 3:24 AM, Gert Wieberdink <gert.wieberdink@xxxxxxxx> wrote:

Trying to configure the Zabbix module in Octopus 15.2.3.
CentOS 8.1 environment. Installed zabbix40-agent for CentOS 8.1 (from the EPEL repository), which also installs zabbix_sender.
After enabling the Zabbix module in Ceph, I configured my Zabbix host and Zabbix identifier.
# ceph zabbix config-set zabbix_host <zabbix-fqdn>
# ceph zabbix config-set zabbix_identifier <ident>
# ceph zabbix config-show
Error EINVAL: Traceback (most recent call last):
 File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
   return self.handle_command(inbuf, cmd)
 File "/usr/share/ceph/mgr/zabbix/module.py", line 407, in handle_command
   return 0, json.dumps(self.config, index=4, sort_keys=True), ''
 File "/lib64/python3.6/json/__init__.py", line 238, in dumps
   **kw).encode(obj)
TypeError: __init__() got an unexpected keyword argument 'index'

# ceph -v
ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
# ceph health detail
HEALTH_OK

Anyone found a solution?
rgds,
-gw


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
