Re: Support of SNMP on CEPH ansible

I can't speak to the details of ceph-ansible. I don't use it because,
from what I can see, it requires a lot more symmetry in the server
farm than I have.

It is, however, my understanding that cephadm is the preferred
installation and management option these days, and it certainly helped
me with the migration.

The actual SNMP subsystem for Ceph is based on listening to Prometheus
alerts and translating them into SNMP traps
(https://github.com/maxwo/snmp_notifier). It should be fairly version-
independent, I'd think. If ceph-ansible doesn't support it, I expect
that an ansible playbook to do the job would be relatively simple.
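
For illustration only (I haven't wired this up myself), the plumbing
is roughly: run snmp_notifier somewhere reachable, then point an
Alertmanager webhook receiver at it. The image name, the default port
9464 and the /alerts path below are what I recall from the
snmp_notifier README and cephadm's snmp-gateway service, so treat them
as assumptions and check them against the project docs:

  # rough sketch: run the alert-to-trap translator
  docker run -d --name snmp-notifier -p 9464:9464 \
      docker.io/maxwo/snmp-notifier \
      --snmp.destination=<your-snmp-manager>:162

  # then add a webhook receiver to alertmanager.yml, e.g.:
  #   receivers:
  #     - name: 'snmp'
  #       webhook_configs:
  #         - url: 'http://<notifier-host>:9464/alerts'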

Just as an alternative: since I don't use SNMP for Ceph, and I do use
Nagios as my general systems monitor, I wrote a Nagios NRPE script that
simply runs "ceph health" and looks for "OK", "WARN" or "ERROR" in the
response, which translates to Nagios alert levels. If Nagios alerts me,
I then go to the Ceph control panel for details. I'd think that's a lot
less work than dissecting traps, though I can tolerate a 5-minute
polling delay on alerts here. Your mileage may vary.
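
For the curious, the check boils down to something like the sketch
below (not my exact script, just the idea; "ceph health" prints
HEALTH_OK, HEALTH_WARN or HEALTH_ERR, which map straight onto Nagios
exit codes):

  #!/bin/bash
  # minimal NRPE-style check: map "ceph health" output to Nagios states
  status=$(ceph health 2>/dev/null)
  case "$status" in
    HEALTH_OK*)   echo "OK - $status";       exit 0 ;;
    HEALTH_WARN*) echo "WARNING - $status";  exit 1 ;;
    HEALTH_ERR*)  echo "CRITICAL - $status"; exit 2 ;;
    *)            echo "UNKNOWN - $status";  exit 3 ;;
  esac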

On Tue, 2023-12-19 at 08:10 +0000, Eugen Block wrote:
> Hi,
> 
> I don't have an answer for the SNMP part; I guess you could just
> bring up your own SNMP daemon and configure it to your needs. As for
> the orchestrator backend, you have these three options (I don't know
> what "test_orchestrator" does, but it doesn't sound like it should be
> used in production):
> 
>              enum_allowed=['cephadm', 'rook', 'test_orchestrator'],
> 
> If you intend to use the orchestrator, I suggest moving to cephadm
> (you can convert an existing cluster by following this guide:
> https://docs.ceph.com/en/latest/cephadm/adoption/). Although the
> orchestrator module is "on", it still requires a backend.
> 
> Regards,
> Eugen
> 
Quoting Lokendra Rathour <lokendrarathour@xxxxxxxxx>:
> 
> > Hi Team,
> > please help in the reference of the issue raised.
> > 
> > 
> > Best Regards,
> > Lokendra
> > 
> > On Wed, Dec 13, 2023 at 2:33 PM Kushagr Gupta
> > <kushagrguptasps.mun@xxxxxxxxx>
> > wrote:
> > 
> > > Hi Team,
> > > 
> > > *Environment:*
> > > We have deployed a ceph setup using ceph-ansible.
> > > Ceph-version: 18.2.0
> > > OS: Almalinux 8.8
> > > We have a 3 node-setup.
> > > 
> > > *Queries:*
> > > 
> > > 1. Is SNMP supported for ceph-ansible? Is there some other way to
> > > set up an SNMP gateway for the ceph cluster?
> > > 2. Do we have a procedure to set the backend for the
> > > ceph-orchestrator via ceph-ansible? Which backend should we use?
> > > 3. Are there any Ceph MIB files which work independently of
> > > Prometheus?
> > > 
> > > 
> > > *Description:*
> > > We are trying to perform SNMP monitoring for the ceph cluster
> > > using the following links:
> > > 
> > > 1.
> > > https://docs.ceph.com/en/quincy/cephadm/services/snmp-gateway/#:~:text=Ceph's%20SNMP%20integration%20focuses%20on,a%20designated%20SNMP%20management%20platform
> > > .
> > > 2.
> > > https://www.ibm.com/docs/en/storage-ceph/7?topic=traps-deploying-snmp-gateway
> > > 
> > > But when we try to follow the steps mentioned in the above links
> > > and run any "ceph orch" command, we get the following error:
> > > "Error ENOENT: No orchestrator configured (try `ceph orch set
> > > backend`)"
> > > 
> > > After going through the following links:
> > > 1.
> > > https://www.ibm.com/docs/en/storage-ceph/5?topic=operations-use-ceph-orchestrator
> > > 2.
> > > https://forum.proxmox.com/threads/ceph-mgr-orchestrator-enabled-but-showing-missing.119145/
> > > 3. https://docs.ceph.com/en/latest/mgr/orchestrator_modules/
> > > I think that since we have deployed the cluster using
> > > ceph-ansible, we can't use the "ceph orch" commands.
> > > When we checked the cluster, the following modules were enabled:
> > > "
> > > [root@storagenode1 ~]# ceph mgr module ls
> > > MODULE
> > > balancer           on (always on)
> > > crash              on (always on)
> > > devicehealth       on (always on)
> > > orchestrator       on (always on)
> > > pg_autoscaler      on (always on)
> > > progress           on (always on)
> > > rbd_support        on (always on)
> > > status             on (always on)
> > > telemetry          on (always on)
> > > volumes            on (always on)
> > > alerts             on
> > > iostat             on
> > > nfs                on
> > > prometheus         on
> > > restful            on
> > > dashboard          -
> > > influx             -
> > > insights           -
> > > localpool          -
> > > mds_autoscaler     -
> > > mirroring          -
> > > osd_perf_query     -
> > > osd_support        -
> > > rgw                -
> > > selftest           -
> > > snap_schedule      -
> > > stats              -
> > > telegraf           -
> > > test_orchestrator  -
> > > zabbix             -
> > > [root@storagenode1 ~]#
> > > "
> > > As can be seen above, the orchestrator module is on.
> > > 
> > > Also, we were exploring SNMP further, and as per the file
> > > "/etc/prometheus/ceph/ceph_default_alerts.yml" on the ceph storage
> > > nodes, the OIDs in that file represent the OIDs for ceph
> > > components via Prometheus. For example, for the following OID:
> > > 1.3.6.1.4.1.50495.1.2.1.2.1
> > > [root@storagenode3 ~]# snmpwalk -v 2c -c 209ijvfwer0df92jd -O e
> > > 10.0.1.36
> > > 1.3.6.1.4.1.50495.1.2.1.2.1
> > > CEPH-MIB::promHealthStatusError = No Such Object available on
> > > this agent
> > > at this OID
> > > [root@storagenode3 ~]#
> > > 
> > > Kindly help us with the same.
> > > 
> > > Thanks and regards,
> > > Kushagra Gupta
> > > 
> > 
> > 
> > --
> > ~ Lokendra
> > skype: lokendrarathour
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



