Re: Reset health.

 You can use the `ceph crash` interface to view/archive recent crashes. [1]

To list recent crashes: ceph crash ls-new
To get information about a particular crash: ceph crash info <crash-id>
To silence a crash: ceph crash archive <crash-id>
To silence all active crashes: ceph crash archive-all

[1]
https://docs.ceph.com/en/latest/rados/operations/health-checks/#recent-crash
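
For example, a session along these lines should clear the RECENT_CRASH warning
(output abbreviated; the crash ID shown is illustrative, yours will differ):

    $ ceph crash ls-new
    ID                                    ENTITY   NEW
    2024-03-20T18:33:12.017102Z_<uuid>    osd.381   *
    ...
    $ ceph crash archive-all
    $ ceph health

Note that archiving only marks the reports as seen so they no longer trigger
the health warning; they remain available via `ceph crash ls` and
`ceph crash info <crash-id>` if you want to look into why the OSDs crashed
during the reboot.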

On Fri, 22 Mar 2024 at 18:28, Albert Shih <Albert.Shih@xxxxxxxx> wrote:

> Hi,
>
> Very basic question: 2 days ago I rebooted the whole cluster. Everything
> works fine, but I'm guessing that during the shutdown 4 OSDs were marked as
> crashed:
>
> [WRN] RECENT_CRASH: 4 daemons have recently crashed
>     osd.381 crashed on host cthulhu5 at 2024-03-20T18:33:12.017102Z
>     osd.379 crashed on host cthulhu4 at 2024-03-20T18:47:13.838839Z
>     osd.376 crashed on host cthulhu3 at 2024-03-20T18:50:00.877536Z
>     osd.373 crashed on host cthulhu1 at 2024-03-20T18:56:46.887394Z
>
> Is there any way to «clean» that up? Otherwise my icinga
> complains...
>
> I'd rather not have to add a downtime in icinga.
>
> Thanks.
> --
> Albert SHIH 🦫 🐸
> France
> Heure locale/Local time:
> Fri 22 Mar 2024 22:24:35 CET
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



