Re: Recovery or recreation of a monitor rocksdb

Yes, I did end up destroying and recreating the monitor.

As I wanted to reuse the same IP it was somewhat tedious, because I had to restart every OSD so it would pick up the new value of mon_host.

Is there any way to tell all OSDs that mon_host has a new value without restarting them?
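For what it's worth, mon_host is normally read at daemon startup, so injecting it at runtime may not fully take effect; but you can at least check what a running OSD currently holds and attempt a runtime injection. A rough sketch (osd IDs and IPs below are placeholders, not from this cluster):

```shell
# Value of mon_host that a *running* OSD daemon currently holds
# (run on the host where the OSD lives, via its admin socket):
ceph daemon osd.0 config get mon_host

# Attempt a runtime injection across all OSDs; for startup-time options
# like mon_host, Ceph may warn that a restart is still required:
ceph tell osd.* injectargs '--mon_host=10.0.0.1,10.0.0.2,10.0.0.3'
```

Even if the injection does not stick, updating mon_host in ceph.conf (or the centralized config on newer releases) ensures the next daemon start uses the new monitor address.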



On 4/4/22 16:48, Konstantin Shalygin wrote:
Hi,

The fastest way to fix the quorum issue is to redeploy the ceph-mon service.
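The usual redeploy sequence follows the Ceph "removing/adding a monitor" procedure; a sketch, assuming a monitor named mon-a and default Nautilus paths (names and paths are examples, adjust for your deployment):

```shell
# Remove the broken monitor from the monmap (run from a node with quorum):
ceph mon remove mon-a

# Move the corrupt store aside instead of deleting it, in case it's needed later:
mv /var/lib/ceph/mon/ceph-mon-a /var/lib/ceph/mon/ceph-mon-a.bak

# Fetch the mon keyring and current monmap from the surviving monitors:
ceph auth get mon. -o /tmp/mon.keyring
ceph mon getmap -o /tmp/monmap

# Rebuild a fresh monitor data directory and start it; it will sync from peers:
ceph-mon -i mon-a --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
chown -R ceph:ceph /var/lib/ceph/mon/ceph-mon-a
systemctl start ceph-mon@mon-a
```

The fresh monitor pulls its store from the other two over the network, so no manual rocksdb surgery is needed.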


k
Sent from my iPhone

On 1 Apr 2022, at 14:43, Victor Rodriguez <vrodriguez@xxxxxxxxxxxxx> wrote:

Hello,

I have a 3-node cluster running Proxmox + Ceph version 14.2.22 (nautilus). After a power failure one of the monitors does not start. The log indicates some kind of problem with its rocksdb, but I can't really pinpoint the issue. The log is available at https://pastebin.com/TZrFrZ1u.

How can I check or repair the rocksdb of this monitor?

Is there any way to force replication from another monitor?

Should I just remove that monitor from the cluster and re-add it back?

Should I force something to remove it from the cluster?
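In case it helps anyone searching later: the monitor's store can be inspected with ceph-kvstore-tool, which ships with Ceph. A minimal sketch, assuming a monitor named mon-a and default paths; always stop the daemon first and work on a backup, since destructive-repair can lose data:

```shell
# Stop the affected monitor and back up its store before touching anything:
systemctl stop ceph-mon@mon-a
cp -a /var/lib/ceph/mon/ceph-mon-a /root/ceph-mon-a.bak

# Try a full read of the keyspace; failures here confirm rocksdb corruption:
ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-mon-a/store.db list > /dev/null

# Last-resort repair (may discard data); only on a backed-up store:
ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-mon-a/store.db destructive-repair
```

With two healthy monitors still in quorum, removing and recreating the broken one (so it resyncs over the network) is generally safer than repairing its store in place.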


I've had problems with rocksdb only once before. That time it was an OSD: I simply removed and recreated it, and Ceph rebuilt/replaced all the affected PGs, etc.

Many thanks in advance.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




