PGs unknown after pool creation (Nautilus 14.2.4/6)

Hello,


I have a small Ceph cluster running with 3 MON/MGR hosts and 3 OSD hosts. There are also 3 virtual hosts in the crushmap to provide a separate SSD pool. Two pools are currently in use, one of them restricted to the SSD device class.

My problem is that any new pool I create never becomes functional: all of its PGs stay in the unknown state. I've tried varying the PG count, the CRUSH rule, the size, and so on, but nothing helped.
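For reference, this is roughly how the symptom shows up (the pool name and PG count here are just placeholders, not my actual values):

```shell
# Create a test pool (hypothetical name/parameters for illustration):
ceph osd pool create testpool 32 32 replicated

# Every PG of the new pool reports "unknown":
ceph pg ls-by-pool testpool
ceph health detail | grep -i unknown
```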

My OSDs regularly log an error message like

"var/log/ceph/ceph-osd.42.log:2020-03-04 17:09:04.641 7f76240c9700  0 --1- [v2:192.168.44.110:6834/23888,v1:192.168.44.110:6835/23888] >> v1:192.168.44.111:6826/484449 conn(0x55901c805800 0x55901d5bf000 :-1 s=CONNECTING_SEND_CONNECT_MSG pgs=398 cs=186 l=0).handle_connect_reply_2 connect got BADAUTHORIZER"

but I can't find a reason for it (the clocks are synchronized).
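Since BADAUTHORIZER is often caused by clock skew between daemons breaking cephx, these are the checks I used to rule that out (osd.42 is the daemon from the log above; the auth settings to look for depend on your ceph.conf):

```shell
# Report clock agreement between the mons (skew can invalidate cephx tickets):
ceph time-sync-status

# Show the effective auth settings of the OSD from the log excerpt:
ceph config show osd.42 | grep auth_
```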

Also, one of my mons is two minor versions newer than the other nodes, but I would rather not upgrade the whole cluster right now, as I had a somewhat bad experience with the last update :)
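In case it helps, the version mismatch can be seen like this (standard command, no cluster-specific assumptions):

```shell
# Lists the running release of every daemon, grouped by type,
# so a mon on a different minor version stands out:
ceph versions
```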


Does anyone have an idea what I could try?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
