Re: Cephadm: module not found

A quick search suggests this is a known bug: https://tracker.ceph.com/issues/47438

I’m not very familiar with the development process, but perhaps the fix should be backported?
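
For context, the traceback quoted below boils down to the cephadm mgr module choking while it reloads its OSD-removal queue from persistent storage: a value that has already been deserialized into a dict is handed to json.loads() a second time. A minimal sketch of the failure mode and a tolerant loader, using a hypothetical helper name (this is not the actual upstream patch):

import json

def load_osd_record(osd):
    """Accept either a JSON string or an already-parsed dict (hypothetical helper)."""
    # The failing path effectively calls json.loads(osd) unconditionally;
    # when ``osd`` is already a dict, Python raises:
    #   TypeError: the JSON object must be str, bytes or bytearray, not 'dict'
    if isinstance(osd, (str, bytes, bytearray)):
        return json.loads(osd)
    return osd  # already deserialized, pass it through

print(load_osd_record('{"osd_id": 3, "draining": false}'))  # parsed from JSON text
print(load_osd_record({"osd_id": 3, "draining": False}))    # dict passes through

If that is indeed what you’re hitting, the module should load again once the offending stored state is cleared or a fixed build is running; `ceph config-key ls` will show what cephadm has persisted (which exact key holds the removal queue is my assumption from the traceback, so verify before removing anything).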

> On Oct 26, 2020, at 22:40, Marco Venuti <afm.itunev@xxxxxxxxx> wrote:
> 
> I do indeed have very small OSDs, but I will shortly be able to test Ceph
> on much larger ones.
> 
> However, looking in the syslog I found this:
> 
> Oct 23 23:45:14 ceph0 bash[2265]: debug 2020-10-23T21:45:14.918+0000 7fe8de931700 -1 mgr load Failed to construct class in 'cephadm'
> Oct 23 23:45:14 ceph0 bash[2265]: debug 2020-10-23T21:45:14.918+0000 7fe8de931700 -1 mgr load Traceback (most recent call last):
> Oct 23 23:45:14 ceph0 bash[2265]:   File "/usr/share/ceph/mgr/cephadm/module.py", line 325, in __init__
> Oct 23 23:45:14 ceph0 bash[2265]:     self.rm_util.load_from_store()
> Oct 23 23:45:14 ceph0 bash[2265]:   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 465, in load_from_store
> Oct 23 23:45:14 ceph0 bash[2265]:     osd_obj = OSD.from_json(json.loads(osd), ctx=self)
> Oct 23 23:45:14 ceph0 bash[2265]:   File "/lib64/python3.6/json/__init__.py", line 348, in loads
> Oct 23 23:45:14 ceph0 bash[2265]:     'not {!r}'.format(s.__class__.__name__))
> Oct 23 23:45:14 ceph0 bash[2265]: TypeError: the JSON object must be str, bytes or bytearray, not 'dict'
> Oct 23 23:45:14 ceph0 bash[2265]: debug 2020-10-23T21:45:14.918+0000 7fe8de931700 -1 mgr operator() Failed to run module in active mode ('cephadm')
> Oct 23 23:45:14 ceph0 bash[2265]: debug 2020-10-23T21:45:14.922+0000 7fe8de931700  0 [crash DEBUG root] setting log level based on debug_mgr: WARNING (1/5)
> [...]
> Oct 23 23:45:33 ceph0 bash[2265]: debug 2020-10-23T21:45:33.477+0000 7fe8cf913700 -1 no module 'cephadm'
> 
> Before this, cephadm seemed to be operating normally.
> 
> Attached is a larger portion of the relevant log.
> 
>> On Mon, Oct 26, 2020 at 09:03 Eugen Block <eblock@xxxxxx> wrote:
>> 
>> Interesting. What do you see in the MGR logs? There should be
>> something in there.
>> 
>> 
>> Quoting Marco Venuti <afm.itunev@xxxxxxxxx>:
>> 
>>> Yes, this is the status:
>>> 
>>> # ceph -s
>>>  cluster:
>>>    id:     ab471d92-14a2-11eb-ad67-525400bbdc0d
>>>    health: HEALTH_OK
>>> 
>>>  services:
>>>    mon: 5 daemons, quorum ceph0.starfleet.sns.it,ceph1,ceph3,ceph5,ceph4 (age 104m)
>>>    mgr: ceph1.jxmtpn(active, since 17m), standbys: ceph0.starfleet.sns.it.clzhjp
>>>    mds: starfs:1 {0=starfs.ceph4.kqwkdc=up:active} 1 up:standby
>>>    osd: 12 osds: 10 up (since 103m), 10 in (since 2d)
>>> 
>>>  task status:
>>>    scrub status:
>>>        mds.starfs.ceph4.kqwkdc: idle
>>> 
>>>  data:
>>>    pools:   4 pools, 97 pgs
>>>    objects: 10.95k objects, 3.6 GiB
>>>    usage:   23 GiB used, 39 GiB / 62 GiB avail
>>>    pgs:     97 active+clean
>>> 
>>> On Sun, Oct 25, 2020 at 21:02 Eugen Block <eblock@xxxxxx> wrote:
>>> 
>>>> Is one of the MGRs up? What is the ceph status?
>>>> 
>>>> 
>>>> Quoting Marco Venuti <afm.itunev@xxxxxxxxx>:
>>>> 
>>>>> Hi,
>>>>> I'm experimenting with Ceph on a (small) test cluster. I'm using
>>>>> version 15.2.5 deployed with cephadm.
>>>>> I was trying to do some "disaster" testing, such as wiping a disk to
>>>>> simulate a hardware failure, then destroying the OSD and recreating
>>>>> it, all of which I managed to do successfully.
>>>>> However, a few hours after this test, the orchestrator failed for no
>>>>> apparent reason. I tried to disable and re-enable cephadm, but with
>>>>> no luck:
>>>>> 
>>>>> # ceph orch ls
>>>>> Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
>>>>> # ceph orch set backend cephadm
>>>>> Error ENOENT: Module not found
>>>>> 
>>>>> What could have happened? Is there some way to re-enable cephadm?
>>>>> 
>>>>> Thanks,
>>>>> Marco
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



