Interesting. What do you see in the MGR logs? There should be
something in there.
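
If you need a starting point, something along these lines should
surface cephadm's own errors (the daemon name and fsid below are
copied from your ceph -s output, adjust if yours differ):

# ceph log last cephadm
# cephadm logs --name mgr.ceph1.jxmtpn    # on the host running the active MGR
# journalctl -u ceph-ab471d92-14a2-11eb-ad67-525400bbdc0d@mgr.ceph1.jxmtpn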
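
Also, since 'ceph orch set backend cephadm' complained about a missing
module, it's worth checking whether the MGR still lists the module and
trying to re-enable it; roughly (standard mgr module commands, nothing
cluster-specific):

# ceph mgr module ls | grep -i cephadm
# ceph mgr module enable cephadm
# ceph orch set backend cephadm
# ceph orch ls

If the enable fails again, the traceback in the MGR log should tell
you why the module refuses to load.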
Quoting Marco Venuti <afm.itunev@xxxxxxxxx>:
Yes, this is the status:
# ceph -s
  cluster:
    id:     ab471d92-14a2-11eb-ad67-525400bbdc0d
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph0.starfleet.sns.it,ceph1,ceph3,ceph5,ceph4 (age 104m)
    mgr: ceph1.jxmtpn(active, since 17m), standbys: ceph0.starfleet.sns.it.clzhjp
    mds: starfs:1 {0=starfs.ceph4.kqwkdc=up:active} 1 up:standby
    osd: 12 osds: 10 up (since 103m), 10 in (since 2d)

  task status:
    scrub status:
        mds.starfs.ceph4.kqwkdc: idle

  data:
    pools:   4 pools, 97 pgs
    objects: 10.95k objects, 3.6 GiB
    usage:   23 GiB used, 39 GiB / 62 GiB avail
    pgs:     97 active+clean
On Sun, Oct 25, 2020 at 9:02 PM Eugen Block <eblock@xxxxxx> wrote:
Is one of the MGRs up? What is the ceph status?
Quoting Marco Venuti <afm.itunev@xxxxxxxxx>:
> Hi,
> I'm experimenting with Ceph on a (small) test cluster, running version
> 15.2.5 deployed with cephadm.
> I was trying to do some "disaster" testing, such as wiping a disk to
> simulate a hardware failure, then destroying the OSD and recreating it,
> all of which I managed to do successfully.
> However, a few hours after this test, the orchestrator failed for no
> apparent reason. I tried to disable and re-enable cephadm, but with no
> luck:
>
> # ceph orch ls
> Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
> # ceph orch set backend cephadm
> Error ENOENT: Module not found
>
> What could have happened? Is there some way to re-enable cephadm?
>
> Thanks,
> Marco
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx