Re: Troubleshooting cephadm - not deploying any daemons


Hi Zach,

Try running `ceph orch apply mgr 2` or `ceph orch apply mgr
--placement="<host1> <host2>"`. Refer to this doc for more information:
<https://docs.ceph.com/en/latest/cephadm/services/#orchestrator-cli-placement-spec>
Hope it helps.
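If the spec gets saved but no daemon ever appears, the cephadm mgr module's
cluster log usually says why. A rough sketch of what I would run (hostnames
are placeholders, and the debug-level setting comes from the cephadm
troubleshooting docs):

```shell
# Apply the mgr spec by count, or pin it to specific hosts
ceph orch apply mgr 2
ceph orch apply mgr --placement="host1 host2"

# Raise cephadm's cluster-log verbosity, then read its recent log entries;
# scheduling or deployment failures should show up here
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph log last cephadm

# Compare what the orchestrator intends vs. what is actually running
ceph orch ls
ceph orch ps
```

If nothing useful shows up even at debug level, failing over the active mgr
(`ceph mgr fail`) sometimes unsticks a stalled cephadm module.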

Regards,
Dhairya

On Thu, Jun 9, 2022 at 1:59 AM Zach Heise (SSCC) <heise@xxxxxxxxxxxx> wrote:

> Our 16.2.7 cluster was deployed using cephadm from the start, but now it
> seems like deploying daemons with it is broken. Running 'ceph orch apply
> mgr --placement=2' causes '6/8/22 2:34:18 PM[INF]Saving service mgr spec
> with placement count:2' to appear in the logs, but a 2nd mgr does not
> get created.
>
> I also confirmed the same with mds daemons - using the dashboard, I
> tried creating a new set of MDS daemons "220606" count:3, but they never
> got deployed. The service type appears in the dashboard, though, just
> with no daemons deployed under it. Then I tried to delete it with the
> dashboard, and now 'ceph orch ls' outputs:
>
> NAME                       PORTS        RUNNING  REFRESHED   AGE PLACEMENT
> mds.220606                                  0/3  <deleting> 15h  count:3
>
> More detail in YAML format doesn't even give me that much information:
>
> ceph01> ceph orch ls --service_name=mds.220606 --format yaml
> service_type: mds
> service_id: '220606'
> service_name: mds.220606
> placement:
>    count: 3
> status:
>    created: '2022-06-07T03:42:57.234124Z'
>    running: 0
>    size: 3
> events:
> - 2022-06-07T03:42:57.301349Z service:mds.220606 [INFO] "service was
> created"
>
> 'ceph health detail' reports HEALTH_OK but cephadm doesn't seem to be
> doing its job. I read through the Cephadm troubleshooting page on ceph's
> website but since the daemons I'm trying to create don't even seem to
> try to spawn containers (podman ps shows the existing containers just
> fine) I don't know where to look next for logs, to see if cephadm +
> podman are trying to create new containers and failing, or not even trying.
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


