Re: Problem with Ceph daemons

Is there anything useful in the rgw daemon's logs? (e.g. journalctl -xeu
ceph-35194656-893e-11ec-85c8-005056870dae@rgw.obj0.c01.gpqshk)
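
If that full unit name is a pain to work with, cephadm can pull the same
journal by daemon name (run on the host the daemon is placed on; the name
below is taken from your 'cephadm ls' output):

    cephadm logs --name rgw.obj0.c01.gpqshk

'ceph orch ps --daemon-type rgw' should also tell you whether all six
daemons are failing the same way.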

 - Adam King

On Wed, Feb 16, 2022 at 3:58 PM Ron Gage <ron@xxxxxxxxxxx> wrote:

> Hi everyone!
>
>
>
> Looks like I am having some problems with some of my ceph RGW daemons -
> they won't stay running.
>
>
>
> From 'cephadm ls':
>
>
>
> {
>         "style": "cephadm:v1",
>         "name": "rgw.obj0.c01.gpqshk",
>         "fsid": "35194656-893e-11ec-85c8-005056870dae",
>         "systemd_unit": "ceph-35194656-893e-11ec-85c8-005056870dae@rgw.obj0.c01.gpqshk",
>         "enabled": true,
>         "state": "error",
>         "service_name": "rgw.obj0",
>         "ports": [
>             80
>         ],
>         "ip": null,
>         "deployed_by": [
>             "quay.io/ceph/ceph@sha256:c3a89afac4f9c83c716af57e08863f7010318538c7e2cd911458800097f7d97d",
>             "quay.io/ceph/ceph@sha256:a39107f8d3daab4d756eabd6ee1630d1bc7f31eaa76fff41a77fa32d0b903061"
>         ],
>         "rank": null,
>         "rank_generation": null,
>         "memory_request": null,
>         "memory_limit": null,
>         "container_id": null,
>         "container_image_name": "quay.io/ceph/ceph@sha256:a39107f8d3daab4d756eabd6ee1630d1bc7f31eaa76fff41a77fa32d0b903061",
>         "container_image_id": null,
>         "container_image_digests": null,
>         "version": null,
>         "started": null,
>         "created": "2022-02-09T01:00:53.411541Z",
>         "deployed": "2022-02-09T01:00:52.338515Z",
>         "configured": "2022-02-09T01:00:53.411541Z"
>     },
>
>
>
> That whole "state": "error" bit is concerning to me - and it's contributing
> to the cluster's warning status (showing 6 cephadm daemons down).
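>
> My guess is that the fix involves something like
>
>     ceph orch daemon restart rgw.obj0.c01.gpqshk
>
> but I'd rather understand why the daemons keep dying than just restart
> them blindly.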
>
>
>
> Can I get a hint or two on how to fix this?
>
>
> Thanks!
>
>
>
> Ron Gage
>
> Westland, MI
>
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


