Re: Unable to add OSD

I see you mentioned apparmor and MongoDB, so I guess there's a chance you
already found https://tracker.ceph.com/issues/66389 (your traceback looks the
same). Other than making sure the relevant apparmor file it's parsing doesn't
contain entries with spaces, or manually applying the fix from
https://github.com/ceph/ceph/pull/57955/files#diff-5acc9785d0e913430134b549d6695381e922ad7771aa4b5ed3deecd3e18ef9dbR722-R724
(it's in the squid branch at this point, but hasn't made it into any actual
releases), I don't remember any workarounds being found.
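
For reference, the "too many values to unpack (expected 2)" in _fetch_apparmor
looks like a strict two-way split over the apparmor profiles list, which blows
up as soon as a profile name itself contains a space. Below is a rough,
hypothetical sketch of that pattern and of a quick way to check your host; the
path and function names here are my own illustration, not the actual cephadm
code, so treat it as a demonstration of the idea behind the linked fix rather
than the fix itself:

# Hypothetical illustration only -- not the actual cephadm code. It assumes
# /sys/kernel/security/apparmor/profiles holds one "profile-name (mode)" entry
# per line, which is what the traceback suggests cephadm iterates over.

PROFILES = "/sys/kernel/security/apparmor/profiles"  # assumed path


def parse_strict(line: str) -> tuple:
    # The pattern implied by the traceback: unpacking into exactly two parts.
    # An entry whose name contains a space raises
    # "ValueError: too many values to unpack (expected 2)".
    item, mode = line.split(' ')
    return item, mode.strip('()')


def parse_tolerant(line: str) -> tuple:
    # In the spirit of the linked PR: split once, from the right, so spaces
    # inside the profile name no longer matter.
    item, mode = line.rsplit(' ', 1)
    return item, mode.strip('()')


if __name__ == '__main__':
    # Quick check: list any entries that would trip the strict parser.
    with open(PROFILES) as f:
        for raw in f:
            entry = raw.strip()
            if entry and entry.count(' ') > 1:
                print('profile name with spaces:', entry)

If that prints anything (the MongoDB-related profile from the tracker issue is
the usual example), that entry is most likely what cephadm is choking on.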

On Wed, Nov 6, 2024 at 5:54 AM tpDev Tester <tpdev.tester@xxxxxxxxx> wrote:

> Hi,
>
> I'm trying to add OSDs to my new cluster (Ubuntu 24.04 + podman). Four
> devices are listed as available:
>
>
> root@ceph-1:~#  ceph-volume inventory
>
> Device Path     Size        Device nodes  rotates  available  Model name
> /dev/nvme0n1    1.82 TB     nvme0n1       False    True       KINGSTON SNV2S2000G
> /dev/sda        1.82 TB     sda           True     True       WDC WD20EFZX-68A
> /dev/sdb        16.37 TB    sdb           True     True       TOSHIBA MG09ACA1
> /dev/sdc        931.51 GB   sdc           True     True       WDC WD10EFRX-68J
>
>
> but none of them can be added:
>
> root@ceph-1:~# ceph orch daemon add osd ceph-1:/dev/sda
> Error EINVAL: Traceback (most recent call last):
>   File "/usr/share/ceph/mgr/mgr_module.py", line 1862, in _handle_command
>     return self.handle_command(inbuf, cmd)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 184, in handle_command
>     return dispatch[cmd['prefix']].call(self, cmd, inbuf)
>   File "/usr/share/ceph/mgr/mgr_module.py", line 499, in call
>     return self.func(mgr, **kwargs)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 120, in <lambda>
>     wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 109, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/ceph/mgr/orchestrator/module.py", line 1374, in _daemon_add_osd
>     raise_if_exception(completion)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 241, in raise_if_exception
>     raise e
> RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/mon.ceph-1/config
> Traceback (most recent call last):
>   File "<frozen runpy>", line 198, in _run_module_as_main
>   File "<frozen runpy>", line 88, in _run_code
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 5579, in <module>
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 5567, in main
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 409, in _infer_config
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 324, in _infer_fsid
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 437, in _infer_image
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 311, in _validate_fsid
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 3288, in command_ceph_volume
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 918, in get_container_mounts_for_type
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/daemons/ceph.py", line 422, in get_ceph_mounts_for_type
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 760, in selinux_enabled
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 743, in kernel_security
>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 722, in _fetch_apparmor
> ValueError: too many values to unpack (expected 2)
>
> I found some hints about apparmor and MongoDB, but they do not help in
> this case.
>
> Any help appreciated.
>
>
> Kind regards
>
> Thomas
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



