Re: Unable to add OSD

1. Make sure you have enough RAM on ceph-1 and that "df -h /" shows the system disk is less than 70% full (managed services eat a LOT of disk space!)

2. Check your SELinux audit log to make sure nothing's being blocked.

3. Check /var/lib/ceph and /var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10 for existing OSD directories. Cross-reference this against a "ceph osd tree" inventory of ceph-1. You can't define something that's already defined, albeit defectively.

4. Don't forget to zap /dev/sda (e.g. with "ceph-volume lvm zap" or "ceph orch device zap") to clear out any annoying pre-existing infrastructure! Rough versions of all four checks are sketched below.
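
Something along these lines should cover the four checks; this is only a sketch, so adjust the fsid directory, hostname and device to your cluster, and note that the zap destroys any data on the device:

  df -h /                                      # step 1: system-disk usage
  free -h                                      # step 1: available RAM
  ausearch -m avc -ts recent                   # step 2: recent SELinux denials, if SELinux is in use
  ls /var/lib/ceph/ /var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/   # step 3: leftover osd.* directories
  ceph osd tree                                # step 3: what the cluster itself thinks lives on ceph-1
  ceph orch device zap ceph-1 /dev/sda --force # step 4: wipe the device (destructive!)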


  Regards, Tim

On 11/6/24 05:53, tpDev Tester wrote:
Hi,

I'm trying to add OSDs to my new cluster (Ubuntu 24.04 + podman). Four devices are listed as available:


root@ceph-1:~#  ceph-volume inventory

Device Path               Size         Device nodes    rotates available Model name
/dev/nvme0n1              1.82 TB      nvme0n1         False   True      KINGSTON SNV2S2000G
/dev/sda                  1.82 TB      sda             True    True      WDC WD20EFZX-68A
/dev/sdb                  16.37 TB     sdb             True    True      TOSHIBA MG09ACA1
/dev/sdc                  931.51 GB    sdc             True    True      WDC WD10EFRX-68J


but none of them can be added:

root@ceph-1:~# ceph orch daemon add osd ceph-1:/dev/sda
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1862, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 184, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 499, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 120, in <lambda>     wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 109, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 1374, in _daemon_add_osd
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 241, in raise_if_exception
    raise e
RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/mon.ceph-1/config
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 5579, in <module>   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 5567, in main   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 409, in _infer_config   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 324, in _infer_fsid   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 437, in _infer_image   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 311, in _validate_fsid   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 3288, in command_ceph_volume   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 918, in get_container_mounts_for_type   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/daemons/ceph.py", line 422, in get_ceph_mounts_for_type   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 760, in selinux_enabled   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 743, in kernel_security   File "/var/lib/ceph/16a56cdf-9bb4-11ef-b530-001e06456e10/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 722, in _fetch_apparmor
ValueError: too many values to unpack (expected 2)

I found some hints relating to AppArmor and MongoDB, but they did not help in this case.
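
The traceback dies in cephadm's _fetch_apparmor with "too many values to unpack (expected 2)", which looks like an AppArmor profile whose name itself contains a space. Assuming cephadm parses the usual /sys/kernel/security/apparmor/profiles file (one "name (mode)" entry per line), any offending profile should show up with something like:

  awk 'NF > 2' /sys/kernel/security/apparmor/profiles   # print profiles whose names contain extra spaces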

Any help appreciated.


Kind regards

Thomas



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



