Hi all,
we still cannot add any disk that sits behind multipath. We tried an OSD
spec in YAML, and ceph orch daemon add osd with mpath, dm-X and sdX
devices (for sdX we disabled the multipath daemon and flushed the
multipath table).
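For the sdX attempt, taking multipath out of the picture looked roughly
like this:

systemctl disable --now multipathd   # stop and disable the multipath daemon
multipath -F                         # flush all unused multipath device maps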
Do you have any idea?
ceph orch daemon add osd serverX:/dev/mapper/mpathm
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host
--net=host --entrypoint /usr/sbin/ceph-volume --privileged
--group-add=disk -e CONTAINER_IMAGE=quay.io/ceph/ceph:v15 -e
NODE_NAME=serverX -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v
/var/run/ceph/69748548-7ba4-11ec-83c5-3cfdfec3517c:/var/run/ceph:z -v
/var/log/ceph/69748548-7ba4-11ec-83c5-3cfdfec3517c:/var/log/ceph:z -v
/var/lib/ceph/69748548-7ba4-11ec-83c5-3cfdfec3517c/crash:/var/lib/ceph/crash:z
-v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v
/run/lock/lvm:/run/lock/lvm -v
/tmp/ceph-tmpu04_fcc9:/etc/ceph/ceph.conf:z -v
/tmp/ceph-tmpbmrjdlv2:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
quay.io/ceph/ceph:v15 lvm batch --no-auto /dev/mapper/mpathm --yes
--no-systemd
2022-01-24T18:39:08.390014+0100 mgr.serverX.jxbuay [ERR] _Promise failed
Traceback (most recent call last):
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 294, in
_finalize
next_result = self._on_complete(self._value)
File "/usr/share/ceph/mgr/cephadm/module.py", line 115, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/usr/share/ceph/mgr/cephadm/module.py", line 1677, in create_osds
return self.osd_service.create_from_spec(drive_group)
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 51, in
create_from_spec
ret = create_from_spec_one(self.prepare_drivegroup(drive_group))
File "/usr/share/ceph/mgr/cephadm/utils.py", line 65, in
forall_hosts_wrapper
return CephadmOrchestrator.instance._worker_pool.map(do_work, vals)
File "/lib64/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/lib64/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
File "/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/usr/share/ceph/mgr/cephadm/utils.py", line 59, in do_work
return f(*arg)
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 47, in
create_from_spec_one
host, cmd, replace_osd_ids=osd_id_claims.get(host, []),
env_vars=env_vars
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 67, in
create_single_host
code, '\n'.join(err)))
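In case someone wants to reproduce this outside the orchestrator, the
same ceph-volume call can be run by hand on the host via cephadm (a
sketch; --report only prints what batch would do, without creating
anything):

cephadm ceph-volume -- lvm batch --no-auto /dev/mapper/mpathm --report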
ceph orch daemon add osd serverX:/dev/dm-19
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1212, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 140, in
handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 320, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 102, in
<lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 91, in
wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/module.py", line 781, in
_daemon_add_osd
raise_if_exception(completion)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 642, in
raise_if_exception
raise e
RuntimeError: cephadm exited with an error code: 1,
stderr:/usr/bin/podman: stderr --> passed data devices: 0 physical, 1 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> IndexError: list index out of range
Traceback (most recent call last):
File "<stdin>", line 6251, in <module>
File "<stdin>", line 1359, in _infer_fsid
File "<stdin>", line 1442, in _infer_image
File "<stdin>", line 3713, in command_ceph_volume
File "<stdin>", line 1121, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host
--net=host --entrypoint /usr/sbin/ceph-volume --privileged
--group-add=disk -e CONTAINER_IMAGE=quay.io/ceph/ceph:v15 -e
NODE_NAME=serverX -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v
/var/run/ceph/69748548-7ba4-11ec-83c5-3cfdfec3517c:/var/run/ceph:z -v
/var/log/ceph/69748548-7ba4-11ec-83c5-3cfdfec3517c:/var/log/ceph:z -v
/var/lib/ceph/69748548-7ba4-11ec-83c5-3cfdfec3517c/crash:/var/lib/ceph/crash:z
-v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v
/run/lock/lvm:/run/lock/lvm -v
/tmp/ceph-tmpthn_t0il:/etc/ceph/ceph.conf:z -v
/tmp/ceph-tmprygbv15w:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
quay.io/ceph/ceph:v15 lvm batch --no-auto /dev/dm-19 --yes --no-systemd
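If ceph-volume's own view of the node would help with diagnosis, we can
also post its inventory, e.g.:

cephadm ceph-volume -- inventory   # lists devices, availability, and rejection reasons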
Note: the Ceph cluster is newly installed; the servers run CentOS 8 Stream.
Thank you
Regards,
Michal
On 12/28/21 2:31 PM, Michal Strnad wrote:
Hi David,
I can't find any special characters, and the file command reports plain
ASCII text:
file osd-spec-serverX.yml
osd-spec-serverX.yml: ASCII text
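For completeness, a scan for non-ASCII bytes (this assumes GNU grep with
-P support) also comes back empty:

grep -nP '[^\x00-\x7F]' osd-spec-serverX.yml   # prints any line containing a byte outside ASCII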
The problem is probably elsewhere. Does anyone use multipath with cephadm?
Thank you
--
Michal Strnad
On 12/24/21 12:47 PM, David Caro wrote:
I did not look very deeply, but judging by the last log there seem to be
some UTF characters somewhere (a Greek phi?) and the code is not
handling them well when logging, trying to use ASCII.
On Thu, 23 Dec 2021, 19:02 Michal Strnad <michal.strnad@xxxxxxxxx> wrote:
Hi all.
We have a problem using disks accessible via multipath. We are using
cephadm for deployment, the Pacific release in containers, CentOS 8
Stream on the servers, and the following LVM configuration.
devices {
    multipath_component_detection = 1
}
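A quick way to confirm the option is actually picked up (assuming
lvmconfig is available on the host):

lvmconfig devices/multipath_component_detection   # prints the currently active value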
We tried several methods.
1.) Direct approach.
cephadm shell ceph orch daemon add osd serverX:/dev/mapper/mpatha
Errors are attached in 1.output file.
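For what it's worth, the orchestrator's view of the host's devices can
be checked beforehand with roughly:

ceph orch device ls serverX   # lists the devices cephadm considers usable for OSDs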
2.) Via an OSD specification that uses mpathX devices.
service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/mapper/mpathaj
      - /dev/mapper/mpathan
      - /dev/mapper/mpatham
  db_devices:
    paths:
      - /dev/sdc
  encrypted: true
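The spec was applied the usual way, roughly:

ceph orch apply -i osd-spec-serverX.yml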
Errors are attached in 2.output file.
3.) Via an OSD specification that uses dm-X devices.
service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/dm-1
      - /dev/dm-2
      - /dev/dm-3
      - /dev/dm-X
  db_devices:
    size: ':2TB'
  encrypted: true
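Same procedure here; on versions that support --dry-run, the planned OSD
layout can be previewed without deploying anything:

ceph orch apply -i osd-spec-serverX.yml --dry-run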
Errors are attached in 3.output file.
What is the right method for multipath deployments? I didn't find much
on this topic.
Thank you
Michal
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx