Fwd: OSD apply failing, how to stop

Hi,

I have tried to create OSDs with this config:
service_type: osd
service_id: osd_nnn1
placement:
  hosts:
    - nakidra
data_devices:
  paths:
    - /dev/sdc
    - /dev/sdd
db_devices:
  paths:
    - ceph-nvme-04/block
wal_devices:
  paths:
    - ceph-nvme-14/block

with this command:
ceph orch apply osd -i osd1.yml
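
(In hindsight, I could have previewed what the orchestrator would do before applying anything, assuming --dry-run works for OSD specs the way the docs describe:

ceph orch apply osd -i osd1.yml --dry-run

which should print the proposed OSD layout instead of deploying it.)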

Unfortunately, the system is stuck in a retry cycle:

8/30/21 2:27:00 AM
[ERR]
Failed to apply osd.osd_nakidra1 spec DriveGroupSpec(name=osd_nakidra1->placement=PlacementSpec(hosts=[HostPlacementSpec(hostname='nnn', network='', name='')]), service_id='osd_nakidra1', service_type='osd', data_devices=DeviceSelection(paths=[<ceph.deployment.inventory.Device object at 0x7faa7f4c99e8>, <ceph.deployment.inventory.Device object at 0x7faa7f4c90f0>], all=False), db_devices=DeviceSelection(paths=[<ceph.deployment.inventory.Device object at 0x7faa7f4c9400>], all=False), wal_devices=DeviceSelection(paths=[<ceph.deployment.inventory.Device object at 0x7faa7f4c9518>], all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False): cephadm exited with an error code: 1, stderr:
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -e NODE_NAME=nnn -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=osd_nnn1 -v /var/run/ceph/03d0b03e-085b-11ec-8e4b-814a39073967:/var/run/ceph:z -v /var/log/ceph/03d0b03e-085b-11ec-8e4b-814a39073967:/var/log/ceph:z -v /var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp26b6lukq:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp9unbqyia:/var/lib/ceph/bootstrap-osd/ceph.keyring:z docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb lvm batch --no-auto /dev/sdc /dev/sdd --db-devices ceph-nvme-04/block --wal-devices ceph-nvme-14/block --yes --no-systemd
/usr/bin/docker: stderr --> passed data devices: 2 physical, 0 LVM
/usr/bin/docker: stderr --> relative data size: 1.0
/usr/bin/docker: stderr --> passed block_db devices: 0 physical, 1 LVM
/usr/bin/docker: stderr --> ZeroDivisionError: integer division or modulo by zero
Traceback (most recent call last):
  File "/var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 8230, in <module>
    main()
  File "/var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 8218, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 1653, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 1737, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 4599, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd())
  File "/var/lib/ceph/03d0b03e-085b-11ec-8e4b-814a39073967/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 1453, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
Runt....

The error I see is:
stderr --> ZeroDivisionError: integer division or modulo by zero

It comes right after "passed block_db devices: 0 physical, 1 LVM", so it looks like ceph-volume may only be counting physical db devices and then dividing by that count.

What could be wrong? According to the docs, it should be possible to pass an LVM volume as the db and wal device.

How can I stop this cycle, i.e. cancel the apply command?
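
My guess from the orchestrator docs is one of the following, with osd.osd_nnn1 being the service name I assume cephadm generated from my service_id:

# remove the failing spec so cephadm stops retrying it
ceph orch rm osd.osd_nnn1

# or keep the spec but tell cephadm to stop acting on it:
# add "unmanaged: true" to osd1.yml and re-apply it
ceph orch apply osd -i osd1.yml

Is either of these the right way?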
And what is the correct way to set up OSDs with a rotational disk as the data device and NVMe as the db and wal device?
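
What I am ultimately after is something like the spec below, where whole devices are selected by their rotational flag and ceph-volume carves out the db LVs itself (the service_id here is just a placeholder; the rotational filters are what the drive group docs suggest):

service_type: osd
service_id: osd_hdd_nvme
placement:
  hosts:
    - nakidra
data_devices:
  rotational: 1
db_devices:
  rotational: 0

My understanding is that the WAL is placed on the db device automatically when no wal_devices are given, but please correct me if pre-created LVs like in my spec above are the better approach.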
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


