Re: Disk device path changed - cephadm failed to apply osd service

But that could be done easily like this:

service_type: osd
service_id: ssd-db
service_name: osd.ssd-db
placement:
  hosts:
  - storage01
  - storage02
...
spec:
  block_db_size: 64G
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  filter_logic: AND
  objectstore: bluestore

Anyway, I would expect that fixing the drivegroup config would fix your issue, but I'm not sure either.
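
Either way, it's worth sanity-checking the spec before it touches any disks; a dry run is cheap (just a sketch, the file name is an example):

ceph orch apply -i osd-ssd-db.yaml --dry-run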

Quoting Kilian Ries <mail@xxxxxxxxxxxxxx>:

Yes, I need specific device paths because all HDDs / SSDs are the same size / same vendor etc. I group multiple HDDs with a dedicated SSD for caching, for example:


spec:
  data_devices:
    paths:
    - /dev/sdh
    - /dev/sdi
    - /dev/sdj
    - /dev/sdk
    - /dev/sdl
  db_devices:
    paths:
    - /dev/sdf
  filter_logic: AND
  objectstore: bluestore

________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Wednesday, August 2, 2023 08:13:41
To: ceph-users@xxxxxxx
Subject: Re: Disk device path changed - cephadm failed to apply osd service

Do you really need device paths in your configuration? You could use
other criteria like disk size, vendor, the rotational flag, etc. If you
really want device paths, you'll probably need to ensure they're
persistent across reboots via udev rules.
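
Something along these lines should do it (an untested sketch; the serial number is a placeholder you'd take from udevadm info on the real disk):

# /etc/udev/rules.d/99-ceph-disks.rules
SUBSYSTEM=="block", ENV{ID_SERIAL}=="SAMSUNG_MZ7LH960_S1234567", SYMLINK+="ceph-db-ssd0"

That said, udev already maintains stable links under /dev/disk/by-id/ and /dev/disk/by-path/, so pointing the spec at one of those might be enough without custom rules.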

Quoting Kilian Ries <mail@xxxxxxxxxxxxxx>:

Hi,


it seems that after a reboot / OS update my disk labels / device paths
may have changed. Since then I get an error like this:



CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.osd-12-22_hdd-2



###


RuntimeError: cephadm exited with an error code: 1, stderr: Non-zero exit code 1 from /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:9e2fd45a080aea67d1935d7d9a9025b6db2e8be9173186e068a79a0da5a54ada -e NODE_NAME=ceph-osd07.intern -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=osd-12-22_hdd-2 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/01578d80-6c97-46ba-9327-cb2b13980916:/var/run/ceph:z -v /var/log/ceph/01578d80-6c97-46ba-9327-cb2b13980916:/var/log/ceph:z -v /var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp2cvmr5lf:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpb38cuw7q:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:9e2fd45a080aea67d1935d7d9a9025b6db2e8be9173186e068a79a0da5a54ada lvm batch --no-auto /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq --db-devices /dev/sdg --yes --no-systemd

/bin/docker: stderr Traceback (most recent call last):
/bin/docker: stderr   File "/usr/sbin/ceph-volume", line 11, in <module>
/bin/docker: stderr     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
/bin/docker: stderr     self.main(self.argv)
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/bin/docker: stderr     return f(*a, **kw)
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
/bin/docker: stderr     terminal.dispatch(self.mapper, subcommand_args)
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/bin/docker: stderr     instance.main()
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
/bin/docker: stderr     terminal.dispatch(self.mapper, self.argv)
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 192, in dispatch
/bin/docker: stderr     instance = mapper.get(arg)(argv[count:])
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 348, in __init__
/bin/docker: stderr     self.args = parser.parse_args(argv)
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 1734, in parse_args
/bin/docker: stderr     args, argv = self.parse_known_args(args, namespace)
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 1766, in parse_known_args
/bin/docker: stderr     namespace, args = self._parse_known_args(args, namespace)
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 1954, in _parse_known_args
/bin/docker: stderr     positionals_end_index = consume_positionals(start_index)
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 1931, in consume_positionals
/bin/docker: stderr     take_action(action, args)
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 1824, in take_action
/bin/docker: stderr     argument_values = self._get_values(action, argument_strings)
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 2279, in _get_values
/bin/docker: stderr     value = [self._get_value(action, v) for v in arg_strings]
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 2279, in <listcomp>
/bin/docker: stderr     value = [self._get_value(action, v) for v in arg_strings]
/bin/docker: stderr   File "/usr/lib64/python3.6/argparse.py", line 2294, in _get_value
/bin/docker: stderr     result = type_func(arg_string)
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 116, in __call__
/bin/docker: stderr     return self._format_device(self._is_valid_device())
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 127, in _is_valid_device
/bin/docker: stderr     super()._is_valid_device(raise_sys_exit=False)
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 104, in _is_valid_device
/bin/docker: stderr     super()._is_valid_device()
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 69, in _is_valid_device
/bin/docker: stderr     super()._is_valid_device()
/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/arg_validators.py", line 47, in _is_valid_device
/bin/docker: stderr     raise RuntimeError("Device {} has partitions.".format(self.dev_path))
/bin/docker: stderr RuntimeError: Device /dev/sdq has partitions.

Traceback (most recent call last):
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 9309, in <module>
    main()
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 9297, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 1941, in _infer_config
    return func(ctx)
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 1872, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 1969, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 1859, in _validate_fsid
    return func(ctx)
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 5366, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd())
  File "/var/lib/ceph/01578d80-6c97-46ba-9327-cb2b13980916/cephadm.0317efb4d3a353d5a77e82f4a4f52582f06970d6aba66473daecf92e26ee3a51", line 1661, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))


###



/dev/sdg is currently my boot device, which was formerly /dev/sda.
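
A quick, generic way to see how the kernel reshuffled the names is to list the block devices together with their serials (plain lsblk, nothing cephadm-specific):

lsblk -o NAME,SIZE,TYPE,SERIAL,MODEL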


Is it safe to edit the ceph orch YAML file and change the device
paths to the new names? Like this:


ceph orch ls --service_name=<service-name> --export > myservice.yaml


vi myservice.yaml (change device paths in spec -> data_devices ->
paths | db_devices -> paths)


ceph orch apply -i myservice.yaml [--dry-run]



Is that ok / expected behaviour? Or is there a better way?
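
I was also wondering whether I could avoid this recurring by pointing the spec at the stable udev links instead of the raw device nodes, something like this (the IDs below are placeholders for whatever ls -l /dev/disk/by-id/ shows on the host, and I'd verify with --dry-run first that ceph-volume resolves the symlinks):

spec:
  data_devices:
    paths:
    - /dev/disk/by-id/wwn-0x5000c500a1b2c3d4
  db_devices:
    paths:
    - /dev/disk/by-id/ata-SAMSUNG_MZ7LH960_S1234567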


However, I can see that the mgr detects new devices in the ceph orch log:


mgr.ceph-mon03.lrfomu [INF] Detected new or changed devices on ceph-osd07
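
For reference, the inventory as the orchestrator currently sees it can be re-read with (standard cephadm command, ceph-osd07 being the affected host):

ceph orch device ls ceph-osd07 --refresh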



Regards,

Kilian



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


