VolumeGroup must have a non-empty name / 17.2.5

Hi,

I updated from Pacific 16.2.10 to Quincy 17.2.5 and the orchestrated upgrade went perfectly. Very impressive.

One host, however, started throwing a cephadm warning after the upgrade.

2023-01-07 11:17:50,080 7f0b26c8ab80 INFO Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 -e NODE_NAME=kelli.domain.name -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/404b94ab-b4d6-4218-9a4e-ecb8899108ca:/var/run/ceph:z -v /var/log/ceph/404b94ab-b4d6-4218-9a4e-ecb8899108ca:/var/log/ceph:z -v /var/lib/ceph/404b94ab-b4d6-4218-9a4e-ecb8899108ca/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/404b94ab-b4d6-4218-9a4e-ecb8899108ca/selinux:/sys/fs/selinux:ro -v /:/rootfs -v /tmp/ceph-tmpltrnmxf8:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 inventory --format=json-pretty --filter-for-batch
2023-01-07 11:17:50,081 7f0b26c8ab80 INFO /usr/bin/podman: stderr Traceback (most recent call last):
2023-01-07 11:17:50,081 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/sbin/ceph-volume", line 11, in <module>
2023-01-07 11:17:50,081 7f0b26c8ab80 INFO /usr/bin/podman: stderr load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
2023-01-07 11:17:50,081 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
2023-01-07 11:17:50,081 7f0b26c8ab80 INFO /usr/bin/podman: stderr self.main(self.argv)
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr return f(*a, **kw)
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr terminal.dispatch(self.mapper, subcommand_args)
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr instance.main()
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 53, in main
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr with_lsm=self.args.with_lsm))
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 39, in __init__
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr all_devices_vgs = lvm.get_all_devices_vgs()
2023-01-07 11:17:50,082 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 797, in get_all_devices_vgs
2023-01-07 11:17:50,083 7f0b26c8ab80 INFO /usr/bin/podman: stderr return [VolumeGroup(**vg) for vg in vgs]
2023-01-07 11:17:50,083 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 797, in <listcomp>
2023-01-07 11:17:50,083 7f0b26c8ab80 INFO /usr/bin/podman: stderr return [VolumeGroup(**vg) for vg in vgs]
2023-01-07 11:17:50,083 7f0b26c8ab80 INFO /usr/bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 517, in __init__
2023-01-07 11:17:50,083 7f0b26c8ab80 INFO /usr/bin/podman: stderr raise ValueError('VolumeGroup must have a non-empty name')
2023-01-07 11:17:50,083 7f0b26c8ab80 INFO /usr/bin/podman: stderr ValueError: VolumeGroup must have a non-empty name
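
If I'm reading the traceback right, get_all_devices_vgs() in ceph_volume/api/lvm.py builds a VolumeGroup object per device it gets back from LVM, and the VolumeGroup constructor refuses an empty vg_name. Below is a rough sketch of how I picture that path failing; it is not the actual ceph-volume code, and the exact pvs flags and report fields are my guesses.

# Simplified sketch of what I understand get_all_devices_vgs() to be doing;
# NOT the real ceph-volume source, and the pvs flags/report fields below are
# my guesses.
import subprocess

FIELDS = ['vg_name', 'pv_count', 'lv_count', 'vg_attr',
          'vg_extent_count', 'vg_free_count', 'vg_extent_size']

def get_all_devices_vgs_sketch():
    out = subprocess.check_output(
        ['pvs', '--noheadings', '--readonly', '--units=b', '--nosuffix',
         '--separator=;', '-o', ','.join(FIELDS)],
        universal_newlines=True)
    vgs = [dict(zip(FIELDS, (v.strip() for v in line.split(';'))))
           for line in out.splitlines() if line.strip()]
    for vg in vgs:
        # A PV that belongs to no VG reports an empty vg_name, and that empty
        # string is exactly what the constructor rejects with
        # "VolumeGroup must have a non-empty name".
        if not vg['vg_name']:
            raise ValueError('VolumeGroup must have a non-empty name')
    return vgs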

This host is the only one with 14 drives that aren't currently in use, and I'm guessing that's why it's hitting this error. The drives may have been used previously in a cluster (maybe not this same cluster); I don't know.
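
For what it's worth, a quick way to look for an orphaned PV like that (just a sketch; running pvs by hand gives the same answer) would be something along these lines:

# List every PV and flag the ones that report no VG; if my guess is right,
# one of those 14 unused drives should show up here.
import subprocess

out = subprocess.check_output(
    ['pvs', '--noheadings', '--separator=;', '-o', 'pv_name,vg_name'],
    universal_newlines=True)
for line in out.splitlines():
    if not line.strip():
        continue
    pv_name, vg_name = (v.strip() for v in line.split(';'))
    if not vg_name:
        print('%s: PV with no VG (probably what trips ceph-volume)' % pv_name)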

Any suggestions for what to try to get past this issue?

peter

 
Peter Eisch
DevOps Manager
peter.eisch@xxxxxxxxxxxxxxx
T: 1.612.445.5135
