It looks like the problem is that the ceph-exporter code is in the quincy branch (including the cephadm binary) but not in any released quincy image yet, since 17.2.6 isn't out (our CI tests against images built from the latest quincy branch, so it hasn't hit this issue). So even pulling quay.io/ceph/ceph:v17, the latest quincy build, doesn't give you an image that supports ceph-exporter, which results in this error. Technically, the exact command you used should start working again once 17.2.6 is released, but there should definitely be some error handling here for people using recent versions of the binary from GitHub to bootstrap with older quincy images. I'll make a small patch to address it. Thanks for the heads-up.
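To illustrate the kind of gating a patch could do, here is a minimal sketch, assuming hypothetical helper names (`supports_ceph_exporter`, `default_services` are illustrative, not the actual cephadm code): skip ceph-exporter in the default service list when the pulled image's Ceph version predates 17.2.6, the first tagged quincy release expected to know the service type.

```python
def supports_ceph_exporter(ceph_version: str) -> bool:
    """Return True if an image is new enough to accept
    'orch apply ceph-exporter'.

    ceph_version is a dotted string like '17.2.5'. ceph-exporter
    landed in the quincy branch after the 17.2.5 release, so 17.2.6
    is assumed to be the first quincy release that supports it.
    """
    major, minor, patch = (int(x) for x in ceph_version.split('.')[:3])
    if major != 17:
        # Pre-quincy images never support it; post-quincy always do.
        return major > 17
    return (minor, patch) >= (2, 6)


def default_services(ceph_version: str) -> list:
    """Default bootstrap services, dropping ceph-exporter on old images."""
    services = ['mon', 'mgr', 'crash']
    if supports_ceph_exporter(ceph_version):
        services.append('ceph-exporter')
    return services
```

With an image reporting 17.2.5 (as in the log below), `default_services` would simply omit ceph-exporter rather than letting `orch apply` fail.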
On Wed, Mar 15, 2023 at 6:04 AM Tobias Fischer <tobias.fischer@xxxxxxxxx> wrote:
Hi Adam,
I noticed a bug using cephadm quincy:
When using cephadm from
https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm, as described
on https://docs.ceph.com/en/quincy/cephadm/install/, I get the following error:
root@ceph-01:~# cephadm bootstrap --ssh-user cephadm --ssh-public-key
/home/cephadm/.ssh/cephadm.pub --ssh-private-key
/home/cephadm/.ssh/cephadm --mon-ip 10.82.71.11
Verifying ssh connectivity ...
Adding key to cephadm@localhost authorized_keys...
key already in cephadm@localhost authorized_keys...
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.3.1 is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: e460fb6a-c315-11ed-8abc-02000a52470b
Verifying IP 10.82.71.11 port 3300 ...
Verifying IP 10.82.71.11 port 6789 ...
Mon IP `10.82.71.11` is in CIDR network `10.82.71.0/24`
Mon IP `10.82.71.11` is in CIDR network `10.82.71.0/24`
Internal network (--cluster-network) has not been provided, OSD
replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.5
(98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 10.82.71.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Using provided ssh keys...
Adding key to cephadm@localhost authorized_keys...
key already in cephadm@localhost authorized_keys...
Adding host ceph-01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Non-zero exit code 22 from /usr/bin/podman run --rm --ipc=host
--stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e
CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-01 -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/e460fb6a-c315-11ed-8abc-02000a52470b:/var/log/ceph:z -v
/tmp/ceph-tmpdj4jpaw8:/etc/ceph/ceph.client.admin.keyring:z -v
/tmp/ceph-tmpqrstws24:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
apply ceph-exporter
/usr/bin/ceph: stderr Error EINVAL: Usage:
/usr/bin/ceph: stderr ceph orch apply -i <yaml spec> [--dry-run]
/usr/bin/ceph: stderr ceph orch apply <service_type>
[--placement=<placement_string>] [--unmanaged]
/usr/bin/ceph: stderr
Traceback (most recent call last):
File "/usr/local/bin/cephadm", line 9653, in <module>
main()
File "/usr/local/bin/cephadm", line 9641, in main
r = ctx.func(ctx)
^^^^^^^^^^^^^
File "/usr/local/bin/cephadm", line 2205, in _default_image
return func(ctx)
^^^^^^^^^
File "/usr/local/bin/cephadm", line 5774, in command_bootstrap
prepare_ssh(ctx, cli, wait_for_mgr_restart)
File "/usr/local/bin/cephadm", line 5275, in prepare_ssh
cli(['orch', 'apply', t])
File "/usr/local/bin/cephadm", line 5714, in cli
).run(timeout=timeout, verbosity=verbosity)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/cephadm", line 4144, in run
out, _, _ = call_throws(self.ctx, self.run_cmd(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/cephadm", line 1853, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host
--stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e
CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-01 -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/e460fb6a-c315-11ed-8abc-02000a52470b:/var/log/ceph:z -v
/tmp/ceph-tmpdj4jpaw8:/etc/ceph/ceph.client.admin.keyring:z -v
/tmp/ceph-tmpqrstws24:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
apply ceph-exporter
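The traceback shows bootstrap aborting because `prepare_ssh` lets the `RuntimeError` from `cli(['orch', 'apply', t])` propagate. A hedged sketch of more tolerant handling (the function and its behavior are illustrative assumptions, not the actual cephadm fix) would catch the failure per service, warn, and continue:

```python
def apply_default_services(cli, service_types):
    """Run 'orch apply <type>' for each default service; log and skip
    any that fail (e.g. an older image that does not recognize the
    ceph-exporter service type) instead of aborting bootstrap."""
    failed = []
    for t in service_types:
        try:
            cli(['orch', 'apply', t])
        except RuntimeError as err:
            print('Deploying %s service failed, skipping: %s' % (t, err))
            failed.append(t)
    return failed
```

Under this scheme the 17.2.5 image would still get mon, mgr, and crash deployed, and bootstrap would finish with a warning about ceph-exporter.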
If I skip the monitoring stack with "--skip-monitoring-stack",
bootstrapping works as expected.
Using cephadm from
https://github.com/ceph/ceph/raw/v17.2.5/src/cephadm/cephadm ("v17.2.5"
instead of "quincy") also works as expected.
If you have any questions, please get in touch. Thanks!
BR
Tobi
--
Best regards
Tobias Fischer
Head of Ceph
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: tobias.fischer@xxxxxxxxx
We are hiring: https://www.clyso.com/jobs/
---
Managing Director: Dipl. Inf. (FH) Joachim Kraftmayer
Registered office: Utting am Ammersee
Commercial register at the district court: Augsburg
Commercial register number: HRB 25866
VAT ID no.: DE275430677
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx