From the error message:

    2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many arguments: [--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]

it seems that you are not using the cephadm that corresponds to your Ceph version. Please try to get cephadm for Octopus.
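A minimal sketch of fetching the release-matched script (assuming the usual standalone-script path in ceph.git; verify the download before running it):

    # Pull the standalone cephadm script pinned to the octopus branch
    curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    chmod +x cephadm
    ./cephadm version    # should report 15.2.x (octopus)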
-Redo

On Sun, Jun 26, 2022 at 4:07 AM Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:
> I successfully converted to cephadm after upgrading the cluster to Octopus.
> I am on CentOS 7 and am attempting to convert some of the nodes over to
> Rocky, but when I try to add a Rocky node and start the mgr or mon service,
> it tries to start an Octopus container and the service comes back with an
> error. Is there a way to force it to start a Quincy container on the new
> host?
>
> I tried to start an upgrade, which did deploy the manager daemons to the
> new hosts, but it failed converting the monitors and now one is dead (a
> CentOS 7 one). It seems it can spin up Quincy containers on the new nodes,
> but because the upgrade failed, it is still trying to deploy the Octopus
> ones to the new node.
>
> Cephadm log on the new node:
>
> 2022-06-25 21:51:34,427 7f4748727b80 DEBUG stat: Copying blob sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621
> 2022-06-25 21:51:34,647 7f4748727b80 DEBUG stat: Copying blob sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621
> 2022-06-25 21:51:34,652 7f4748727b80 DEBUG stat: Copying blob sha256:731c3beff4deece7d4e54bc26ecf6d99988b19ea8414524277d83bc5a5d6eb70
> 2022-06-25 21:51:59,006 7f4748727b80 DEBUG stat: Copying config sha256:2cf504fded3980c76b59a354fca8f301941f86e369215a08752874d1ddb69b73
> 2022-06-25 21:51:59,008 7f4748727b80 DEBUG stat: Writing manifest to image destination
> 2022-06-25 21:51:59,008 7f4748727b80 DEBUG stat: Storing signatures
> 2022-06-25 21:51:59,239 7f4748727b80 DEBUG stat: 167 167
> 2022-06-25 21:51:59,703 7f4748727b80 DEBUG /usr/bin/ceph-mon: too many arguments: [--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]
> 2022-06-25 21:51:59,797 7f4748727b80 INFO Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-mon --init -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=tpixmon5 -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/33ca8009-79d6-45cf-a67e-9753ab4dc861:/var/log/ceph:z -v /var/lib/ceph/33ca8009-79d6-45cf-a67e-9753ab4dc861/mon.tpixmon5:/var/lib/ceph/mon/ceph-tpixmon5:z -v /tmp/ceph-tmp7xmra8lk:/tmp/keyring:z -v /tmp/ceph-tmp7mid2k57:/tmp/config:z docker.io/ceph/ceph:v15 --mkfs -i tpixmon5 --fsid 33ca8009-79d6-45cf-a67e-9753ab4dc861 -c /tmp/config --keyring /tmp/keyring --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false
> 2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many arguments: [--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]
>
> Podman images:
>
> REPOSITORY            TAG     IMAGE ID      CREATED        SIZE
> quay.io/ceph/ceph     <none>  e1d6a67b021e  2 weeks ago    1.32 GB
> docker.io/ceph/ceph   v15     2cf504fded39  13 months ago  1.05 GB
>
> I don't even know what that top one is, because it's not tagged and it
> keeps pulling it. Why would it be pulling a docker.io image (the only
> place to get Octopus images)?
>
> I also tried to force-upgrade the older failed monitor, but the cephadm
> tool says that the OS is too old. It's just odd to me that we would say
> "go to containers because the OS won't matter" and then it actually still
> matters, because podman versions are tied to newer images.
>
> -Brent
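On the question of forcing a specific release onto the new host, a hedged sketch of pinning the container image explicitly (the daemon name mon.tpixmon5 and the image tags are illustrative, and mixing releases outside of a managed upgrade is generally not safe):

    # Pin the cluster-wide default image so cephadm stops pulling the stale tag
    ceph config set global container_image quay.io/ceph/ceph:v15.2.16

    # Redeploy a single daemon from an explicit image
    ceph orch daemon redeploy mon.tpixmon5 quay.io/ceph/ceph:v17.2.0

    # Or drive the whole cluster to a specific target and watch progress
    ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.0
    ceph orch upgrade status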