Re: Conversion to Cephadm


 



Hi,

there are some defaults for the container image when cephadm is used. If you didn't change anything, you probably get docker.io... when running:

ceph config dump | grep image
global basic container_image docker.io/ceph/ceph@sha256...

That output is from a Pacific one-node test cluster. If you want to set it to quay.io you can change it like this:

# ceph config set global container_image quay.io/.../ceph-something
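
For example, with a concrete tag (just a hypothetical value here, pick the release you actually want to run), followed by a quick check that the new default is in place:

# ceph config set global container_image quay.io/ceph/ceph:v16.2.9   # example tag only
# ceph config dump | grep container_image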

I successfully converted to cephadm after upgrading the cluster to octopus.
I am on CentOS 7 and am attempting to convert some of the nodes over to
rocky, but when I try to add a rocky node in and start the mgr or mon
service, it tries to start an octopus container and the service comes back
with an error.  Is there a way to force it to start a quincy container on
the new host?

Just to be clear: you upgraded to Octopus successfully, then tried to add new nodes with a newer OS, and it tries to start an Octopus container, but that's expected, isn't it? Can you share more details about which errors occur when you try to start the Octopus containers?
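
For reference, a single daemon can also be redeployed with an explicit image, which is one way to pin the image on a new host (just a sketch, assuming a Pacific or newer orchestrator; the daemon name is taken from your log below, and the tag is only an example, since mixing releases across the cluster isn't something you'd want to keep long-term):

# ceph orch daemon redeploy mon.tpixmon5 quay.io/ceph/ceph:v17.2.0   # example image tag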


Quoting Brent Kennedy <bkennedy@xxxxxxxxxx>:

I successfully converted to cephadm after upgrading the cluster to octopus.
I am on CentOS 7 and am attempting to convert some of the nodes over to
rocky, but when I try to add a rocky node in and start the mgr or mon
service, it tries to start an octopus container and the service comes back
with an error.  Is there a way to force it to start a quincy container on
the new host?



I tried to start an upgrade, which did deploy the managers to the new
hosts, but it failed converting the monitors and now one is dead (a CentOS
7 one).  It seems it can spin up quincy containers on the new nodes, but
because the upgrade failed, it's still trying to deploy the octopus ones to
the new node.



Cephadm log on new node:



2022-06-25 21:51:34,427 7f4748727b80 DEBUG stat: Copying blob
sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621

2022-06-25 21:51:34,647 7f4748727b80 DEBUG stat: Copying blob
sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621

2022-06-25 21:51:34,652 7f4748727b80 DEBUG stat: Copying blob
sha256:731c3beff4deece7d4e54bc26ecf6d99988b19ea8414524277d83bc5a5d6eb70

2022-06-25 21:51:59,006 7f4748727b80 DEBUG stat: Copying config
sha256:2cf504fded3980c76b59a354fca8f301941f86e369215a08752874d1ddb69b73

2022-06-25 21:51:59,008 7f4748727b80 DEBUG stat: Writing manifest to image
destination

2022-06-25 21:51:59,008 7f4748727b80 DEBUG stat: Storing signatures

2022-06-25 21:51:59,239 7f4748727b80 DEBUG stat: 167 167

2022-06-25 21:51:59,703 7f4748727b80 DEBUG /usr/bin/ceph-mon: too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]

2022-06-25 21:51:59,797 7f4748727b80 INFO Non-zero exit code 1 from
/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host
--entrypoint /usr/bin/ceph-mon --init -e
CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=tpixmon5 -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/33ca8009-79d6-45cf-a67e-9753ab4dc861:/var/log/ceph:z -v
/var/lib/ceph/33ca8009-79d6-45cf-a67e-9753ab4dc861/mon.tpixmon5:/var/lib/cep
h/mon/ceph-tpixmon5:z -v /tmp/ceph-tmp7xmra8lk:/tmp/keyring:z -v
/tmp/ceph-tmp7mid2k57:/tmp/config:z docker.io/ceph/ceph:v15 --mkfs -i
tpixmon5 --fsid 33ca8009-79d6-45cf-a67e-9753ab4dc861 -c /tmp/config
--keyring /tmp/keyring --setuser ceph --setgroup ceph
--default-log-to-file=false --default-log-to-journald=true
--default-log-to-stderr=false --default-mon-cluster-log-to-file=false
--default-mon-cluster-log-to-journald=true
--default-mon-cluster-log-to-stderr=false

2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]



Podman Images:

REPOSITORY           TAG         IMAGE ID      CREATED        SIZE

quay.io/ceph/ceph    <none>      e1d6a67b021e  2 weeks ago    1.32 GB

docker.io/ceph/ceph  v15         2cf504fded39  13 months ago  1.05 GB



I don't even know what that top one is because it's not tagged and it keeps
pulling it.  Why would it be pulling a docker.io image (is that the only place
to get octopus images)?



I also tried to force upgrade the older failed monitor, but the cephadm tool
says that the OS is too old.  It's just odd to me that we say "go to
containers because the OS won't matter" and then it actually still matters,
because podman versions are tied to newer images.



-Brent

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


