Re: Conversion to Cephadm

Interesting thought.  Thanks for the reply. :)

 

I have a mgr running on that same node, but that error is exactly what happened when I tried to spin up a monitor.  Based on this feedback, I went back to the node, removed the mgr instance so it had nothing on it, deleted all the images and containers, downloaded the octopus script instead, changed the repo, and reinstalled cephadm.  I then redeployed the mgr and that went just fine, but when I try to deploy the mon container/instance, it fails with the same error.  I did the same thing on the CentOS 8 Stream node that I had upgraded from CentOS 7 (out of desperation) and it worked; that node is running both the mgr and mon containers now.

Oddly enough, the new containers are running 17.2.0 even though the installed cephadm is octopus.  I had started an upgrade before everything stopped working properly, so some of the containers are quincy and some are octopus on the mgr and mon nodes.
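In case it helps anyone following along, here is a rough sketch of how the image the orchestrator hands out to new daemons can be checked and pinned.  The v17.2.0 tag is only an example target, and the commands assume the cluster is already managed by cephadm:

# Check what is installed and what the orchestrator will deploy
cephadm version                          # version of the locally installed cephadm script/package
ceph orch upgrade status                 # shows whether the interrupted upgrade is still active
ceph config dump | grep container_image  # image currently configured for new daemons

# Pin the image explicitly so newly deployed mon/mgr containers all come up on one release
ceph config set global container_image quay.io/ceph/ceph:v17.2.0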

 

One of the biggest things I see is that there seems to be no clear path from CentOS 7 to CentOS Stream 8 (or Rocky) without blowing away the machine.  During the upgrade of the CentOS 7 node, it told me the ceph and python packages might cause an issue and to remove them.  I removed them, but that wiped out any ceph configuration on the machine.  Perhaps this isn't necessary.  I am not really worried about the monitor and access nodes since they are redundant, but the OSD nodes are physical and host all the drives.  Waiting for rebuilds with a petabyte of data will be a long upgrade…
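For the OSD nodes specifically, a reinstall-and-readopt path might avoid any rebuilds, since the OSD data lives on the drives rather than the OS disk.  This is only a sketch, assuming cephadm-managed LVM OSDs and a Pacific-or-newer cluster; "osdhost1" is a placeholder hostname:

# Rough sketch: reinstall the OS on an OSD node without rebuilding its data
ceph osd set noout                    # stop data from rebalancing while the host is down
# ...reinstall the OS on osdhost1, install podman/lvm2/chrony, and authorize the
#    cluster's cephadm SSH key for root on osdhost1, then from an admin node:
ceph orch host add osdhost1
ceph cephadm osd activate osdhost1    # re-creates the OSD daemons from the existing LVM volumes (Pacific+)
ceph osd unset noout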

 

-Brent

 

From: Redouane Kachach Elhichou <rkachach@xxxxxxxxxx> 
Sent: Monday, June 27, 2022 3:10 AM
To: Brent Kennedy <bkennedy@xxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  Conversion to Cephadm

 

From the error message:

 

2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]

 

it seems that you are not using the cephadm build that corresponds to your ceph version.  Please try to get the cephadm for octopus.
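Something along these lines should fetch the octopus build of the script (this is the standard per-branch raw path on GitHub; adjust if you install cephadm from packages instead):

# Fetch the cephadm script from the octopus branch and make it executable
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
./cephadm version    # should report an octopus (15.2.x) release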

 

-Redo

 

On Sun, Jun 26, 2022 at 4:07 AM Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:

I successfully converted to cephadm after upgrading the cluster to octopus.  I am on CentOS 7 and am attempting to convert some of the nodes over to Rocky, but when I try to add a Rocky node and start the mgr or mon service, it tries to start an octopus container and the service comes back with an error.  Is there a way to force it to start a quincy container on the new host?
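One possibility, assuming the orchestrator release in use accepts an explicit image on redeploy ("mon.rockyhost1" is a placeholder daemon name), would be something like:

# Redeploy a single daemon onto a specific image (availability of the image
# argument depends on the ceph release; the daemon must already exist)
ceph orch daemon redeploy mon.rockyhost1 quay.io/ceph/ceph:v17.2.0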



I tried to start an upgrade, which did deploy the manager daemons to the new hosts, but it failed converting the monitors and now one is dead (a CentOS 7 one).  It seems it can spin up quincy containers on the new nodes, but because the upgrade failed, it is still trying to deploy the octopus ones to the new node.
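A sketch of how the half-finished upgrade could be inspected and restarted so the orchestrator converges on a single release (assuming the target release is actually reachable from the one currently running):

# Inspect, stop, and restart the stuck upgrade
ceph orch upgrade status
ceph orch upgrade stop
ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.0
ceph -W cephadm    # watch the orchestrator's progress messages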



Cephadm log on new node:



2022-06-25 21:51:34,427 7f4748727b80 DEBUG stat: Copying blob
sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621

2022-06-25 21:51:34,647 7f4748727b80 DEBUG stat: Copying blob
sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621

2022-06-25 21:51:34,652 7f4748727b80 DEBUG stat: Copying blob
sha256:731c3beff4deece7d4e54bc26ecf6d99988b19ea8414524277d83bc5a5d6eb70

2022-06-25 21:51:59,006 7f4748727b80 DEBUG stat: Copying config
sha256:2cf504fded3980c76b59a354fca8f301941f86e369215a08752874d1ddb69b73

2022-06-25 21:51:59,008 7f4748727b80 DEBUG stat: Writing manifest to image
destination

2022-06-25 21:51:59,008 7f4748727b80 DEBUG stat: Storing signatures

2022-06-25 21:51:59,239 7f4748727b80 DEBUG stat: 167 167

2022-06-25 21:51:59,703 7f4748727b80 DEBUG /usr/bin/ceph-mon: too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]

2022-06-25 21:51:59,797 7f4748727b80 INFO Non-zero exit code 1 from
/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host
--entrypoint /usr/bin/ceph-mon --init -e
CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=tpixmon5 -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/33ca8009-79d6-45cf-a67e-9753ab4dc861:/var/log/ceph:z -v
/var/lib/ceph/33ca8009-79d6-45cf-a67e-9753ab4dc861/mon.tpixmon5:/var/lib/cep
h/mon/ceph-tpixmon5:z -v /tmp/ceph-tmp7xmra8lk:/tmp/keyring:z -v
/tmp/ceph-tmp7mid2k57:/tmp/config:z docker.io/ceph/ceph:v15 --mkfs -i
tpixmon5 --fsid 33ca8009-79d6-45cf-a67e-9753ab4dc861 -c /tmp/config
--keyring /tmp/keyring --setuser ceph --setgroup ceph
--default-log-to-file=false --default-log-to-journald=true
--default-log-to-stderr=false --default-mon-cluster-log-to-file=false
--default-mon-cluster-log-to-journald=true
--default-mon-cluster-log-to-stderr=false

2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]



Podman Images:

REPOSITORY           TAG         IMAGE ID      CREATED        SIZE

quay.io/ceph/ceph     <none>      e1d6a67b021e  2 weeks ago    1.32 GB

docker.io/ceph/ceph   v15         2cf504fded39  13 months ago  1.05 GB



I don't even know what that top one is because it's not tagged, and it keeps pulling it.  Why would it be pulling a docker.io image (is that the only place to get octopus images)?
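For what it's worth, the untagged image can usually be identified from its metadata (e1d6a67b021e is the image ID from the listing above), and the octopus cephadm script defaults, at least in earlier point releases, to docker.io/ceph/ceph:v15, which would explain the docker.io pulls:

# Identify the untagged image by its digest and labels
podman image inspect e1d6a67b021e --format '{{.RepoDigests}}'
podman image inspect e1d6a67b021e --format '{{.Labels}}'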



I also tried to force upgrade the older failed monitor, but the cephadm tool says that the OS is too old.  It's just odd to me that the advice is to move to containers because the OS won't matter, and then it actually still matters because the podman versions are tied to the newer images.



-Brent

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



