Cephadm upgrade to Pacific problem

Hello,

The cluster is 3 nodes running Debian 10. I started a cephadm upgrade on a healthy 15.2.10 cluster. The managers upgraded fine, but then the first monitor went down for its upgrade and never came back. Digging into the unit files, the container fails to start; here is the generated unit.run:

root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1# cat unit.run

set -e
/usr/bin/install -d -m0770 -o 167 -g 167 /var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6
# mon.host1
! /usr/bin/docker rm -f ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 2> /dev/null
/usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/ceph-mon --privileged --group-add=disk --init --name ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v /var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -n mon.host1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug ' --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true
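For what it's worth, the --init flag in that run line looks new compared to the Octopus-generated unit.run (going from memory, so take that with a grain of salt). If I read the Docker docs right, --init makes the engine mount its own small init binary into the container and exec the real entrypoint under it, which would match the /dev/init path in the error below. The engine version probably matters here:

docker --version   # Debian 10's stock docker.io is 18.09.x, if that is what these hosts run (my assumption)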

Running the docker command from unit.run by hand reproduces the failure:

root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1# /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/ceph-mon --privileged --group-add=disk --init --name ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v /var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -n mon.host1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug ' --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true


/usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/dev/init\": stat /dev/init: no such file or directory": unknown.
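My guess, and it is only a guess: with --init, Docker injects its init binary at /dev/init inside the container (at least on older engines), and the -v /dev:/dev bind mount in the run line may be shadowing that path. A stripped-down test along these lines should tell (busybox is just a throwaway image here):

/usr/bin/docker run --rm --init busybox true
/usr/bin/docker run --rm --init -v /dev:/dev busybox true

If the first succeeds and the second fails with the same "stat /dev/init" error, then it is the --init flag colliding with the /dev bind mount rather than anything in the cephadm-generated command itself.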

Any suggestions on how to resolve this?
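One thing I'm tempted to try, though I don't know whether it is safe or whether cephadm just rewrites the file, is dropping --init from the generated unit.run and restarting the unit, something like:

# test only; I assume cephadm regenerates unit.run on the next reconfig
sed -i 's/ --init / /' /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/unit.run
systemctl restart ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6@mon.host1.service

The other obvious candidate is installing a newer Docker than what Debian 10 ships, but I'd rather not change the engine mid-upgrade without confirmation.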

Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


