Hi Björn,
have a look at the CPU type "max". With that, the Proxmox cluster should choose the maximum CPU feature set, which still allows live migration, and then Ceph should run with the v18.2.4 images.
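For reference, a minimal sketch of how that change could look with Proxmox's qm CLI (the VM ID 100 is a placeholder; a CPU-type change only takes effect after a full stop/start, not a reboot from inside the guest):

```shell
# Hypothetical VM ID 100 -- adjust to the actual mgr/mon VMs.
qm set 100 --cpu max          # let QEMU expose the host's full feature set
qm stop 100 && qm start 100   # CPU-type changes need a cold restart

# Equivalent entry in /etc/pve/qemu-server/100.conf:
#   cpu: max
```

One caveat: "max" exposes essentially everything the host CPU offers, so live migration is only safe between identical hosts. On mixed hardware, the "x86-64-v2-AES" model (the default for new VMs since Proxmox VE 8) is the more conservative choice that still satisfies glibc's x86-64-v2 requirement.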
Best regards,
Gunnar
--- Original Message ---
Subject: Ceph Upgrade 18.2.2 -> 18.2.4 fails | Fatal glibc error: CPU does not support x86-64-v2 on virtualized hosts
From: "Björn Lässig" <b.laessig@xxxxxxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxx>
Date: 25-07-2024 10:03
Hi there,
Last week I upgraded from 17.2.7 to 18.2.2 and the cluster worked fine,
except for a few bugs related to IPv6.
But it is working, and the cluster health is HEALTH_OK.
Hoping to fix some of these minor IPv6 issues, I tried upgrading to
18.2.4 and ran into two errors.
The first is that the upgrade fails with:
{
"target_image": "quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906",
"in_progress": true,
"which": "Upgrading all daemon types on all hosts",
"services_complete": [],
"progress": "0/89 daemons upgraded",
"message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image",
"is_paused": true
}
The error message is misleading as:
# podman images --digests
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
quay.io/ceph/ceph <none> sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906 2bc0b0f4375d 33 hours ago 1.25 GB
[…]
the target image was pulled without problems
(the podman pull itself succeeds).
The error actually occurs while starting the image:
Jul 25 09:16:18 cephmgr1 podman[1171476]: 2024-07-25 09:16:18.752832293 +0200 CEST m=+0.047066303 container create 4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e (image=quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906, name=reverent_hypatia, org.label-schema.build-date=20240716, maintainer=Guillaume Abrioux <gabrioux@xxxxxxxxxx>, org.label-schema.name=CentOS Stream 9 Base Image, GIT_CLEAN=True, GIT_BRANCH=HEAD, RELEASE=HEAD, org.label-schema.license=GPLv2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.8, org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.4, GIT_COMMIT=c5aaba5e3282b30e4782f2b5d6e4e362e22dfcb7, org.label-schema.schema-version=1.0, ceph=True)
Jul 25 09:16:18 cephmgr1 systemd[1]: Started libpod-conmon-4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e.scope.
Jul 25 09:16:18 cephmgr1 systemd[1]: Started libpod-4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e.scope - libcrun container.
Jul 25 09:16:18 cephmgr1 podman[1171476]: 2024-07-25 09:16:18.820823054 +0200 CEST m=+0.115057064 container init 4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e (image=quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906, name=reverent_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_POINT_RELEASE=-18.2.4, org.label-schema.build-date=20240716, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.8, GIT_BRANCH=HEAD, RELEASE=HEAD, ceph=True, maintainer=Guillaume Abrioux <gabrioux@xxxxxxxxxx>, org.label-schema.vendor=CentOS, GIT_COMMIT=c5aaba5e3282b30e4782f2b5d6e4e362e22dfcb7, GIT_CLEAN=True)
Jul 25 09:16:18 cephmgr1 podman[1171476]: 2024-07-25 09:16:18.826093069 +0200 CEST m=+0.120327079 container start 4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e (image=quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906, name=reverent_hypatia, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=HEAD, GIT_CLEAN=True, org.label-schema.schema-version=1.0, maintainer=Guillaume Abrioux <gabrioux@xxxxxxxxxx>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_POINT_RELEASE=-18.2.4, RELEASE=HEAD, org.label-schema.build-date=20240716, GIT_COMMIT=c5aaba5e3282b30e4782f2b5d6e4e362e22dfcb7, io.buildah.version=1.33.8)
Jul 25 09:16:18 cephmgr1 podman[1171476]: 2024-07-25 09:16:18.726428757 +0200 CEST m=+0.020662767 image pull quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
Jul 25 09:16:18 cephmgr1 podman[1171476]: 2024-07-25 09:16:18.82623823 +0200 CEST m=+0.120472250 container attach 4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e (image=quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906, name=reverent_hypatia, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=-18.2.4, RELEASE=HEAD, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, GIT_CLEAN=True, org.label-schema.build-date=20240716, GIT_COMMIT=c5aaba5e3282b30e4782f2b5d6e4e362e22dfcb7, io.buildah.version=1.33.8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=Guillaume Abrioux <gabrioux@xxxxxxxxxx>)
Jul 25 09:16:18 cephmgr1 reverent_hypatia[1171489]: Fatal glibc error: CPU does not support x86-64-v2
Jul 25 09:16:18 cephmgr1 systemd[1]: libpod-4368eba845dcbc6f3cb7fe8265fc7044d91208f87c25cbf1abe6c3d0447b468e.scope: Deactivated successfully.
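For anyone wanting to verify this in the guest before upgrading: glibc's x86-64-v2 level corresponds to a fixed set of CPU flags on top of the plain x86-64 baseline. A small sketch that checks /proc/cpuinfo for them (flag spellings as the kernel reports them; note SSE3 shows up as "pni" and CMPXCHG16B as "cx16"):

```shell
#!/bin/sh
# x86-64-v2 additions over the v1 baseline, in /proc/cpuinfo spelling.
V2_FLAGS="cx16 lahf_lm popcnt pni sse4_1 sse4_2 ssse3"

# has_v2 FLAGS -- return 0 if the given flag list covers x86-64-v2.
has_v2() {
    flags="$1"
    for f in $V2_FLAGS; do
        case " $flags " in
            *" $f "*) ;;       # flag present, keep checking
            *) return 1 ;;     # flag missing: not x86-64-v2
        esac
    done
    return 0
}

# Check the CPU this script actually runs on:
if has_v2 "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"; then
    echo "CPU supports x86-64-v2"
else
    echo "CPU does NOT support x86-64-v2 (same failure as the container)"
fi
```

On a "kvm64" guest this should report the CPU as not v2-capable, matching the glibc abort in the log above.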
When starting the container for 18.2.4, a glibc error occurs.
18.2.2 to 18.2.4 is a point release and should run on the same
hardware, but in this case it is not physical hardware:
both mgr daemons and 3 of the 5 monitor daemons run in virtual machines
on a Proxmox cluster (QEMU-based virtualization) with the default
CPU type "kvm64". In QEMU options this means:
kvm ... -smp 1,sockets=1,cores=4,maxcpus=4 \
-device kvm64-x86_64-cpu,id=cpu2,socket-id=0,core-id=1,thread-id=0\
-device kvm64-x86_64-cpu,id=cpu3,socket-id=0,core-id=2,thread-id=0\
-device kvm64-x86_64-cpu,id=cpu4,socket-id=0,core-id=3,thread-id=0
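As far as I know, kvm64 is roughly a Pentium-4-era baseline without SSSE3/SSE4.x/POPCNT, so it can never satisfy x86-64-v2 regardless of the host. A sketch of the same invocation with a v2-capable model (Nehalem should be the oldest QEMU model covering all x86-64-v2 features; "host" passes the real CPU through):

```shell
kvm ... -smp 1,sockets=1,cores=4,maxcpus=4 -cpu Nehalem

# or pass the host CPU through (fastest, but ties live migration
# to identical hosts):
kvm ... -smp 1,sockets=1,cores=4,maxcpus=4 -cpu host
```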
Which CPU type should I choose for my VMs to satisfy this glibc?
regards
Björn Lässig
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx