one OSD down/out after upgrade to v16.2.3

Hi,

I upgraded the cluster as usual from 16.2.1 to the latest version, 16.2.3:

```
ceph orch upgrade start --ceph-version 16.2.3
```


Everything went fine, except that one of the OSDs, osd.54, stopped working.
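
For completeness, the upgrade progress and the per-daemon versions can be double-checked with the standard orchestrator commands (nothing osd.54-specific, just the usual checks):

```
# show whether the cephadm upgrade has finished or is stuck
ceph orch upgrade status

# summary of which daemons run which Ceph version
ceph versions
```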

On the host I see the following in the log (after trying to start it manually):

```
root@ceph-nvme01:/var/lib/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1/osd.54# /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init --name ceph-77fc6eb4-7146-11eb-aa58-55847fcdb1f1-osd.54-activate -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:c820cef23fb93518d5b35683d6301bae36511e52e0f8cd1495fd58805b849383 -e NODE_NAME=ceph-nvme01 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1:/var/run/ceph:z -v /var/log/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1:/var/log/ceph:z -v /var/lib/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1/osd.54:/var/lib/ceph/osd/ceph-54:z -v /var/lib/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1/osd.54/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm docker.io/ceph/ceph@sha256:c820cef23fb93518d5b35683d6301bae36511e52e0f8cd1495fd58805b849383 lvm activate 54 07b257ee-bb09-4c4e-9f70-9f0da56253c7 --no-systemd
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-54
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-e5ddebd9-52f5-481d-9470-fb2c94fe79e3/osd-block-07b257ee-bb09-4c4e-9f70-9f0da56253c7 --path /var/lib/ceph/osd/ceph-54 --no-mon-config
 stderr: failed to read label for /dev/ceph-e5ddebd9-52f5-481d-9470-fb2c94fe79e3/osd-block-07b257ee-bb09-4c4e-9f70-9f0da56253c7: (2) No such file or directory
-->  RuntimeError: command returned non-zero exit status: 1
```
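
The strange part is that the device node the error complains about does exist on the host (see the lsblk output below). In case it helps, this is roughly how I would check the LV and its BlueStore label from the host side (paths copied from the error above; the ceph-bluestore-tool call assumes it is run somewhere the tool is available, e.g. inside a cephadm shell):

```
# does the LV device node exist, and is the LV active?
ls -l /dev/ceph-e5ddebd9-52f5-481d-9470-fb2c94fe79e3/osd-block-07b257ee-bb09-4c4e-9f70-9f0da56253c7
lvs -o lv_name,vg_name,lv_path,lv_active | grep 07b257ee

# can the BlueStore label itself be read?
ceph-bluestore-tool show-label --dev /dev/ceph-e5ddebd9-52f5-481d-9470-fb2c94fe79e3/osd-block-07b257ee-bb09-4c4e-9f70-9f0da56253c7
```
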
-----
```
root@ceph-nvme01:/var/lib/ceph/77fc6eb4-7146-11eb-aa58-55847fcdb1f1/osd.54# lsblk
NAME MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0 7:0    0  55.5M  1 loop  /snap/core18/1988
loop2 7:2    0  70.4M  1 loop  /snap/lxd/19647
loop3 7:3    0  55.5M  1 loop  /snap/core18/1997
loop4 7:4    0  32.3M  1 loop  /snap/snapd/11588
loop5 7:5    0  32.3M  1 loop  /snap/snapd/11402
loop6 7:6    0  67.6M  1 loop  /snap/lxd/20326
sda 8:0    1 447.1G  0 disk
├─sda1 8:1    1   512M  0 part
└─sda2 8:2    1   445G  0 part
  └─md0 9:0    0   445G  0 raid1 /
sdb 8:16   1 447.1G  0 disk
├─sdb1 8:17   1   512M  0 part  /boot/efi
└─sdb2 8:18   1   445G  0 part
  └─md0 9:0    0   445G  0 raid1 /
nvme1n1 259:0    0   2.9T  0 disk
└─ceph--257763b1--ef44--43e8--a8a7--f046f5a69fd4-osd--block--1086e697--7949--417b--9ce2--1459db52bbe2 253:0    0   2.9T  0 lvm
nvme0n1 259:1    0   2.9T  0 disk
└─ceph--7488e6b6--5883--4208--8951--0e3e5cc1c903-osd--block--aea644af--7a01--49c9--8169--cb6ec24d8b78 253:1    0   2.9T  0 lvm
nvme2n1 259:5    0   2.9T  0 disk
└─ceph--de73e328--c694--4f46--9a5f--0a7e356cab85-osd--block--b3a93422--57c0--4be3--ad61--836bb7763fae 253:4    0   2.9T  0 lvm
nvme4n1 259:6    0   2.9T  0 disk
└─ceph--08d2cbac--cb88--40cf--ad6e--88fc5ca3491c-osd--block--35ec77e0--db35--4a43--bb25--4666ce0ea756 253:2    0   2.9T  0 lvm
nvme3n1 259:7    0   2.9T  0 disk
└─ceph--f28ba826--b16c--4e4e--b0de--f556274bbc07-osd--block--399337e3--f2f8--4561--8e02--2e9830b31ec5 253:3    0   2.9T  0 lvm
nvme7n1 259:8    0   2.9T  0 disk
└─ceph--0c92fca4--5ff6--49f6--b994--694253ac9225-osd--block--973a39c9--f330--46f4--afc9--fb11f2862d5d 253:9    0   2.9T  0 lvm
nvme10n1 259:9    0   2.9T  0 disk
└─ceph--e5ddebd9--52f5--481d--9470--fb2c94fe79e3-osd--block--07b257ee--bb09--4c4e--9f70--9f0da56253c7 253:6    0   2.9T  0 lvm
nvme6n1 259:10   0   2.9T  0 disk
└─ceph--caf0ef69--b335--4b6e--a560--ecd4cd74fc9b-osd--block--f0bfe9f0--26f8--4dd4--95eb--3ef5a728cbf9 253:8    0   2.9T  0 lvm
nvme8n1 259:11   0   2.9T  0 disk
└─ceph--a4da070c--9db2--408e--9fae--b7b61023177f-osd--block--66b6ce4e--8ce3--4960--980d--e617c7378763 253:10   0   2.9T  0 lvm
nvme9n1 259:12   0   2.9T  0 disk
└─ceph--21562262--bc8b--4002--8575--7b6729ffef84-osd--block--9c427785--3dbe--4def--ab30--a19579fa63ce 253:11   0   2.9T  0 lvm
nvme11n1 259:13   0   2.9T  0 disk
└─ceph--2de30cc9--9d2a--4f61--818e--ce91a90c3489-osd--block--c4bc16fb--a645--42a6--a2d2--b09c812843e7 253:7    0   2.9T  0 lvm
nvme5n1 259:15   0   2.9T  0 disk
└─ceph--aebd9d97--b693--404e--9a81--55df782a15af-osd--block--2fffb8f0--f41d--423c--becd--88339631de13 253:5    0   2.9T  0 lvm
```
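
So the LV backing osd.54 (ceph--e5ddebd9.../osd--block--07b257ee..., 253:6) is still visible on the host. If the volume group were simply not activated, I assume something like the following would bring it up, but given the lsblk output that does not seem to be the problem:

```
# (assumption) activate the VG and re-trigger udev in case the device node went missing
vgchange -ay ceph-e5ddebd9-52f5-481d-9470-fb2c94fe79e3
udevadm trigger --subsystem-match=block
```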


Additionally, I see in the GUI that all the other OSDs report

*ceph_version*
ceph version 16.2.3 (381b476cb3900f9a92eb95d03b4850b953cfd79a) pacific (stable)

but the failed one reports

*ceph_version*
ceph version 16.2.1 (afb9061ab4117f798c858c741efa6390e48ccf10) pacific (stable)

so it looks like it was not upgraded properly.
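
The same mismatch should also be visible from the CLI, e.g.:

```
# status, image and version of osd.54 as seen by the orchestrator
ceph orch ps | grep osd.54
```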

How can I fix this error?
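
Would something along these lines be the right way to get osd.54 onto the new image, or is there a better approach? (Just a guess on my part, I have not tried it yet.)

```
# (guess) redeploy the daemon so cephadm recreates it from the current 16.2.3 image
ceph orch daemon redeploy osd.54
```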


Milosz






_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



