[2019-07-24 13:40:49,602][ceph_volume.process][INFO ] Running command: /bin/systemctl show --no-pager --property=Id --state=running ceph-osd@*
This is the only log event. At the prompt:
# ceph-volume simple scan
#
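(One possible next step, hedged: with the /var/lib/ceph/osd mounts gone, the wildcard scan has nothing to inspect, so per the ceph-volume docs the scan can also be pointed at an OSD's data partition directly; /dev/sdb1 below is only a placeholder device.)
# ceph-volume simple scan /dev/sdb1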
peter
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Date: Wednesday, July 24, 2019 at 1:32 PM
To: Xavier Trilla <xavier.trilla@xxxxxxxxxxx>
Cc: Peter Eisch <peter.eisch@xxxxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Upgrading and lost OSDs
Did you use ceph-disk before?
Support for ceph-disk was removed; see the Nautilus upgrade instructions. You'll need to run "ceph-volume simple scan" to convert them to ceph-volume.
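(For context, the conversion in the Nautilus upgrade notes is a two-step process; a minimal sketch, assuming the old ceph-disk data partitions are still mounted under /var/lib/ceph/osd/:)
# ceph-volume simple scan
# ceph-volume simple activate --all
The scan writes one /etc/ceph/osd/{id}-{fsid}.json file per discovered OSD; activate then creates the systemd units and starts the OSDs from those files.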
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
Tel: +49 89 1896585 90
On Wed, Jul 24, 2019 at 8:25 PM Xavier Trilla <xavier.trilla@xxxxxxxxxxx> wrote:
Hi Peter,
I'm not sure, but maybe after some changes the OSDs are not being recognized by the ceph scripts.
Ceph used to use udev to detect the OSDs and then moved to LVM. Which kind of OSDs are you running, BlueStore or FileStore? And which version did you use to create them?
Cheers!
On 24 Jul 2019, at 20:04, Peter Eisch <peter.eisch@xxxxxxxxxxxxxxx> wrote:
Hi,
I'm working through updating from 12.2.12/Luminous to 14.2.2/Nautilus on CentOS 7.6. The managers updated all right:
# ceph -s
cluster:
id: 2fdb5976-1234-4b29-ad9c-1ca74a9466ec
health: HEALTH_WARN
Degraded data redundancy: 24177/9555955 objects degraded (0.253%), 7 pgs degraded, 1285 pgs undersized
3 monitors have not enabled msgr2
...
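(The msgr2 warning is expected partway through the upgrade; per the Nautilus release notes it clears once every mon is running Nautilus and msgr2 is switched on, e.g.:)
# ceph mon enable-msgr2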
I updated ceph on an OSD host with 'yum update' and then rebooted to grab the current kernel. Along the way, the contents of all the directories in /var/lib/ceph/osd/ceph-*/ were deleted, so I have 16 OSDs down from this. I can manage the undersized PGs, but I'd like to get these drives working again without deleting and recreating each OSD.
So far I've pulled the respective cephx key into the 'keyring' file and populated 'bluestore' into the 'type' files, but I'm unsure how to get the lockboxes mounted so that I can get the OSDs running. The osd-lockbox directory is otherwise untouched from when the OSDs were deployed.
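(For reference, those two files can be repopulated roughly like this; osd.16 and its path are placeholders, a sketch only:)
# ceph auth get osd.16 -o /var/lib/ceph/osd/ceph-16/keyring
# echo bluestore > /var/lib/ceph/osd/ceph-16/type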
Is there a way to run ceph-deploy or some other tool to rebuild the mounts for the drives?
peter
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com