Forced upgrade of OSDs from Luminous to Pacific

Hello, dear community!

I kindly ask for your help in resolving my issue.

I have a server with a single-node CEPH setup with 5 OSDs. This server has been powered off for about two years, and when I needed the data from it, I found that the SSD where the system was installed had died.

I tried to recover the cluster. First, assuming the OSDs still held the old Luminous data, I installed Debian 10 with CEPH 12.2.11, mounted the OSDs under /var/lib/ceph/osd/ceph-xx, and rebuilt the monitor from them, as described here: https://forum.proxmox.com/threads/recover-ceph-from-osds-only.113699/.
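
The monitor rebuild followed roughly this procedure (a sketch from memory, not a verbatim transcript; the device name, temporary directory, and keyring path are just examples from my setup):

    # mount each surviving OSD data partition (osd.1 shown as an example)
    mount /dev/sdb1 /var/lib/ceph/osd/ceph-1

    # collect cluster maps from every OSD into a temporary mon store
    ms=/tmp/mon-store
    mkdir -p "$ms"
    for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path "$osd" --op update-mon-db --mon-store-path "$ms"
    done

    # rebuild the monitor store from the collected maps and install it
    ceph-monstore-tool "$ms" rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring
    mv "$ms"/store.db /var/lib/ceph/mon/ceph-$(hostname)/store.db
    chown -R ceph:ceph /var/lib/ceph/mon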

However, the monitor wouldn't start, giving an error I don't remember. Then I made a series of mistakes, upgrading the system and CEPH first to Nautilus and then to Pacific. Eventually, I managed to start the monitor, but a compatibility issue with the OSDs remains.

When the OSDs start, I see the message:

    check_osdmap_features require_osd_release unknown -> luminous

At the same time, the monitor log shows:

    disallowing boot of octopus+ OSD osd.xx

After starting, the OSD remains in the state:

    tick checking mon for new map
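
In case it helps, this is roughly how I check which release the rebuilt monitor requires (a sketch; I can paste the real output if needed):

    # the osdmap records the minimum OSD release it will accept
    ceph osd dump | grep require
    # the monmap records the monitor's own minimum release
    ceph mon dump | grep min_mon_release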


Then I enabled the msgr v2 protocol and tried enabling RocksDB sharding for the OSD, as described here: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#bluestore-rocksdb-sharding, but it didn't help.
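
Concretely, these were roughly the commands I used (the sharding spec is the default one from that documentation page, and osd.1 is just the example):

    # switch the monitor to the v2 messenger
    ceph mon enable-msgr2

    # apply the default RocksDB sharding layout to the OSD
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 \
        --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard

    # confirm the sharding now reported by the OSD
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 show-sharding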

Attempts to start the OSD with lower versions of CEPH, even with Octopus, end with the error:

    2024-10-08 10:45:38.402975 7fba61b34ec0 -1 bluefs _replay 0x0: stop: unrecognized op 12
    2024-10-08 10:45:38.402992 7fba61b34ec0 -1 bluefs mount failed to replay log: (5) Input/output error


So, currently, I have CEPH 16.2.15, and the OSD is in the following state:

/"/var/lib/ceph/osd/ceph-1/block": {
    "osd_uuid": "2bb56721-28c7-45cc-9344-6cc5c699a642",
    "size": 4000681103360,
    "btime": "2018-06-02 13:16:57.042205",
    "description": "main",
    "bfm_blocks": "61045632",
    "bfm_blocks_per_key": "128",
    "bfm_bytes_per_block": "65536",
    "bfm_size": "4000681099264",
    "bluefs": "1",
    "ceph_fsid": "96b6ff1d-25bf-403f-be3d-78c2fb0ff747",
    "kv_backend": "rocksdb",
    "magic": "ceph osd volume v026",
    "mkfs_done": "yes",
    "ready": "ready",
    "require_osd_release": "12",
    "whoami": "1"
}

The RocksDB layout has already been modified to enable sharding, as mentioned above.
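
For completeness, the label above was read like this; note that "require_osd_release": "12" corresponds to Luminous:

    # dump the BlueStore superblock label of the OSD block device
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-1/block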


Please advise: is there a way to upgrade such OSDs so they can run with this version of Ceph?

If you need more information, let me know and I will provide whatever is needed.

--
Alexander Rydzewski
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



