Hey Flavio,
I think there are no options other than either upgrading the cluster or
backporting the relevant bluefs migration code to Luminous and making a
custom build.
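
For reference, once the cluster is on Nautilus or later, the migration is
roughly the two-step sequence below. The LV name nvme-vg/osd0-db is only a
placeholder; adjust paths and sizes for your own setup before running this
on real data:

systemctl stop ceph-osd@0
# attach a new, empty DB volume to the OSD
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/nvme-vg/osd0-db
# move the existing RocksDB/WAL data from the slow device to the new one
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block \
    --dev-target /var/lib/ceph/osd/ceph-0/block.db
systemctl start ceph-osd@0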
Thanks,
Igor
On 12/17/2021 4:43 PM, Flavio Piccioni wrote:
Hi all,
in a Luminous + BlueStore cluster, I would like to migrate RocksDB
(including the WAL) to NVMe (LVM).
(The output below comes from a test environment with minimum-sized HDDs,
used only to test the procedure.)
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-0/block": {
        "osd_uuid": "399e7751-d791-4493-9f53-caf1650573ed",
        "size": 107369988096,
        "btime": "2021-12-16 16:24:32.412358",
        "description": "main",
        "bluefs": "1",
        "ceph_fsid": "uuid",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "osd_key": "mykey",
        "ready": "ready",
        "require_osd_release": "\u000e",
        "whoami": "0"
    }
}
RocksDB and the WAL are colocated on the slow (main) device, so there is
no block.db or block.wal entry.
In Luminous and Mimic there is no bluefs-bdev-new-db option for
ceph-bluestore-tool.
How can this migration be achieved on these older versions?
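(Redeploying each OSD from scratch with a separate DB device would of
course work, e.g. something like

ceph-volume lvm create --bluestore --data /dev/sdb --block.db nvme-vg/osd0-db

where the data device and VG/LV names are examples only, but I would like
to avoid the full rebalance that implies, hence the question about an
in-place migration.)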
Regards
--
Igor Fedotov
Ceph Lead Developer
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx