Luminous: export and migrate rocksdb to dedicated lvm/unit

Hi all,
in a Luminous + BlueStore cluster, I would like to migrate RocksDB (including
the WAL) to an NVMe device (LVM).

(the output below comes from a test environment with a minimum-sized HDD, used to test procedures)
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
infering bluefs devices from bluestore path
{
"/var/lib/ceph/osd/ceph-0/block": {
"osd_uuid": "399e7751-d791-4493-9f53-caf1650573ed",
"size": 107369988096,
"btime": "2021-12-16 16:24:32.412358",
"description": "main",
"bluefs": "1",
"ceph_fsid": "uuid",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "mykey",
"ready": "ready",
"require_osd_release": "\u000e",
"whoami": "0"
}
}
RocksDB and the WAL currently live on the main (slow) device, so there is no
block.db or block.wal entry in the label.

In Luminous and Mimic, ceph-bluestore-tool has no bluefs-bdev-new-db option.
How can this migration be achieved on these older versions?
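For comparison, this is roughly how it would look on Nautilus or later, where
bluefs-bdev-new-db exists (a sketch only; the OSD id and the target LV path
/dev/ceph-nvme/db-0 are placeholders, and the LV would have to be created
beforehand with lvcreate):

```shell
# Nautilus+ only: attach a new, empty DB device to an existing OSD.
# Stop the OSD first so the tool has exclusive access to the store.
systemctl stop ceph-osd@0

ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/ceph-nvme/db-0

systemctl start ceph-osd@0
```

On Luminous/Mimic the only route I am aware of is destroying and re-creating
the OSD with a separate block.db (e.g. ceph-volume lvm create --bluestore
--data <hdd> --block.db <nvme-lv>) and letting the cluster backfill, which I
would like to avoid if possible.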

Regards
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


