Have you tried the --command option instead of the fixed positional syntax?

   ceph-bluestore-tool --path /dev/osd1/ --devs-source /dev/osd1/block --dev-target /dev/osd1/block.db --command bluefs-bdev-migrate

If so, was it showing the same error?
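
For reference, a minimal sketch of that form with --path pointing at the OSD's data directory rather than the raw device (paths are illustrative, assuming OSD 1 mounted at the default location):

   ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 \
       --devs-source /var/lib/ceph/osd/ceph-1/block \
       --dev-target /var/lib/ceph/osd/ceph-1/block.db \
       --command bluefs-bdev-migrate

Afterwards, "ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-1" should list the new block.db device as well.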
Thanks,
Igor
On 12/19/2021 11:46 AM, Flavio Piccioni wrote:
I tried the same operation in Nautilus, without success.
From the ceph-bluestore-tool documentation:
* if source list has slow volume only - operation isn't permitted, requires explicit allocation via new-db/new-wal command.
So I tried:
   ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-new-db --dev-target /dev/bluesfs_db/db-osd${OSD}

then

   ceph-bluestore-tool --path /dev/osd1/ --devs-source /dev/osd1/block --dev-target /dev/osd1/block.db bluefs-bdev-migrate
(and many other syntax combinations), but I always get "too many positional options have been specified on the command line".
Maybe "bluefs-bdev-new-db" is sufficient to have "slow's integrated
rocksdb" migrated?
Regards
Flavio
On Fri, 17 Dec 2021 at 18:57, Anthony D'Atri
<anthony.datri@xxxxxxxxx> wrote:
Or incrementally destroy and redeploy the OSDs, which will be
slower and entail a lot of data movement.
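
Roughly, per OSD, something like the following (a sketch only: the device names, the NVMe VG/LV path, and ${ID} are placeholders, and exact syntax varies by release):

   ceph osd out ${ID}
   # wait until "ceph osd safe-to-destroy osd.${ID}" reports it is safe
   ceph osd destroy ${ID} --yes-i-really-mean-it
   ceph-volume lvm zap /dev/sdX
   ceph-volume lvm create --data /dev/sdX --block.db /dev/nvme-vg/db-${ID}

Each rebuilt OSD then backfills with its DB on the NVMe LV.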
>
> Hey Flavio,
>
> I think there are no options other than to either upgrade the
> cluster or backport the relevant BlueFS migration code to Luminous
> and make a custom build.
>
>
> Thanks,
>
> Igor
>
> On 12/17/2021 4:43 PM, Flavio Piccioni wrote:
>> Hi all,
>> in a Luminous+BlueStore cluster, I would like to migrate RocksDB
>> (including the WAL) to NVMe (LVM).
>>
>> (output comes from a test environment with minimum-sized HDDs, used to
>> test procedures)
>> ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
>> inferring bluefs devices from bluestore path
>> {
>>     "/var/lib/ceph/osd/ceph-0/block": {
>>         "osd_uuid": "399e7751-d791-4493-9f53-caf1650573ed",
>>         "size": 107369988096,
>>         "btime": "2021-12-16 16:24:32.412358",
>>         "description": "main",
>>         "bluefs": "1",
>>         "ceph_fsid": "uuid",
>>         "kv_backend": "rocksdb",
>>         "magic": "ceph osd volume v026",
>>         "mkfs_done": "yes",
>>         "osd_key": "mykey",
>>         "ready": "ready",
>>         "require_osd_release": "\u000e",
>>         "whoami": "0"
>>     }
>> }
>> RocksDB and the WAL are integrated into the slow device, so there is no
>> block.db or block.wal entry.
>>
>> In Luminous and Mimic, there is no bluefs-bdev-new-db option for
>> ceph-bluestore-tool.
>> How can this dump+migration be achieved in old versions?
>>
>> Regards
--
Igor Fedotov
Ceph Lead Developer
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx