Re: Process for adding a separate block.db to an osd

Hi Igor,
I posted it on pastebin: https://pastebin.com/Ze9EuCMD

Cheers
 Boris

On Mon, May 17, 2021 at 12:22, Igor Fedotov <ifedotov@xxxxxxx> wrote:

> Hi Boris,
>
> could you please share full OSD startup log and file listing for
> '/var/lib/ceph/osd/ceph-68'?
>
>
> Thanks,
>
> Igor
>
> On 5/17/2021 1:09 PM, Boris Behrens wrote:
> > Hi,
> > sorry for replying to this old thread:
> >
> > I tried to add a block.db to an OSD, but now the OSD cannot start with
> > the error:
> > Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -7> 2021-05-17
> > 09:50:38.362 7fc48ec94a80 -1 rocksdb: Corruption: CURRENT file does not
> > end with newline
> > Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -6> 2021-05-17
> > 09:50:38.362 7fc48ec94a80 -1 bluestore(/var/lib/ceph/osd/ceph-68)
> > _open_db erroring opening db:
> > Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -1> 2021-05-17
> > 09:50:38.866 7fc48ec94a80 -1
> > /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueStore.cc:
> > In function 'int BlueStore::_upgrade_super()' thread 7fc48ec94a80 time
> > 2021-05-17 09:50:38.865204
> > Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]:
> > /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueStore.cc:
> > 10647: FAILED ceph_assert(ondisk_format > 0)
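> >
> > As far as I understand it, RocksDB's CURRENT is a tiny text file that
> > names the active MANIFEST and has to be terminated by a newline. Since
> > BlueStore keeps RocksDB inside BlueFS, the file can be inspected by
> > exporting BlueFS to a scratch directory. A sketch; the output
> > directory here is just an example:
> > $ ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-68 \
> >     --out-dir /tmp/osd-68-bluefs
> > $ cat -A /tmp/osd-68-bluefs/db/CURRENT  # the line should end with "$"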
> >
> > I tried to run an fsck/repair on the disk:
> > [root@s3db10 osd]# ceph-bluestore-tool --path ceph-68  repair
> > 2021-05-17 10:05:25.695 7f714dea3ec0 -1 rocksdb: Corruption: CURRENT file
> > does not end with newline
> > 2021-05-17 10:05:25.695 7f714dea3ec0 -1 bluestore(ceph-68) _open_db
> > erroring opening db:
> > error from fsck: (5) Input/output error
> > [root@s3db10 osd]# ceph-bluestore-tool --path ceph-68  fsck
> > 2021-05-17 10:05:35.012 7fb8f22e6ec0 -1 rocksdb: Corruption: CURRENT file
> > does not end with newline
> > 2021-05-17 10:05:35.012 7fb8f22e6ec0 -1 bluestore(ceph-68) _open_db
> > erroring opening db:
> > error from fsck: (5) Input/output error
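> >
> > Something that might also be worth checking is whether both devices
> > carry consistent BlueStore labels after the change. A sketch, using
> > the same paths as above:
> > $ ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-68
> > $ ceph-bluestore-tool show-label --dev /dev/sdj1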
> >
> > These are the steps I did to add the disk:
> > $ CEPH_ARGS="--bluestore-block-db-size 53687091200 --bluestore_block_db_create=true" \
> >     ceph-bluestore-tool bluefs-bdev-new-db \
> >     --path /var/lib/ceph/osd/ceph-68 --dev-target /dev/sdj1
> > $ chown -h ceph:ceph /var/lib/ceph/osd/ceph-68/block.db
> > $ lvchange --addtag ceph.db_device=/dev/sdj1 \
> >     /dev/ceph-3bbfd168-2a54-4593-a037-80d0d7e97afd/osd-block-aaeaea54-eb6a-480c-b2fd-d938e336c0f6
> > $ lvchange --addtag ceph.db_uuid=463dd37c-fd49-4ccb-849f-c5827d3d9df2 \
> >     /dev/ceph-3bbfd168-2a54-4593-a037-80d0d7e97afd/osd-block-aaeaea54-eb6a-480c-b2fd-d938e336c0f6
> > $ ceph-volume lvm activate --all
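> >
> > To verify that ceph-volume picked up the new tags, they can be listed
> > afterwards. A sketch:
> > $ lvs -o +lv_tags
> > $ ceph-volume lvm list /dev/sdj1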
> >
> > The UUIDs
> > Later I tried this:
> > $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 \
> >     --devs-source /var/lib/ceph/osd/ceph-68/block \
> >     --dev-target /var/lib/ceph/osd/ceph-68/block.db bluefs-bdev-migrate
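> >
> > After the migration, the BlueFS space usage per device can be dumped
> > to confirm that data actually moved to the new db device. A sketch:
> > $ ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-68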
> >
> > Any ideas how I can get RocksDB fixed?
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
This time, the "UTF-8 problems" self-help group will exceptionally meet
in the big hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



