Re: Process for adding a separate block.db to an osd

One more question:
How do I get rid of the bluestore spillover message?
     osd.68 spilled over 64 KiB metadata from 'db' device (13 GiB used of 50 GiB) to slow device

I tried an offline compaction, which did not help.
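
For reference, the warning shows up under BLUEFS_SPILLOVER in "ceph health
detail", and the offline compaction I tried was along these lines (a sketch
assuming systemd-managed OSDs; the OSD id is the one from this thread):

    # stop the OSD first so its RocksDB is not in use
    systemctl stop ceph-osd@68
    # offline compaction of the OSD's BlueStore key-value store
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-68 compact
    systemctl start ceph-osd@68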

On Mon, 17 May 2021 at 15:56, Boris Behrens <bb@xxxxxxxxx> wrote:

> I have no idea why, but it worked.
>
> As the fsck went well, I just re-ran bluefs-bdev-new-db, and now the
> OSD is back up with a block.db device.
>
> Thanks a lot
>
> On Mon, 17 May 2021 at 15:28, Igor Fedotov <ifedotov@xxxxxxx> wrote:
>
>> If OSD.68 hasn't started successfully with the standalone DB, I think
>> it's safe to revert the previous DB addition and just retry it.
>>
>> First, I suggest running just the bluefs-bdev-new-db command and then doing
>> fsck again. If that's OK, proceed with bluefs-bdev-migrate followed by
>> another fsck, and then finalize by adding the LVM tags and activating the
>> OSD. A sketch of the whole sequence follows below.
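>>
>> Roughly, it would look like this (a sketch that reuses the exact paths and
>> devices from this thread; adjust to your layout):
>>
>> $ ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-68 --dev-target /dev/sdj1
>> $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 fsck
>> $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --devs-source /var/lib/ceph/osd/ceph-68/block --dev-target /var/lib/ceph/osd/ceph-68/block.db bluefs-bdev-migrate
>> $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 fsck
>>
>> followed by the lvchange --addtag and ceph-volume lvm activate steps quoted
>> further down in this thread.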
>>
>>
>> Thanks,
>>
>> Igor
>>
>> On 5/17/2021 3:47 PM, Boris Behrens wrote:
>> > The FSCK looks good:
>> >
>> > [root@s3db10 export-bluefs2]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 fsck
>> > fsck success
>> >
>> > On Mon, 17 May 2021 at 14:39, Boris Behrens <bb@xxxxxxxxx> wrote:
>> >
>> >> Here is the new output. I kept both for now.
>> >>
>> >> [root@s3db10 export-bluefs2]# ls *
>> >> db:
>> >> 018215.sst  018444.sst  018839.sst  019074.sst  019210.sst  019381.sst
>> >>   019560.sst  019755.sst  019849.sst  019888.sst  019958.sst  019995.sst
>> >>   020007.sst  020042.sst  020067.sst  020098.sst  020115.sst
>> >> 018216.sst  018445.sst  018840.sst  019075.sst  019211.sst  019382.sst
>> >>   019670.sst  019756.sst  019877.sst  019889.sst  019959.sst  019996.sst
>> >>   020008.sst  020043.sst  020068.sst  020104.sst  CURRENT
>> >> 018273.sst  018446.sst  018876.sst  019076.sst  019256.sst  019383.sst
>> >>   019671.sst  019757.sst  019878.sst  019890.sst  019960.sst  019997.sst
>> >>   020030.sst  020055.sst  020069.sst  020105.sst  IDENTITY
>> >> 018300.sst  018447.sst  018877.sst  019081.sst  019257.sst  019395.sst
>> >>   019672.sst  019762.sst  019879.sst  019918.sst  019961.sst  019998.sst
>> >>   020031.sst  020056.sst  020070.sst  020106.sst  LOCK
>> >> 018301.sst  018448.sst  018904.sst  019082.sst  019344.sst  019396.sst
>> >>   019673.sst  019763.sst  019880.sst  019919.sst  019962.sst  019999.sst
>> >>   020032.sst  020057.sst  020071.sst  020107.sst  MANIFEST-020084
>> >> 018326.sst  018449.sst  018950.sst  019083.sst  019345.sst  019400.sst
>> >>   019674.sst  019764.sst  019881.sst  019920.sst  019963.sst  020000.sst
>> >>   020035.sst  020058.sst  020072.sst  020108.sst  OPTIONS-020084
>> >> 018327.sst  018540.sst  018952.sst  019126.sst  019346.sst  019470.sst
>> >>   019675.sst  019765.sst  019882.sst  019921.sst  019964.sst  020001.sst
>> >>   020036.sst  020059.sst  020073.sst  020109.sst  OPTIONS-020087
>> >> 018328.sst  018541.sst  018953.sst  019127.sst  019370.sst  019471.sst
>> >>   019676.sst  019766.sst  019883.sst  019922.sst  019965.sst  020002.sst
>> >>   020037.sst  020060.sst  020074.sst  020110.sst
>> >> 018329.sst  018590.sst  018954.sst  019128.sst  019371.sst  019472.sst
>> >>   019677.sst  019845.sst  019884.sst  019923.sst  019989.sst  020003.sst
>> >>   020038.sst  020061.sst  020075.sst  020111.sst
>> >> 018406.sst  018591.sst  018995.sst  019174.sst  019372.sst  019473.sst
>> >>   019678.sst  019846.sst  019885.sst  019950.sst  019992.sst  020004.sst
>> >>   020039.sst  020062.sst  020094.sst  020112.sst
>> >> 018407.sst  018727.sst  018996.sst  019175.sst  019373.sst  019474.sst
>> >>   019753.sst  019847.sst  019886.sst  019955.sst  019993.sst  020005.sst
>> >>   020040.sst  020063.sst  020095.sst  020113.sst
>> >> 018443.sst  018728.sst  019073.sst  019176.sst  019380.sst  019475.sst
>> >>   019754.sst  019848.sst  019887.sst  019956.sst  019994.sst  020006.sst
>> >>   020041.sst  020064.sst  020096.sst  020114.sst
>> >>
>> >> db.slow:
>> >>
>> >> db.wal:
>> >> 020085.log  020088.log
>> >> [root@s3db10 export-bluefs2]# du -hs
>> >> 12G .
>> >> [root@s3db10 export-bluefs2]# cat db/CURRENT
>> >> MANIFEST-020084
>> >>
>> >> On Mon, 17 May 2021 at 14:28, Igor Fedotov <ifedotov@xxxxxxx> wrote:
>> >>
>> >>> On 5/17/2021 2:53 PM, Boris Behrens wrote:
>> >>>> Like this?
>> >>> Yeah.
>> >>>
>> >>> So the DB dir structure is more or less OK, but db/CURRENT looks
>> >>> corrupted. It should contain something like: MANIFEST-020081
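>> >>>
>> >>> (A quick way to check that, including the trailing newline RocksDB
>> >>> expects, is a hex dump of the exported file; hexdump is just one
>> >>> convenient tool for it:
>> >>>
>> >>> $ hexdump -C <target-dir>/db/CURRENT
>> >>>
>> >>> A healthy CURRENT is just the MANIFEST file name followed by a single
>> >>> 0a newline byte.)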
>> >>>
>> >>> Could you please remove (or even just rename) the block.db symlink and
>> >>> do the export again? Preferably preserve the results of the first export.
>> >>>
>> >>> If the export reveals proper CURRENT content, you might want to run
>> >>> fsck on the OSD...
>> >>>
>> >>>> [root@s3db10 export-bluefs]# ls *
>> >>>> db:
>> >>>> 018215.sst  018444.sst  018839.sst  019074.sst  019174.sst  019372.sst
>> >>>>    019470.sst  019675.sst  019765.sst  019882.sst  019918.sst  019961.sst
>> >>>>    019997.sst  020022.sst  020042.sst  020061.sst  020073.sst
>> >>>> 018216.sst  018445.sst  018840.sst  019075.sst  019175.sst  019373.sst
>> >>>>    019471.sst  019676.sst  019766.sst  019883.sst  019919.sst  019962.sst
>> >>>>    019998.sst  020023.sst  020043.sst  020062.sst  020074.sst
>> >>>> 018273.sst  018446.sst  018876.sst  019076.sst  019176.sst  019380.sst
>> >>>>    019472.sst  019677.sst  019845.sst  019884.sst  019920.sst  019963.sst
>> >>>>    019999.sst  020030.sst  020049.sst  020063.sst  020075.sst
>> >>>> 018300.sst  018447.sst  018877.sst  019077.sst  019210.sst  019381.sst
>> >>>>    019473.sst  019678.sst  019846.sst  019885.sst  019921.sst  019964.sst
>> >>>>    020000.sst  020031.sst  020051.sst  020064.sst  020077.sst
>> >>>> 018301.sst  018448.sst  018904.sst  019081.sst  019211.sst  019382.sst
>> >>>>    019474.sst  019753.sst  019847.sst  019886.sst  019922.sst  019965.sst
>> >>>>    020001.sst  020032.sst  020052.sst  020065.sst  020080.sst
>> >>>> 018326.sst  018449.sst  018950.sst  019082.sst  019256.sst  019383.sst
>> >>>>    019475.sst  019754.sst  019848.sst  019887.sst  019923.sst  019986.sst
>> >>>>    020002.sst  020035.sst  020053.sst  020066.sst  CURRENT
>> >>>> 018327.sst  018540.sst  018952.sst  019083.sst  019257.sst  019395.sst
>> >>>>    019560.sst  019755.sst  019849.sst  019888.sst  019950.sst  019989.sst
>> >>>>    020003.sst  020036.sst  020055.sst  020067.sst  IDENTITY
>> >>>> 018328.sst  018541.sst  018953.sst  019124.sst  019344.sst  019396.sst
>> >>>>    019670.sst  019756.sst  019877.sst  019889.sst  019955.sst  019992.sst
>> >>>>    020004.sst  020037.sst  020056.sst  020068.sst  LOCK
>> >>>> 018329.sst  018590.sst  018954.sst  019125.sst  019345.sst  019400.sst
>> >>>>    019671.sst  019757.sst  019878.sst  019890.sst  019956.sst  019993.sst
>> >>>>    020005.sst  020038.sst  020057.sst  020069.sst  MANIFEST-020081
>> >>>> 018406.sst  018591.sst  018995.sst  019126.sst  019346.sst  019467.sst
>> >>>>    019672.sst  019762.sst  019879.sst  019915.sst  019958.sst  019994.sst
>> >>>>    020006.sst  020039.sst  020058.sst  020070.sst  OPTIONS-020081
>> >>>> 018407.sst  018727.sst  018996.sst  019127.sst  019370.sst  019468.sst
>> >>>>    019673.sst  019763.sst  019880.sst  019916.sst  019959.sst  019995.sst
>> >>>>    020007.sst  020040.sst  020059.sst  020071.sst  OPTIONS-020084
>> >>>> 018443.sst  018728.sst  019073.sst  019128.sst  019371.sst  019469.sst
>> >>>>    019674.sst  019764.sst  019881.sst  019917.sst  019960.sst  019996.sst
>> >>>>    020008.sst  020041.sst  020060.sst  020072.sst
>> >>>>
>> >>>> db.slow:
>> >>>>
>> >>>> db.wal:
>> >>>> 020082.log
>> >>>> [root@s3db10 export-bluefs]# du -hs
>> >>>> 12G .
>> >>>> [root@s3db10 export-bluefs]# cat db/CURRENT
>> >>>> �g�U
>> >>>>      uN�[�+p[root@s3db10 export-bluefs]#
>> >>>>
>> >>>> On Mon, 17 May 2021 at 13:45, Igor Fedotov <ifedotov@xxxxxxx> wrote:
>> >>>>> You might want to check the file structure of the new DB using
>> >>>>> ceph-bluestore-tool's bluefs-export command:
>> >>>>>
>> >>>>> ceph-bluestore-tool --path <osd-path> --command bluefs-export --out <target-dir>
>> >>>>>
>> >>>>> <target-dir> needs to have enough free space to fit the DB data.
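>> >>>>>
>> >>>>> For the OSD in this thread that would be something like this (the
>> >>>>> output directory name is just an example):
>> >>>>>
>> >>>>> ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --command bluefs-export --out /root/export-bluefs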
>> >>>>>
>> >>>>> Once exported, does <target-dir> contain a valid BlueFS directory
>> >>>>> structure: multiple .sst files, CURRENT and IDENTITY files, etc.?
>> >>>>>
>> >>>>> If so, please check and share the content of the
>> >>>>> <target-dir>/db/CURRENT file.
>> >>>>>
>> >>>>>
>> >>>>> Thanks,
>> >>>>>
>> >>>>> Igor
>> >>>>>
>> >>>>> On 5/17/2021 1:32 PM, Boris Behrens wrote:
>> >>>>>> Hi Igor,
>> >>>>>> I posted it on pastebin: https://pastebin.com/Ze9EuCMD
>> >>>>>>
>> >>>>>> Cheers
>> >>>>>>     Boris
>> >>>>>>
>> >>>>>> On Mon, 17 May 2021 at 12:22, Igor Fedotov <ifedotov@xxxxxxx> wrote:
>> >>>>>>
>> >>>>>>> Hi Boris,
>> >>>>>>>
>> >>>>>>> could you please share the full OSD startup log and a file listing
>> >>>>>>> of '/var/lib/ceph/osd/ceph-68'?
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Thanks,
>> >>>>>>>
>> >>>>>>> Igor
>> >>>>>>>
>> >>>>>>> On 5/17/2021 1:09 PM, Boris Behrens wrote:
>> >>>>>>>> Hi,
>> >>>>>>>> sorry for replying to this old thread:
>> >>>>>>>>
>> >>>>>>>> I tried to add a block.db to an OSD, but now the OSD cannot start,
>> >>>>>>>> failing with the error:
>> >>>>>>>> Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -7> 2021-05-17 09:50:38.362 7fc48ec94a80 -1 rocksdb: Corruption: CURRENT file does not end with newline
>> >>>>>>>> Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -6> 2021-05-17 09:50:38.362 7fc48ec94a80 -1 bluestore(/var/lib/ceph/osd/ceph-68) _open_db erroring opening db:
>> >>>>>>>> Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -1> 2021-05-17 09:50:38.866 7fc48ec94a80 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueStore.cc: In function 'int BlueStore::_upgrade_super()' thread 7fc48ec94a80 time 2021-05-17 09:50:38.865204
>> >>>>>>>> Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.21/rpm/el7/BUILD/ceph-14.2.21/src/os/bluestore/BlueStore.cc: 10647: FAILED ceph_assert(ondisk_format > 0)
>> >>>>>>>>
>> >>>>>>>> I tried to run an fsck/repair on the disk:
>> >>>>>>>> [root@s3db10 osd]# ceph-bluestore-tool --path ceph-68  repair
>> >>>>>>>> 2021-05-17 10:05:25.695 7f714dea3ec0 -1 rocksdb: Corruption: CURRENT file does not end with newline
>> >>>>>>>> 2021-05-17 10:05:25.695 7f714dea3ec0 -1 bluestore(ceph-68) _open_db erroring opening db:
>> >>>>>>>> error from fsck: (5) Input/output error
>> >>>>>>>> [root@s3db10 osd]# ceph-bluestore-tool --path ceph-68  fsck
>> >>>>>>>> 2021-05-17 10:05:35.012 7fb8f22e6ec0 -1 rocksdb: Corruption: CURRENT file does not end with newline
>> >>>>>>>> 2021-05-17 10:05:35.012 7fb8f22e6ec0 -1 bluestore(ceph-68) _open_db erroring opening db:
>> >>>>>>>> error from fsck: (5) Input/output error
>> >>>>>>>>
>> >>>>>>>> These are the steps I did to add the disk:
>> >>>>>>>> $ CEPH_ARGS="--bluestore-block-db-size 53687091200 --bluestore_block_db_create=true" ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-68 --dev-target /dev/sdj1
>> >>>>>>>> $ chown -h ceph:ceph /var/lib/ceph/osd/ceph-68/block.db
>> >>>>>>>> $ lvchange --addtag ceph.db_device=/dev/sdj1 /dev/ceph-3bbfd168-2a54-4593-a037-80d0d7e97afd/osd-block-aaeaea54-eb6a-480c-b2fd-d938e336c0f6
>> >>>>>>>> $ lvchange --addtag ceph.db_uuid=463dd37c-fd49-4ccb-849f-c5827d3d9df2 /dev/ceph-3bbfd168-2a54-4593-a037-80d0d7e97afd/osd-block-aaeaea54-eb6a-480c-b2fd-d938e336c0f6
>> >>>>>>>> $ ceph-volume lvm activate --all
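>> >>>>>>>>
>> >>>>>>>> (To verify the tags were applied, standard LVM tooling should list
>> >>>>>>>> them, e.g.:
>> >>>>>>>> $ lvs -o lv_name,lv_tags ceph-3bbfd168-2a54-4593-a037-80d0d7e97afd
>> >>>>>>>> )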
>> >>>>>>>>
>> >>>>>>>> The UUIDs
>> >>>>>>>> Later I tried this:
>> >>>>>>>> $ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --devs-source /var/lib/ceph/osd/ceph-68/block --dev-target /var/lib/ceph/osd/ceph-68/block.db bluefs-bdev-migrate
>> >>>>>>>>
>> >>>>>>>> Any ideas how I can get the rocksdb fixed?
>> >>>>>>>
>> >>
>> >>
>> >
>>
>
>
>


-- 
The self-help group "UTF-8 problems" will meet in the big hall this
time, as an exception.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



