Re: Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

Hey mgrzybowski!

Never seen that before, but perhaps some omaps have been improperly converted to the new format and can't be read any more...

I'll take a more detailed look at what's happening during that load_pgs call and what exact information is missing.
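For context (from memory, so please double-check against the 16.2.6 source): peek_map_epoch() reads two omap keys from each PG's pgmeta object, "_infover" and "_epoch", and the failed assert means at least one of them came back missing after the omap conversion. If you want to poke at a crashed OSD offline, something along these lines should show whether those keys are still present (osd.2 and pg 20.0 below are just placeholders, adjust to your setup):

~# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --pgid 20.0 --op list
# the pgmeta object is the entry whose "oid" is empty; feed its JSON back in to dump the omap keys:
~# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 '<pgmeta object JSON from the list output>' list-omap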

Meanwhile, could you please set debug_bluestore to 20 and collect an OSD startup log?
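Something like either of the following should do; osd.2 is again just an example, and the extra debug can be reverted afterwards:

~# ceph config set osd.2 debug_bluestore 20/20
~# systemctl restart ceph-osd@2    # then grab /var/log/ceph/ceph-osd.2.log

or, if the OSD dies before it picks up the config from the mons, run it in the foreground with the option on the command line:

~# ceph-osd -d --id 2 --setuser ceph --setgroup ceph --debug_bluestore 20 2>&1 | tee /tmp/ceph-osd.2.startup.log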


Thanks,

Igor

On 10/21/2021 12:56 AM, mgrzybowski wrote:
Hi,
  Recently I performed upgrades on the single-node CephFS server I have.

# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ecpoolk3m1osd ecpoolk5m1osd ecpoolk4m2osd ]
~# ceph osd pool ls detail
pool 20 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 10674 lfor 0/0/5088 flags hashpspool stripe_width 0 application cephfs
pool 21 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 10674 lfor 0/0/5179 flags hashpspool stripe_width 0 application cephfs
pool 22 'ecpoolk3m1osd' erasure profile myprofilek3m1osd size 4 min_size 3 crush_rule 3 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 10674 lfor 0/0/1442 flags hashpspool,ec_overwrites stripe_width 12288 compression_algorithm zstd compression_mode aggressive application cephfs
pool 23 'ecpoolk5m1osd' erasure profile myprofilek5m1osd size 6 min_size 5 crush_rule 5 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 12517 lfor 0/0/7892 flags hashpspool,ec_overwrites stripe_width 20480 compression_algorithm zstd compression_mode aggressive application cephfs
pool 24 'ecpoolk4m2osd' erasure profile myprofilek4m2osd size 6 min_size 5 crush_rule 6 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 10674 flags hashpspool,ec_overwrites stripe_width 16384 compression_algorithm zstd compression_mode aggressive application cephfs
pool 25 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 11033 lfor 0/0/10991 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth


I started this upgrade from Ubuntu 16.04 and Luminous (there were upgrades in the past, and some OSDs may have first been started under Kraken):
- first I upgraded Ceph to Nautilus; all seemed to go well and according to the docs, with no warnings in the status
- then I did a "do-release-upgrade" of Ubuntu to 18.04 (the Ceph packages were not touched by that upgrade)
- then I did a "do-release-upgrade" of Ubuntu to 20.04 (this upgrade bumped the Ceph packages to 15.2.1-0ubuntu1; before each do-release-upgrade I removed /etc/ceph/ceph.conf, so at least the mon daemon stayed down, and the OSDs should not have started since the simple volumes are encrypted)
- next I upgraded the Ceph packages to 16.2.6-1focal and started the daemons.

All seemed to work well; the only thing left was this warning:

10 OSD(s) reporting legacy (not per-pool) BlueStore omap usage stats

I found on the list that it is recommended to set:

ceph config set osd bluestore_fsck_quick_fix_on_mount true

and do a rolling restart of the OSDs. After the first restart+fsck I got a crash on the OSD (and on the MDS too):

    -1> 2021-10-14T22:02:45.877+0200 7f7f080a4f00 -1 /build/ceph-16.2.6/src/osd/PG.cc: In function 'static int PG::peek_map_epoch(ObjectStore*, spg_t, epoch_t*)' thread 7f7f080a4f00 time 2021-10-14T22:02:45.878154+0200
/build/ceph-16.2.6/src/osd/PG.cc: 1009: FAILED ceph_assert(values.size() == 2)

 ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55e29cd0ce61]
 2: /usr/bin/ceph-osd(+0xac6069) [0x55e29cd0d069]
 3: (PG::peek_map_epoch(ObjectStore*, spg_t, unsigned int*)+0xa17) [0x55e29ce97057]
 4: (OSD::load_pgs()+0x6b4) [0x55e29ce07ec4]
 5: (OSD::init()+0x2b4e) [0x55e29ce14a6e]
 6: main()
 7: __libc_start_main()
 8: _start()


The same happened on the next restarted+fsck'd OSD:

    -1> 2021-10-17T22:47:49.291+0200 7f98877bff00 -1 /build/ceph-16.2.6/src/osd/PG.cc: In function 'static int PG::peek_map_epoch(ObjectStore*, spg_t, epoch_t*)' thread 7f98877bff00 time 2021-10-17T22:47:49.292912+0200
/build/ceph-16.2.6/src/osd/PG.cc: 1009: FAILED ceph_assert(values.size() == 2)

 ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x560e09af7e61]
 2: /usr/bin/ceph-osd(+0xac6069) [0x560e09af8069]
 3: (PG::peek_map_epoch(ObjectStore*, spg_t, unsigned int*)+0xa17) [0x560e09c82057]
 4: (OSD::load_pgs()+0x6b4) [0x560e09bf2ec4]
 5: (OSD::init()+0x2b4e) [0x560e09bffa6e]
 6: main()
 7: __libc_start_main()
 8: _start()


Once crashed, the OSDs could not be brought back online; they crash again every time I try to start them.
A deep fsck did not find anything:

~# ceph-bluestore-tool --command fsck  --deep yes --path /var/lib/ceph/osd/ceph-2
fsck success


Any ideas what could be causing these crashes, and is it possible to somehow bring the crashed OSDs back online?


--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



