Re: OSD crash with "no available blob id" and check for Zombie blobs


 



The other fixes landed in Nautilus and later releases.
I suggest you upgrade to Nautilus as soon as possible; it is a very stable release (14.2.22).
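
For reference, a rough sketch of the checks around the OSD restart step of such an upgrade (monitors and managers go first, per the Nautilus upgrade notes; <osd_id> is a placeholder):

  # Confirm which versions the daemons currently run
  ceph versions
  # Avoid rebalancing while OSDs restart onto the new packages
  ceph osd set noout
  # After installing the new packages on a host, restart its OSDs
  systemctl restart ceph-osd@<osd_id>
  # Once every daemon reports 14.2.22, re-enable rebalancing
  ceph osd unset noout
  ceph osd require-osd-release nautilus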


k
Sent from my iPhone

> On 14 Jun 2022, at 12:13, tao song <alansong1023@xxxxxxxxx> wrote:
> 
> 
> Thanks, we have backported some PRs to 12.2.12, but the problem remains. Are there any other fixes?
> e.g.:
>> os/bluestore: apply garbage collection against excessive blob count growth
>>  https://github.com/ceph/ceph/pull/28229 
>> AND the ceph-bluestore-tool fsck/repair
>> https://github.com/ceph/ceph/pull/38050 
> 
> 
> Konstantin Shalygin <k0ste@xxxxxxxx> wrote on Tue, 14 Jun 2022 at 15:37:
>> Hi,
>> 
>> Many of the fixes for "zombie blobs" landed in the latest Nautilus release.
>> I suggest upgrading to the latest Nautilus version.
>> 
>> 
>> k
>> 
>> > On 14 Jun 2022, at 10:23, tao song <alansong1023@xxxxxxxxx> wrote:
>> > 
>> > I have an old 12.2.12 cluster running BlueStore, using iSCSI + RBD on EC
>> > pools (k:m=2:1) with the ec_overwrites flag. Multiple OSD crashes occurred
>> > due to the assert (0 == "no available blob id").
>> > The problem occurs periodically when the RBD volume is cyclically
>> > overwritten.
>> > 
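>> > (For reference, a minimal sketch of how such a pool is typically wired up
>> > for RBD; the pool and image names below are placeholders, not from this
>> > cluster:
>> >   # allow partial overwrites on the EC data pool
>> >   ceph osd pool set <ec_pool> allow_ec_overwrites true
>> >   # keep image metadata in a replicated pool, data in the EC pool
>> >   rbd create <replicated_pool>/<image> --size 100G --data-pool <ec_pool>
>> > )
>> > 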
>> > 2022-05-24 22:08:19.950550 7fcb41894700  1 osd.171 pg_epoch: 47676
>> >> pg[4.1467s2( v 44365'8455207 (44045'8453660,44365'8455207]
>> >> local-lis/les=44415/44416 n=21866 ec=16123/349 lis/c 47665/44415 les/c/f
>> >> 47666/44416/4714 47676/47676/39511)
>> >> [115,16,171]/[115,2147483647,2147483647]p115(0) r=-1 lpr=47676
>> >> pi=[44415,47676)/2 crt=44260'8455206 lcod 0'0 remapped NOTIFY mbc={}]
>> >> state<Start>: transitioning to Stray
>> >> 2022-05-24 22:08:20.834007 7fcb41894700  1 osd.171 pg_epoch: 47677
>> >> pg[4.1467s2( v 44365'8455207 (44045'8453660,44365'8455207]
>> >> local-lis/les=44415/44416 n=21866 ec=16123/349 lis/c 47665/44415 les/c/f
>> >> 47666/44416/4714 47676/47677/39511) [115,16,171]p115(0) r=2 lpr=47677
>> >> pi=[44415,47677)/2 crt=44260'8455206 lcod 0'0 unknown NOTIFY mbc={}]
>> >> start_peering_interval up [115,16,171] -> [115,16,171], acting
>> >> [115,2147483647,2147483647] -> [115,16,171], acting_primary 115(0) -> 115,
>> >> up_primary 115(0) -> 115, role -1 -> 2, features acting 4611087853746454523
>> >> upacting 4611087853746454523
>> >> 2022-05-24 22:08:20.834073 7fcb41894700  1 osd.171 pg_epoch: 47677
>> >> pg[4.1467s2( v 44365'8455207 (44045'8453660,44365'8455207]
>> >> local-lis/les=44415/44416 n=21866 ec=16123/349 lis/c 47665/44415 les/c/f
>> >> 47666/44416/4714 47676/47677/39511) [115,16,171]p115(0) r=2 lpr=47677
>> >> pi=[44415,47677)/2 crt=44260'8455206 lcod 0'0 unknown NOTIFY mbc={}]
>> >> state<Start>: transitioning to Stray
>> >> 2022-05-24 22:08:22.097055 7fcb3a085700 -1
>> >> /ceph-12.2.12/src/os/bluestore/BlueStore.cc: In function 'bid_t
>> >> BlueStore::ExtentMap::allocate_spanning_blob_id()' thread 7fcb3a085700 time
>> >> 2022-05-24 22:08:22.091806
>> >> /ceph-12.2.12/src/os/bluestore/BlueStore.cc: 2083: FAILED assert(0 == "no
>> >> available blob id")
>> >> 
>> >> ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous
>> >> (stable)
>> >> 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>> >> const*)+0x110) [0x560e41dbd520]
>> >> 2: (()+0x8fce4e) [0x560e41c15e4e]
>> >> 3: (BlueStore::ExtentMap::reshard(KeyValueDB*,
>> >> std::shared_ptr<KeyValueDB::TransactionImpl>)+0x13da) [0x560e41c6fc6a]
>> >> 4: (BlueStore::_txc_write_nodes(BlueStore::TransContext*,
>> >> std::shared_ptr<KeyValueDB::TransactionImpl>)+0x1ab) [0x560e41c7131b]
>> >> 5: (BlueStore::queue_transactions(ObjectStore::Sequencer*,
>> >> std::vector<ObjectStore::Transaction,
>> >> std::allocator<ObjectStore::Transaction> >&,
>> >> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3fd)
>> >> [0x560e41c8cc4d]
>> >> 6:
>> >> (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction,
>> >> std::allocator<ObjectStore::Transaction> >&,
>> >> boost::intrusive_ptr<OpRequest>)+0x65) [0x560e419efac5]
>> >> 7: (ECBackend::handle_sub_write(pg_shard_t,
>> >> boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&,
>> >> Context*)+0x631) [0x560e41b18331]
>> >> 8: (ECBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x349)
>> >> [0x560e41b29ba9]
>> >> 9: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50)
>> >> [0x560e41a255f0]
>> >> 10: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
>> >> ThreadPool::TPHandle&)+0x59c) [0x560e4198f97c]
>> >> 11: (OSD::dequeue_op(boost::intrusive_ptr<PG>,
>> >> boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3f9)
>> >> [0x560e4180af59]
>> >> 12: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
>> >> const&)+0x57) [0x560e41a9ac27]
>> >> 13: (OSD::ShardedOpWQ::_process(unsigned int,
>> >> ceph::heartbeat_handle_d*)+0xfce) [0x560e4183a20e]
>> >> 14: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x83f)
>> >> [0x560e41dc304f]
>> >> 15: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x560e41dc4fe0]
>> >> 16: (()+0x7dd5) [0x7fcb5913fdd5]
>> >> 17: (clone()+0x6d) [0x7fcb5822fead]
>> >> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
>> >> to interpret this.
>> >> 
>> > 
>> > Some OSDs would not restart, hitting the same "no available blob id"
>> > assertion. We adjusted the following parameters to ensure that the OSDs
>> > could be started:
>> >  bluestore_extent_map_shard_target_size=2000 (default 500)
>> >  bluestore_extent_map_shard_target_size_slop=0.300000 (default 0.200000)
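>> > (For reference, one way to persist these overrides, assuming they are set
>> > in ceph.conf and picked up when the OSD is restarted:
>> >   # /etc/ceph/ceph.conf
>> >   [osd]
>> >   bluestore_extent_map_shard_target_size = 2000
>> >   bluestore_extent_map_shard_target_size_slop = 0.3
>> >   # then restart the affected OSD, e.g.
>> >   # systemctl restart ceph-osd@<osd_id>
>> > )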
>> > 
>> > 
>> >> We found several related bugs :
>> >> https://tracker.ceph.com/issues/48216
>> >> https://tracker.ceph.com/issues/38272
>> > 
>> > The PR :
>> > 
>> > os/bluestore: apply garbage collection against excessive blob count growth
>> > 
>> > https://github.com/ceph/ceph/pull/28229
>> >> We have backported the PR to 12.2.12, but it didn't solve the problem.
>> > 
>> > The workaround that works is to fsck/repair the stopped OSD:
>> >> ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-<osd_id> --command
>> >> repair
>> >> 
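>> > (For reference, the full cycle per affected OSD looks roughly like this,
>> > assuming a systemd-managed OSD; run fsck first to confirm the zombie blobs:
>> >   systemctl stop ceph-osd@<osd_id>
>> >   ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-<osd_id> --command fsck
>> >   ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-<osd_id> --command repair
>> >   systemctl start ceph-osd@<osd_id>
>> > )
>> > 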
>> > But it's not a long-term solution.
>> >> I have seen a PR merged in 2019 here:
>> >> https://github.com/ceph/ceph/pull/28229
>> > 
>> > The fsck log:
>> > 
>> >> 2022-06-11 14:33:00.524108 7ff94ce7eec0 -1
>> >> bluestore(/var/lib/ceph/osd/ceph-162/) fsck error:
>> >> 2#4:fbdd648a:::rbd_data.3.3404c86b8b4567.0000000001977896:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x5567bd482690 spanning 7
>> >> blob([!~40000] csum crc32c/0x1000) use_tracker(0x4*0x10000 0x[0,0,0,0])
>> >> SharedBlob(0x5567bd482150 sbid 0x0))
>> >> 2022-06-11 14:33:00.620716 7ff94ce7eec0 -1
>> >> bluestore(/var/lib/ceph/osd/ceph-162/) fsck error:
>> >> 2#4:fbdda2c7:::rbd_data.3.3404d16b8b4567.0000000000dd0d7c:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x5567b1790230 spanning 0
>> >> blob([!~50000] csum crc32c/0x1000) use_tracker(0x5*0x10000 0x[0,0,0,0,0])
>> >> SharedBlob(0x5567b17902a0 sbid 0x0))
>> >> 2022-06-11 14:33:00.659399 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)   fsck error:
>> >> 2#4:fbddcdc9:::rbd_data.3.3404d76b8b4567.00000000005c3915:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x5568059181c0 spanning 0
>> >> blob([!~20000] csum crc32c/0x1000) use_tracker(0x2*0x10000 0x[0,0])
>> >> SharedBlob(0x556805919ce0 sbid 0x0))
>> >> 2022-06-11 14:33:00.666271 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)   fsck error:
>> >> 2#4:fbddd3fa:::rbd_data.3.3404d16b8b4567.00000000008d9fcf:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x55675655ea80 spanning 1
>> >> blob([!~20000] csum crc32c/0x1000) use_tracker(0x2*0x10000 0x[0,0])
>> >> SharedBlob(0x55675655eaf0 sbid 0x0))
>> >> 2022-06-11 14:33:00.830832 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)  fsck error:
>> >> 2#4:fbde5cdd:::rbd_data.3.3404d76b8b4567.0000000000aff911:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x556810246690 spanning 0
>> >> blob([!~20000] csum crc32c/0x1000) use_tracker(0x2*0x10000 0x[0,0])
>> >> SharedBlob(0x5567aff68f50 sbid 0x0))
>> >> 2022-06-11 14:33:00.875893 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)  fsck error:
>> >> 2#4:fbde8f23:::rbd_data.3.3404c86b8b4567.000000000151a7dc:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x5567f2a7b650 spanning 8
>> >> blob([!~50000] csum crc32c/0x1000) use_tracker(0x5*0x10000 0x[0,0,0,0,0])
>> >> SharedBlob(0x5567f2a7b810 sbid 0x0))
>> >> 2022-06-11 14:33:00.894370 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)   fsck error:
>> >> 2#4:fbdea5c3:::rbd_data.3.3404d16b8b4567.0000000000dcecaf:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x5567b5faa5b0 spanning 0
>> >> blob([!~50000] csum crc32c/0x1000) use_tracker(0x5*0x10000 0x[0,0,0,0,0])
>> >> SharedBlob(0x5567b5fabf10 sbid 0x0))
>> >> 2022-06-11 14:33:00.930450 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)   fsck error:
>> >> 2#4:fbdebbe9:::rbd_data.3.3404c86b8b4567.0000000000be844c:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x556737a395e0 spanning 8
>> >> blob([!~20000] csum crc32c/0x1000) use_tracker(0x2*0x10000 0x[0,0])
>> >> SharedBlob(0x556737b7dc70 sbid 0x0))
>> >> 2022-06-11 14:33:01.132212 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)   fsck error:
>> >> 2#4:fbdf864f:::rbd_data.3.3404c86b8b4567.0000000001d29b47:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x55673c858620 spanning 7
>> >> blob([!~40000] csum crc32c/0x1000) use_tracker(0x4*0x10000 0x[0,0,0,0])
>> >> SharedBlob(0x55673c354a80 sbid 0x0))
>> >> 2022-06-11 14:33:01.157017 7ff94ce7eec0 -1 bluestore
>> >> (/var/lib/ceph/osd/ceph-162/)   fsck error:
>> >> 2#4:fbdfa612:::rbd_data.3.3404d16b8b4567.0000000000a73d63:head# - 1 zombie
>> >> spanning blob(s) found, the first one: Blob(0x55673f6fcf5
>> > 
>> > 
>> > What is the root cause of zombie blobs? How can we avoid this problem?
>> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



