Re: allocate_bluefs_freespace failed to allocate


 



Yes, I don't have a separate DB/WAL. These SSDs are only used by the RGW index.
The command "--command bluefs-bdev-sizes" does not work while the OSD is up
and running.
I need a new OSD failure to get useful output. I will check when I get one.

I picked an OSD from my test environment to check the command output, and it
looks almost the same as "ceph osd df tree":

ID  CLASS WEIGHT     REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS TYPE NAME
14   ssd    0.87299  1.00000 894 GiB  29 GiB 9.8 GiB  19 GiB 485 MiB 865 GiB  3.25 0.05  87     up         osd.14

inferring bluefs devices from bluestore path
1 : device size 0xdf90000000 : own 0x[6b4f5c0000~8f1480000] = 0x8f1480000 :
using 0x4cbe40000 (19 GiB) : bluestore has 0xd3f43e0000 (848 GiB) available
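
For what it's worth, the hex figures in that output line up with the
human-readable sizes; a quick sanity check (plain Python, values copied from
the output above):

```python
# Values copied from the bluefs-bdev-sizes output above.
device_size = 0xdf90000000      # raw device size reported for device 1
bluefs_using = 0x4cbe40000      # space BlueFS is currently using
bluestore_avail = 0xd3f43e0000  # space BlueStore reports as available

GiB = 2**30
print(round(device_size / GiB))      # ~894 GiB, matching the SIZE column
print(round(bluefs_using / GiB))     # ~19 GiB, matching the OMAP column
print(round(bluestore_avail / GiB))  # ~848 GiB available
```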

In my production environment I have large OMAPs, but the AVAIL space is also
large enough to fit anything.

SIZE	= 894 GiB
RAW USE	= 214 GiB
DATA	= 95  GiB
OMAP	= 118 GiB
META	= 839 MiB
AVAIL	= 680 GiB
%USE	= 23.92



In case you couldn't check the OSD log, I'm including it below:

-78> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.write_buffer_size: 67108864
   -77> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_write_buffer_number: 32
   -76> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compression: NoCompression
   -75> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
   Options.bottommost_compression: Disabled
   -74> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.prefix_extractor: nullptr
   -73> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.memtable_insert_with_hint_prefix_extractor: nullptr
   -72> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.num_levels: 7
   -71> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.min_write_buffer_number_to_merge: 2
   -70> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_write_buffer_number_to_maintain: 0
   -69> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.bottommost_compression_opts.window_bits: -14
   -68> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
   Options.bottommost_compression_opts.level: 32767
   -67> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.bottommost_compression_opts.strategy: 0
   -66> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.bottommost_compression_opts.max_dict_bytes: 0
   -65> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.bottommost_compression_opts.zstd_max_train_bytes: 0
   -64> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
   Options.bottommost_compression_opts.enabled: false
   -63> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compression_opts.window_bits: -14
   -62> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
   Options.compression_opts.level: 32767
   -61> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compression_opts.strategy: 0
   -60> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compression_opts.max_dict_bytes: 0
   -59> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compression_opts.zstd_max_train_bytes: 0
   -58> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
   Options.compression_opts.enabled: false
   -57> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.level0_file_num_compaction_trigger: 8
   -56> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.level0_slowdown_writes_trigger: 32
   -55> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.level0_stop_writes_trigger: 64
   -54> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
    Options.target_file_size_base: 67108864
   -53> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.target_file_size_multiplier: 1
   -52> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
 Options.max_bytes_for_level_base: 536870912
   -51> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.level_compaction_dynamic_level_bytes: 0
   -50> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier: 10.000000
   -49> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[0]: 1
   -48> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[1]: 1
   -47> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[2]: 1
   -46> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[3]: 1
   -45> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[4]: 1
   -44> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[5]: 1
   -43> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_bytes_for_level_multiplier_addtl[6]: 1
   -42> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.max_sequential_skip_in_iterations: 8
   -41> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
     Options.max_compaction_bytes: 1677721600
   -40> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
         Options.arena_block_size: 8388608
   -39> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.soft_pending_compaction_bytes_limit: 68719476736
   -38> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.hard_pending_compaction_bytes_limit: 274877906944
   -37> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.rate_limit_delay_max_milliseconds: 100
   -36> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
 Options.disable_auto_compactions: 0
   -35> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
         Options.compaction_style: kCompactionStyleLevel
   -34> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
           Options.compaction_pri: kMinOverlappingRatio
   -33> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_universal.size_ratio: 1
   -32> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_universal.min_merge_width: 2
   -31> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_universal.max_merge_width: 4294967295
   -30> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_universal.max_size_amplification_percent:
200
   -29> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_universal.compression_size_percent: -1
   -28> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_universal.stop_style:
kCompactionStopStyleTotalSize
   -27> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_fifo.max_table_files_size: 1073741824
   -26> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.compaction_options_fifo.allow_compaction: 0
   -25> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
    Options.table_properties_collectors:
   -24> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
    Options.inplace_update_support: 0
   -23> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
  Options.inplace_update_num_locks: 10000
   -22> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.memtable_prefix_bloom_size_ratio: 0.000000
   -21> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.memtable_whole_key_filtering: 0
   -20> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
Options.memtable_huge_page_size: 0
   -19> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
            Options.bloom_locality: 0
   -18> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
     Options.max_successive_merges: 0
   -17> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
 Options.optimize_filters_for_hits: 0
   -16> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
 Options.paranoid_file_checks: 0
   -15> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
 Options.force_consistency_checks: 0
   -14> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
 Options.report_bg_io_stats: 0
   -13> 2021-11-06 19:01:10.454 7fa799989c40  4 rocksdb:
                Options.ttl: 0
   -12> 2021-11-06 19:01:10.474 7fa799989c40  4 rocksdb:
[db/version_set.cc:3757] Recovered from manifest
file:db/MANIFEST-039638 succeeded,manifest_file_number is 39638,
next_file_number is 39691, last_sequence is 2225092576, log_number is
39688,prev_log_number is 0,max_column_family is
0,min_log_number_to_keep is 0

   -11> 2021-11-06 19:01:10.474 7fa799989c40  4 rocksdb:
[db/version_set.cc:3766] Column family [default] (ID 0), log number is
39688

   -10> 2021-11-06 19:01:10.474 7fa799989c40  4 rocksdb: EVENT_LOG_v1
{"time_micros": 1636214470484248, "job": 1, "event":
"recovery_started", "log_files": [39685, 39687, 39688, 39691]}
    -9> 2021-11-06 19:01:10.474 7fa799989c40  4 rocksdb:
[db/db_impl_open.cc:583] Recovering log #39685 mode 0
    -8> 2021-11-06 19:01:10.514 7fa799989c40  4 rocksdb:
[db/db_impl_open.cc:583] Recovering log #39687 mode 0
    -7> 2021-11-06 19:01:10.554 7fa799989c40  4 rocksdb:
[db/db_impl_open.cc:583] Recovering log #39688 mode 0
    -6> 2021-11-06 19:01:10.854 7fa799989c40  1 bluefs _allocate
failed to allocate 0xf4f04 on bdev 1, free 0xb0000; fallback to bdev 2
    -5> 2021-11-06 19:01:10.854 7fa799989c40  1 bluefs _allocate
unable to allocate 0xf4f04 on bdev 2, free 0xffffffffffffffff;
fallback to slow device expander
    -4> 2021-11-06 19:01:10.854 7fa799989c40 -1
bluestore(/var/lib/ceph/osd/ceph-218) allocate_bluefs_freespace failed
to allocate on 0x80000000 min_size 0x100000 > allocated total 0x0
bluefs_shared_alloc_size 0x10000 allocated 0x0 available 0x a497aab000
    -3> 2021-11-06 19:01:10.854 7fa799989c40 -1 bluefs _allocate
failed to expand slow device to fit +0xf4f04
    -2> 2021-11-06 19:01:10.854 7fa799989c40 -1 bluefs _flush_range
allocated: 0x1800000 offset: 0x17f54f8 length: 0xffa0c
    -1> 2021-11-06 19:01:10.864 7fa799989c40 -1
/home/service/build/bhfs-14.2.16/src/ceph-14.2.16/src/os/bluestore/BlueFS.cc:
In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t,
uint64_t)' thread 7fa799989c40 time 2021-11-06 19:01:10.862375
/home/service/build/bhfs-14.2.16/src/ceph-14.2.16/src/os/bluestore/BlueFS.cc:
2351: ceph_abort_msg("bluefs enospc")

 ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c)
nautilus (stable)
 1: (ceph::__ceph_abort(char const*, int, char const*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&)+0xdf) [0x5558ad4de604]
 2: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned
long)+0x1a36) [0x5558adb00de6]
 3: (BlueFS::_flush(BlueFS::FileWriter*, bool)+0x11c) [0x5558adb0143c]
 4: (BlueRocksWritableFile::Flush()+0x3d) [0x5558adb2892d]
 5: (rocksdb::WritableFileWriter::Flush()+0x32f) [0x5558ae1105cf]
 6: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x657)
[0x5558ae111567]
 7: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice
const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0xdd)
[0x5558ae19082d]
 8: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice
const&, rocksdb::BlockHandle*, bool)+0x583) [0x5558ae1915a3]
 9: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*,
rocksdb::BlockHandle*, bool)+0x4b) [0x5558ae191a7b]
 10: (rocksdb::BlockBasedTableBuilder::Flush()+0x9d) [0x5558ae191b4d]
 11: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&,
rocksdb::Slice const&)+0x396) [0x5558ae195206]
 12: (rocksdb::BuildTable(std::__cxx11::basic_string<char,
std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*,
rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&,
rocksdb::EnvOptions const&, rocksdb::TableCache*,
rocksdb::InternalIteratorBase<rocksdb::Slice>*,
std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator,
std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >,
std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator,
std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >,
rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&,
std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory,
std::default_delete<rocksdb::IntTblPropCollectorFactory> >,
std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory,
std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*,
unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, std::vector<unsigned long,
std::allocator<unsigned long> >, unsigned long,
rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long,
rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*,
rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int,
rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned
long, unsigned long, rocksdb::Env::WriteLifeTimeHint)+0xc2a)
[0x5558ae13e57a]
 13: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int,
rocksdb::ColumnFamilyData*, rocksdb::MemTable*,
rocksdb::VersionEdit*)+0xc50) [0x5558adfc51a0]
 14: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long,
std::allocator<unsigned long> > const&, unsigned long*, bool)+0xea2)
[0x5558adfc69e2]
 15: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool,
bool)+0xa80) [0x5558adfc8300]
 16: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&,
std::vector<rocksdb::ColumnFamilyDescriptor,
std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
std::vector<rocksdb::ColumnFamilyHandle*,
std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool,
bool)+0xb05) [0x5558adfc2725]
 17: (rocksdb::DB::Open(rocksdb::DBOptions const&,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&,
std::vector<rocksdb::ColumnFamilyDescriptor,
std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
std::vector<rocksdb::ColumnFamilyHandle*,
std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x24)
[0x5558adfc3c84]
 18: (RocksDBStore::do_open(std::ostream&, bool, bool,
std::vector<KeyValueDB::ColumnFamily,
std::allocator<KeyValueDB::ColumnFamily> > const*)+0x141f)
[0x5558adf4ba0f]
 19: (BlueStore::_open_db(bool, bool, bool)+0x18ff) [0x5558ada0fe0f]
 20: (BlueStore::_open_db_and_around(bool)+0x18b) [0x5558ada283eb]
 21: (BlueStore::_mount(bool, bool)+0x5da) [0x5558ada693ca]
 22: (OSD::init()+0x33f) [0x5558ad5f045f]
 23: (main()+0x391e) [0x5558ad549fde]
 24: (__libc_start_main()+0xf3) [0x7fa799c06223]
 25: (_start()+0x2e) [0x5558ad57ed7e]

     0> 2021-11-06 19:01:10.874 7fa799989c40 -1 *** Caught signal (Aborted) **
 in thread 7fa799989c40 thread_name:ceph-osd

 ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c)
nautilus (stable)
 1: (()+0x123c0) [0x7fa79a1083c0]
 2: (gsignal()+0x10f) [0x7fa799c19d7f]
 3: (abort()+0x125) [0x7fa799c04672]
 4: (ceph::__ceph_abort(char const*, int, char const*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&)+0x1b0) [0x5558ad4de6d5]
 5: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned
long)+0x1a36) [0x5558adb00de6]
 6: (BlueFS::_flush(BlueFS::FileWriter*, bool)+0x11c) [0x5558adb0143c]
 7: (BlueRocksWritableFile::Flush()+0x3d) [0x5558adb2892d]
 8: (rocksdb::WritableFileWriter::Flush()+0x32f) [0x5558ae1105cf]
 9: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x657)
[0x5558ae111567]
 10: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice
const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0xdd)
[0x5558ae19082d]
 11: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice
const&, rocksdb::BlockHandle*, bool)+0x583) [0x5558ae1915a3]
 12: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*,
rocksdb::BlockHandle*, bool)+0x4b) [0x5558ae191a7b]
 13: (rocksdb::BlockBasedTableBuilder::Flush()+0x9d) [0x5558ae191b4d]
 14: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&,
rocksdb::Slice const&)+0x396) [0x5558ae195206]
 15: (rocksdb::BuildTable(std::__cxx11::basic_string<char,
std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*,
rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&,
rocksdb::EnvOptions const&, rocksdb::TableCache*,
rocksdb::InternalIteratorBase<rocksdb::Slice>*,
std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator,
std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >,
std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator,
std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >,
rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&,
std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory,
std::default_delete<rocksdb::IntTblPropCollectorFactory> >,
std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory,
std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*,
unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, std::vector<unsigned long,
std::allocator<unsigned long> >, unsigned long,
rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long,
rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*,
rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int,
rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned
long, unsigned long, rocksdb::Env::WriteLifeTimeHint)+0xc2a)
[0x5558ae13e57a]
 16: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int,
rocksdb::ColumnFamilyData*, rocksdb::MemTable*,
rocksdb::VersionEdit*)+0xc50) [0x5558adfc51a0]
 17: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long,
std::allocator<unsigned long> > const&, unsigned long*, bool)+0xea2)
[0x5558adfc69e2]
 18: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool,
bool)+0xa80) [0x5558adfc8300]
 19: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&,
std::vector<rocksdb::ColumnFamilyDescriptor,
std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
std::vector<rocksdb::ColumnFamilyHandle*,
std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool,
bool)+0xb05) [0x5558adfc2725]
 20: (rocksdb::DB::Open(rocksdb::DBOptions const&,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&,
std::vector<rocksdb::ColumnFamilyDescriptor,
std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
std::vector<rocksdb::ColumnFamilyHandle*,
std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x24)
[0x5558adfc3c84]
 21: (RocksDBStore::do_open(std::ostream&, bool, bool,
std::vector<KeyValueDB::ColumnFamily,
std::allocator<KeyValueDB::ColumnFamily> > const*)+0x141f)
[0x5558adf4ba0f]
 22: (BlueStore::_open_db(bool, bool, bool)+0x18ff) [0x5558ada0fe0f]
 23: (BlueStore::_open_db_and_around(bool)+0x18b) [0x5558ada283eb]
 24: (BlueStore::_mount(bool, bool)+0x5da) [0x5558ada693ca]
 25: (OSD::init()+0x33f) [0x5558ad5f045f]
 26: (main()+0x391e) [0x5558ad549fde]
 27: (__libc_start_main()+0xf3) [0x7fa799c06223]
 28: (_start()+0x2e) [0x5558ad57ed7e]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is
needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 0 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 0 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 rgw_sync
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
   1/ 5 prioritycache
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.218.log
--- end dump of recent events ---
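
To make the failure easier to read, the numbers in the -6 through -3 lines of
the log decode as follows (a quick sketch, values copied from the log):

```python
# Values copied from the bluefs _allocate / allocate_bluefs_freespace lines.
needed = 0xf4f04      # bytes BlueFS tried to allocate
free_bdev1 = 0xb0000  # free bytes left on bdev 1
min_size = 0x100000   # min_size from the allocate_bluefs_freespace line
alloc_unit = 0x10000  # bluefs_shared_alloc_size (64 KiB)

MiB = 2**20
print(f"needed {needed / MiB:.2f} MiB, bdev 1 free {free_bdev1 / MiB:.2f} MiB")
# bdev 1 cannot satisfy the ~0.96 MiB request with only ~0.69 MiB free.
# 'free 0xffffffffffffffff' on bdev 2 is just an all-ones sentinel (there is
# no separate DB device here), and the fallback to the slow-device expander
# then fails with 'allocated total 0x0', so BlueFS aborts with enospc.
```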


prosergey07 <prosergey07@xxxxxxxxx> wrote on Wed, 10 Nov 2021 at 00:01:

> From my understanding you do not have a separate DB/WAL device per OSD.
> Since RocksDB uses bluefs for OMAP storage, we can check the usage and free
> size for bluefs on the problematic OSDs.
>
> ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSD_ID --command
> bluefs-bdev-sizes
>
> Probably it can shed some light as to why the allocator did not work and
> you had to compact.
>
>
>
> Sent from a Galaxy device
>
>
> -------- Original message --------
> From: mhnx <morphinwithyou@xxxxxxxxx>
> Date: 09.11.21 03:05 (GMT+02:00)
> To: prosergey07 <prosergey07@xxxxxxxxx>
> Cc: Ceph Users <ceph-users@xxxxxxx>
> Subject: Re:  allocate_bluefs_freespace failed to allocate
>
> I was trying to keep things clear, and I was aware of the login issue.
> Sorry, you're right.
>
> The OSDs are not full. They need balancing, but I can't activate the
> balancer because of this issue.
>
> ceph osd df tree | grep 'CLASS\|ssd'
>
> ID  CLASS WEIGHT     REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS TYPE NAME
>  19   ssd    0.87320  1.00000 894 GiB 401 GiB 155 GiB 238 GiB 8.6 GiB 493 GiB 44.88 0.83 102     up         osd.19
> 208   ssd    0.87329  1.00000 894 GiB 229 GiB 112 GiB 116 GiB 1.5 GiB 665 GiB 25.64 0.48  95     up         osd.208
> 209   ssd    0.87329  1.00000 894 GiB 228 GiB 110 GiB 115 GiB 3.3 GiB 666 GiB 25.54 0.48  65     up         osd.209
> 199   ssd    0.87320  1.00000 894 GiB 348 GiB 155 GiB 191 GiB 1.3 GiB 546 GiB 38.93 0.72 103     up         osd.199
> 202   ssd    0.87329  1.00000 894 GiB 340 GiB 116 GiB 223 GiB 1.7 GiB 554 GiB 38.04 0.71  97     up         osd.202
> 218   ssd    0.87329  1.00000 894 GiB 214 GiB  95 GiB 118 GiB 839 MiB 680 GiB 23.92 0.44  37     up         osd.218
>  39   ssd    0.87320  1.00000 894 GiB 381 GiB 114 GiB 261 GiB 6.4 GiB 514 GiB 42.57 0.79  91     up         osd.39
> 207   ssd    0.87329  1.00000 894 GiB 277 GiB 115 GiB 155 GiB 6.2 GiB 618 GiB 30.94 0.58  81     up         osd.207
> 210   ssd    0.87329  1.00000 894 GiB 346 GiB 138 GiB 207 GiB 1.6 GiB 548 GiB 38.73 0.72  99     up         osd.210
>  59   ssd    0.87320  1.00000 894 GiB 423 GiB 166 GiB 254 GiB 2.9 GiB 471 GiB 47.29 0.88  97     up         osd.59
> 203   ssd    0.87329  1.00000 894 GiB 363 GiB 127 GiB 229 GiB 7.7 GiB 531 GiB 40.63 0.76 104     up         osd.203
> 211   ssd    0.87329  1.00000 894 GiB 257 GiB  76 GiB 179 GiB 1.9 GiB 638 GiB 28.70 0.53  81     up         osd.211
>  79   ssd    0.87320  1.00000 894 GiB 459 GiB 144 GiB 313 GiB 2.0 GiB 435 GiB 51.32 0.95 102     up         osd.79
> 206   ssd    0.87329  1.00000 894 GiB 339 GiB 140 GiB 197 GiB 2.0 GiB 556 GiB 37.88 0.70  94     up         osd.206
> 212   ssd    0.87329  1.00000 894 GiB 301 GiB 107 GiB 192 GiB 1.5 GiB 593 GiB 33.68 0.63  80     up         osd.212
>  99   ssd    0.87320  1.00000 894 GiB 282 GiB  96 GiB 180 GiB 6.2 GiB 612 GiB 31.59 0.59  85     up         osd.99
> 205   ssd    0.87329  1.00000 894 GiB 309 GiB 115 GiB 186 GiB 7.5 GiB 585 GiB 34.56 0.64  95     up         osd.205
> 213   ssd    0.87329  1.00000 894 GiB 335 GiB 119 GiB 213 GiB 2.5 GiB 559 GiB 37.44 0.70  95     up         osd.213
> 114   ssd    0.87329  1.00000 894 GiB 374 GiB 163 GiB 207 GiB 3.9 GiB 520 GiB 41.84 0.78  99     up         osd.114
> 200   ssd    0.87329  1.00000 894 GiB 271 GiB 104 GiB 163 GiB 3.0 GiB 624 GiB 30.26 0.56  90     up         osd.200
> 214   ssd    0.87329  1.00000 894 GiB 336 GiB 135 GiB 199 GiB 2.7 GiB 558 GiB 37.59 0.70 100     up         osd.214
> 139   ssd    0.87320  1.00000 894 GiB 320 GiB 128 GiB 189 GiB 3.6 GiB 574 GiB 35.82 0.67  96     up         osd.139
> 204   ssd    0.87329  1.00000 894 GiB 362 GiB 153 GiB 206 GiB 3.1 GiB 532 GiB 40.47 0.75 104     up         osd.204
> 215   ssd    0.87329  1.00000 894 GiB 236 GiB  99 GiB 133 GiB 3.4 GiB 659 GiB 26.35 0.49  81     up         osd.215
> 119   ssd    0.87329  1.00000 894 GiB 242 GiB 139 GiB 101 GiB 2.1 GiB 652 GiB 27.09 0.50  99     up         osd.119
> 159   ssd    0.87329  1.00000 894 GiB 253 GiB 127 GiB 123 GiB 2.7 GiB 642 GiB 28.25 0.53  93     up         osd.159
> 216   ssd    0.87329  1.00000 894 GiB 378 GiB 137 GiB 239 GiB 1.8 GiB 517 GiB 42.22 0.79 101     up         osd.216
> 179   ssd    0.87329  1.00000 894 GiB 473 GiB 112 GiB 348 GiB  12 GiB 421 GiB 52.91 0.98 104     up         osd.179
> 201   ssd    0.87329  1.00000 894 GiB 348 GiB 137 GiB 203 GiB 8.5 GiB 546 GiB 38.92 0.72 103     up         osd.201
> 217   ssd    0.87329  1.00000 894 GiB 301 GiB 105 GiB 194 GiB 2.5 GiB 593 GiB 33.64 0.63  89     up         osd.217
>
>
>
>
> prosergey07 <prosergey07@xxxxxxxxx> wrote on Tue, 9 Nov 2021 at 03:02:
>
>> Are those problematic OSDs getting almost full? I do not have an Ubuntu
>> account to check their pastebin.
>>
>>
>>
>> Sent from a Galaxy device
>>
>>
>> -------- Original message --------
>> From: mhnx <morphinwithyou@xxxxxxxxx>
>> Date: 08.11.21 15:31 (GMT+02:00)
>> To: Ceph Users <ceph-users@xxxxxxx>
>> Subject:  allocate_bluefs_freespace failed to allocate
>>
>> Hello.
>>
>> I'm using Nautilus 14.2.16.
>> I have 30 SSDs in my cluster, used as BlueStore OSDs for the RGW index.
>> Almost every week I lose (down) an OSD, and when I check the OSD log I see:
>>
>>     -6> 2021-11-06 19:01:10.854 7fa799989c40  1 bluefs _allocate
>> failed to allocate 0xf4f04 on bdev 1, free 0xb0000; fallback to bdev 2
>>     -5> 2021-11-06 19:01:10.854 7fa799989c40  1 bluefs _allocate
>> unable to allocate 0xf4f04 on bdev 2, free 0xffffffffffffffff;
>> fallback to slow device expander
>>     -4> 2021-11-06 19:01:10.854 7fa799989c40 -1
>> bluestore(/var/lib/ceph/osd/ceph-218) allocate_bluefs_freespace
>> failed to allocate on 0x80000000 min_size 0x100000 > allocated total
>> 0x0 bluefs_shared_alloc_size 0x10000 allocated 0x0 available 0x
>> a497aab000
>>     -3> 2021-11-06 19:01:10.854 7fa799989c40 -1 bluefs _allocate
>> failed to expand slow device to fit +0xf4f04
>>
>>
>> Full log: https://paste.ubuntu.com/p/MpJfVjMh7V/plain/
>>
>> And the OSD does not start without offline compaction.
>> Offline compaction log: https://paste.ubuntu.com/p/vFZcYnxQWh/plain/
>>
>> After the offline compaction I tried to start the OSD with the bitmap
>> allocator, but it did not come up because of "FAILED
>> ceph_assert(available >= allocated)".
>> Log: https://paste.ubuntu.com/p/2Bbx983494/plain/
>>
>> Then I started the OSD with the hybrid allocator and let it recover.
>> When the recovery was done, I stopped the OSD and started it with the
>> bitmap allocator.
>> This time it came up, but I got "80 slow ops, oldest one blocked for 116
>> sec, osd.218 has slow ops", so I increased "osd_recovery_sleep" to 10 to
>> give the cluster a breather, and the cluster marked the OSD down (it was
>> still working); after a while the OSD was marked up again and the cluster
>> became normal. But while it was recovering, other OSDs started to report
>> slow ops, and I played with "osd_recovery_sleep" between 0.1 and 10 to
>> keep the cluster stable until the recovery finished.
>>
>> Ceph osd df tree before: https://paste.ubuntu.com/p/4K7JXcZ8FJ/plain/
>> Ceph osd df tree after osd.218 = bitmap:
>> https://paste.ubuntu.com/p/5SKbhrbgVM/plain/
>>
>> If I want to change all the other OSDs' allocator to bitmap, I need to
>> repeat this process 29 times, which will take too much time.
>> I don't want to keep healing OSDs with offline compaction; I will do it
>> if that's the solution, but I want to be sure before doing that much
>> work, and maybe with this issue I can provide helpful logs and
>> information for the developers.
>>
>> Have a nice day.
>> Thanks.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



