Re: OSDs crash - Since Pacific


 



Hi Wissem,

sharing an OSD log snippet preceding the crash (e.g. the prior 20K lines) could be helpful and will hopefully provide more insight - there might be some error/assertion details and/or other artefacts...
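
E.g. something like the following should do, assuming a default non-containerized deployment that logs to /var/log/ceph/ and that the crash sits near the end of the current log (adjust the path and OSD id for your setup):

tail -n 20000 /var/log/ceph/ceph-osd.39.log > osd.39-pre-crash.log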


Thanks,

Igor

On 8/30/2022 10:51 AM, Wissem MIMOUNA wrote:
Hi Stefan,
We don't have automatic conversion going on, and "bluestore_fsck_quick_fix_on_mount" is not set.
So we did an offline compaction as suggested, but this didn't fix the problem of OSD crashes.
In the meantime we are rebuilding all OSDs on the cluster, and so far it seems to have improved the overall cluster performance and removed the slow ops…

But I still can't figure out why the same OSDs may still encounter random crashes. Below is the last log from a crashed OSD:


{
     "crash_id": "2022-08-29T23:25:21.xxxxx -4f61-8c1c-f3542f83568d",
     "timestamp": "2022-08-29T23:25:21.501287Z",
     "process_name": "ceph-osd",
     "entity_name": "osd.39",
     "ceph_version": "16.2.9",
     "utsname_hostname": "xxxxx",
     "utsname_sysname": "Linux",
     "utsname_release": "4.15.0-162-generic",
     "utsname_version": "#170-Ubuntu SMP Mon Oct 18 11:38:05 UTC 2021",
     "utsname_machine": "x86_64",
     "os_name": "Ubuntu",
     "os_id": "ubuntu",
     "os_version_id": "18.04",
     "os_version": "18.04.6 LTS (Bionic Beaver)",
     "backtrace": [
         "/lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7efef8051980]",
         "/usr/bin/ceph-osd(+0xf89db0) [0x55a9a8bdddb0]",
         "/usr/bin/ceph-osd(+0xf9472e) [0x55a9a8be872e]",
         "/usr/bin/ceph-osd(+0xf950fd) [0x55a9a8be90fd]",
         "(BlueStore::_collection_list(BlueStore::Collection*, ghobject_t const&, ghobject_t const&, int, bool, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x13d2) [0x55a9a8c261a2]",
         "(BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0xad) [0x55a9a8c26f2d]",
         "(PGBackend::objects_list_partial(hobject_t const&, int, int, std::vector<hobject_t, std::allocator<hobject_t> >*, hobject_t*)+0x68d) [0x55a9a88f0e4d]",
         "(PgScrubber::select_range()+0x2c2) [0x55a9a8a72ba2]",
         "(PgScrubber::select_range_n_notify()+0x24) [0x55a9a8a736d4]",
         "(Scrub::NewChunk::NewChunk(boost::statechart::state<Scrub::NewChunk, Scrub::ActiveScrubbing, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0xf8) [0x55a9a8a8c888]",
         "(boost::statechart::simple_state<Scrub::PendingTimer, Scrub::ActiveScrubbing, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x16a) [0x55a9a8a9642a]",
         "(boost::statechart::state_machine<Scrub::ScrubMachine, Scrub::NotActive, std::allocator<boost::statechart::none>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x6b) [0x55a9a8a8863b]",
         "(PgScrubber::send_scrub_resched(unsigned int)+0xef) [0x55a9a8a7fc4f]",
         "(PG::forward_scrub_event(void (ScrubPgIF::*)(unsigned int), unsigned int, std::basic_string_view<char, std::char_traits<char> >)+0x78) [0x55a9a87cdf48]",
         "(ceph::osd::scheduler::PGScrubResched::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x32) [0x55a9a897b2f2]",
         "(OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xd1e) [0x55a9a8736dbe]",
         "(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x4ac) [0x55a9a8dbc75c]",
         "(ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55a9a8dbfc20]",
         "/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7efef80466db]",
         "clone()"
     ]
}
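
For reference, reports like the one above can be listed and dumped again at any time via the crash module:

ceph crash ls
ceph crash info <crash_id>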




Many Thanks

On 8/22/22 11:50, Wissem MIMOUNA wrote:

Dear All,
After updating our Ceph cluster from Octopus to Pacific, we got a lot of slow_ops on many OSDs (which caused the cluster to become very slow).


Do you have any automatic conversion going on? What is the setting of
"bluestore_fsck_quick_fix_on_mount" on your cluster OSDs?



ceph daemon osd.$id config get bluestore_fsck_quick_fix_on_mount
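
Or, to check all OSDs on a host in one go - a rough sketch assuming non-containerized OSDs with their admin sockets under /var/run/ceph/:

for sock in /var/run/ceph/ceph-osd.*.asok; do
    echo "== $sock =="
    ceph daemon "$sock" config get bluestore_fsck_quick_fix_on_mount
done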



We did our investigation and searched the ceph-users list, and found that rebuilding all OSDs can improve (or fix) the issue (we suspect the cause is a fragmented BlueStore filesystem).


RocksDB itself might need to be compacted (offline operation). You might
want to stop all OSDs on a host and perform a compaction on them to see
if it helps with slow ops [1]:



df | grep "/var/lib/ceph/osd" | awk '{print $6}' | cut -d '-' -f 2 | sort -n \
  | xargs -n 1 -P 10 -I OSD ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-OSD compact
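
As a rough sketch of the surrounding steps (assuming systemd-managed, non-containerized OSDs; setting noout avoids rebalancing while they are down):

ceph osd set noout
systemctl stop ceph-osd.target    # stop all OSDs on this host
# run the compaction one-liner above, then:
systemctl start ceph-osd.target
ceph osd unset noout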



What is the status of your cluster (ceph -s)?



Gr. Stefan



--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



