Re: Nautilus 14.2.19 mon 100% CPU

On Thu, Apr 8, 2021 at 11:24 AM Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
>
> On Thu, Apr 8, 2021 at 10:22 AM Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
> >
> > I upgraded our Luminous cluster to Nautilus a couple of weeks ago and converted the last batch of FileStore OSDs to BlueStore about 36 hours ago. Yesterday our monitor cluster went nuts and started constantly calling elections because monitor nodes were at 100% CPU and wouldn't respond to heartbeats. I reduced the monitor cluster to one to prevent the constant elections, and that let the system limp along until the backfills finished. For long stretches, ceph commands hang while the CPU is at 100%; when the CPU drops, I see a lot of work getting done in the monitor logs, which stops as soon as the CPU is back at 100%.
> >
> > I did a `perf top` on the node to see what's taking all the time, and it appears to be in the rocksdb code path. I've set `mon_compact_on_start = true` in the ceph.conf, but that does not appear to help. The `/var/lib/ceph/mon/` directory is 311 MB, down from 3.0 GB while the backfills were going on. I've tried adding a second monitor, but it goes back to the constant elections. I tried restarting all the services without luck. I also pulled the monitor off the network and tried restarting the mon service in isolation (this helped a couple of weeks ago when `ceph -s` would cause 100% CPU and lock up the service much worse than this), and I didn't see the high CPU load while isolated. So I'm guessing it's triggered by some external source.
> >
> > I'm happy to provide more info, just let me know what would be helpful.
>
> Sent this to the dev list, but forgot it needed to be plain text. Here
> is the text output of `perf top`, taken a bit later, so it is not exactly
> the same as the earlier screenshot.
>
> Samples: 20M of event 'cycles', 4000 Hz, Event count (approx.): 61966526527 lost: 0/0 drop: 0/0
> Overhead  Shared Object         Symbol
>   11.52%  ceph-mon              [.] rocksdb::MemTable::KeyComparator::operator()
>    6.80%  ceph-mon              [.] rocksdb::MemTable::KeyComparator::operator()
>    4.75%  ceph-mon              [.] rocksdb::InlineSkipList<rocksdb::MemTableRep::KeyComparator const&>::FindGreaterOrEqual
>    2.89%  libc-2.27.so          [.] vfprintf
>    2.54%  libtcmalloc.so.4.3.0  [.] tc_deletearray_nothrow
>    2.31%  ceph-mon              [.] TLS init function for rocksdb::perf_context
>    2.14%  ceph-mon              [.] rocksdb::DBImpl::GetImpl
>    1.53%  libc-2.27.so          [.] 0x000000000018acf8
>    1.44%  libc-2.27.so          [.] _IO_default_xsputn
>    1.34%  ceph-mon              [.] memcmp@plt
>    1.32%  libtcmalloc.so.4.3.0  [.] tc_malloc
>    1.28%  ceph-mon              [.] rocksdb::Version::Get
>    1.27%  libc-2.27.so          [.] 0x000000000018abf4
>    1.17%  ceph-mon              [.] RocksDBStore::get
>    1.08%  ceph-mon              [.] 0x0000000000639a33
>    1.04%  ceph-mon              [.] 0x0000000000639a0e
>    0.89%  ceph-mon              [.] 0x0000000000639a46
>    0.86%  ceph-mon              [.] rocksdb::TableCache::Get
>    0.72%  libc-2.27.so          [.] 0x000000000018abfe
>    0.68%  libceph-common.so.0   [.] ceph_str_hash_rjenkins
>    0.66%  ceph-mon              [.] rocksdb::Hash
>    0.63%  ceph-mon              [.] rocksdb::MemTable::Get
>    0.62%  ceph-mon              [.] 0x00000000006399ff
>    0.57%  libc-2.27.so          [.] 0x000000000018abf0
>    0.57%  ceph-mon              [.] rocksdb::GetContext::GetContext
>    0.57%  ceph-mon              [.] rocksdb::BlockBasedTable::Get
>    0.57%  ceph-mon              [.] rocksdb::BlockBasedTable::GetFilter
>    0.55%  [vdso]                [.] __vdso_clock_gettime
>    0.54%  ceph-mon              [.] 0x00000000005afa17
>    0.53%  ceph-mgr              [.] std::_Rb_tree<pg_t, pg_t, std::_Identity<pg_t>, std::less<pg_t>, std::allocator<pg_t> >::equal_range
>    0.51%  libceph-common.so.0   [.] PerfCounters::tinc
>    0.50%  ceph-mon              [.] OSDMonitor::make_snap_epoch_key[abi:cxx11]
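(For anyone wanting to gather a similar profile: a command along the lines of `sudo perf top -p $(pidof ceph-mon)` limits the sampling to just the monitor process. That is a sketch, not necessarily the exact invocation I used.)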

Okay, I think I sent it to the old dev list. Trying again.
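
For reference, the compaction settings and commands I mentioned above were roughly the following (the mon ID is a placeholder for your monitor's name; this is a sketch of what I tried, not a recommendation):

    # ceph.conf on the mon host: compact the store every time the mon starts
    [mon]
        mon_compact_on_start = true

    # Trigger a manual compaction of a running monitor's store
    ceph tell mon.<id> compact

    # Watch the on-disk size of the mon store
    du -sh /var/lib/ceph/mon/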

Thank you,
Robert LeBlanc
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


