Re: ceph-mon high cpu usage, and response slow

On 11/30/2015 09:51 AM, Yujian Peng wrote:
> The mons in my production cluster (0.80.7) have very high CPU usage (100%).
> I added leveldb_compression = false to ceph.conf to disable leveldb
> compression and restarted all the mons with --compact, but the mons still
> show high CPU usage and respond to ceph commands very slowly.

For the monitors, on firefly, you need to use 'mon_leveldb_compression =
false' instead.
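
As a minimal sketch of that change in ceph.conf (the [mon] section placement
and the mon_compact_on_start line are assumptions here, not taken from the
thread; the latter is an optional way to compact the store on restart):

    [mon]
        # firefly option name for disabling leveldb compression on the mons
        mon_leveldb_compression = false
        # assumed optional: compact the monitor store on the next restart
        mon_compact_on_start = true

After restarting a mon you can check the running value over its admin
socket, e.g.:

    ceph daemon mon.<id> config show | grep leveldb_compression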

  -Joao


> Here is the perf top output:
> Samples: 169K of event 'cycles', Event count (approx.): 29897076317
>  65.47%  [kernel]              [k] copy_user_enhanced_fast_string
>  10.12%  [kernel]              [k] put_page
>   5.79%  libsnappy.so.1.1.2    [.] snappy::RawUncompress(snappy::Source*, char*)
>   2.32%  [kernel]              [k] find_get_page
>   1.99%  ceph-mon              [.] 0x000000000049e3e2
>   1.55%  [kernel]              [k] file_read_actor
>   1.05%  libc-2.15.so          [.] 0x000000000015f24f
>   0.92%  libtcmalloc.so.0.1.0  [.] operator delete[](void*)
>   0.59%  libtcmalloc.so.0.1.0  [.] tcmalloc::PageHeap::MergeIntoFreeList(tcmalloc::Span*)
>   0.49%  [kernel]              [k] do_generic_file_read.constprop.39
>   0.45%  [kernel]              [k] radix_tree_lookup_element
>   0.37%  libtcmalloc.so.0.1.0  [.] operator new(unsigned long)
>   0.36%  ceph-mon              [.] leveldb::Block::~Block()
>   0.26%  libtcmalloc.so.0.1.0  [.] operator delete(void*)
>   0.21%  libpthread-2.15.so    [.] pthread_mutex_unlock
>   0.21%  [kernel]              [k] page_fault
>   0.19%  ceph-mon              [.] leveldb::InternalKeyComparator::Compare(leveldb::Slice const&, leveldb::Slice const&) const
>   0.17%  ceph-mon              [.] leveldb::crc32c::Extend(unsigned int, char const*, unsigned long)
>   0.16%  ceph-mon              [.] leveldb::Block::Iter::Next()
>   0.16%  [kernel]              [k] copy_page_rep
>   0.15%  [kernel]              [k] fget_light
>   0.15%  libtcmalloc.so.0.1.0  [.] tc_free
>   0.14%  libpthread-2.15.so    [.] __pthread_enable_asynccancel
>   0.14%  libpthread-2.15.so    [.] pthread_mutex_lock
>   0.14%  libstdc++.so.6.0.16   [.] std::string::_M_mutate(unsigned long, unsigned long, unsigned long)
>   0.13%  [kernel]              [k] common_file_perm
>   0.13%  [kernel]              [k] __ticket_spin_lock
>   0.13%  ceph-mon              [.] leveldb::Block::Iter::Seek(leveldb::Slice const&)
>   0.12%  [kernel]              [k] __d_lookup_rcu
>   0.10%  libstdc++.so.6.0.16   [.] std::string::append(char const*, unsigned long)
>   0.10%  libc-2.15.so          [.] vfprintf
>   0.10%  [kernel]              [k] do_numa_page
>   0.09%  [kernel]              [k] _cond_resched
>   0.09%  ceph-mon              [.] leveldb::Table::BlockReader(void*, leveldb::ReadOptions const&, leveldb::Slice const&)
>   0.09%  [kernel]              [k] mark_page_accessed
>   0.08%  libtcmalloc.so.0.1.0  [.] operator new[](unsigned long)
>   0.08%  ceph-mon              [.] leveldb::Block::Iter::SeekToFirst()
>   0.08%  [kernel]              [k] mem_cgroup_page_lruvec
>   0.07%  libtcmalloc.so.0.1.0  [.] tcmalloc::PageHeap::SearchFreeAndLargeLists(unsigned long)
>   0.07%  libtcmalloc.so.0.1.0  [.] tc_malloc
>   0.07%  [kernel]              [k] change_pte_range
>   0.06%  libtcmalloc.so.0.1.0  [.] tcmalloc::PageHeap::Carve(tcmalloc::Span*, unsigned long)
>   0.06%  libstdc++.so.6.0.16   [.] std::string::assign(char const*, unsigned long)
>   0.06%  [kernel]              [k] vfs_read
>   0.06%  [kernel]              [k] fsnotify
>   0.06%  libtcmalloc.so.0.1.0  [.] tcmalloc::CentralFreeList::FetchFromSpans()
> 
> It looks like the time is being spent in libsnappy. Do I need to restart the mons
> without --compact?
> 
> Any help is highly appreciated.
> 
> Thanks!
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com