libjemalloc.so.1 not used?

Hi,

We are testing Ceph as block storage (XFS-based OSDs) in a hyper-converged setup with KVM as the hypervisor. We are using NVMe SSDs only (Intel DC P5320), and I would like to use jemalloc on Ubuntu Xenial (current kernel 4.4.0-64-generic). I edited /etc/default/ceph and uncommented the LD_PRELOAD line:


# /etc/default/ceph
#
# Environment file for ceph daemon systemd unit files.
#

# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728

## use jemalloc instead of tcmalloc
#
# jemalloc is generally faster for small IO workloads and when
# ceph-osd is backed by SSDs.  However, memory usage is usually
# higher by 200-300mb.
#
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
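
To make sure the OSD units actually read this file, one can also look at the effective unit definition (just a quick sketch; ceph-osd@0 is simply the instance name on one of our hosts):

# show the effective unit file and check where it pulls its environment from
systemctl cat ceph-osd@0.service | grep -i EnvironmentFile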

And judging by lsof, the OSDs do appear to be using jemalloc:

lsof | grep -e "ceph-osd.*8074.*malloc"
ceph-osd   8074                   ceph  mem       REG              252,0       294776     659213 /usr/lib/libtcmalloc.so.4.2.6
ceph-osd   8074                   ceph  mem       REG              252,0       219816     658861 /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
ceph-osd   8074  8116             ceph  mem       REG              252,0       294776     659213 /usr/lib/libtcmalloc.so.4.2.6
ceph-osd   8074  8116             ceph  mem       REG              252,0       219816     658861 /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
ceph-osd   8074  8117             ceph  mem       REG              252,0       294776     659213 /usr/lib/libtcmalloc.so.4.2.6
ceph-osd   8074  8117             ceph  mem       REG              252,0       219816     658861 /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
ceph-osd   8074  8118             ceph  mem       REG              252,0       294776     659213 /usr/lib/libtcmalloc.so.4.2.6
ceph-osd   8074  8118             ceph  mem       REG              252,0       219816     658861 /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
[...]
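
Another way to check that systemd really passes the variable through would be to look at the daemon environment directly (a sketch, reusing the same OSD PID 8074 as above; needs root):

# print the environment ceph-osd was started with, one variable per line
tr '\0' '\n' < /proc/8074/environ | grep LD_PRELOAD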

But perf top shows something different:

Samples: 11M of event 'cycles:pp', Event count (approx.): 603904862529620                                                                                                              
Overhead  Shared Object                         Symbol                                                                                                                                 
   1.86%  libtcmalloc.so.4.2.6                  [.] operator new[]
   1.73%  [kernel]                              [k] mem_cgroup_iter
   1.34%  libstdc++.so.6.0.21                   [.] std::__ostream_insert<char, std::char_traits<char> >
   1.29%  libpthread-2.23.so                    [.] pthread_mutex_lock
   1.10%  [kernel]                              [k] __switch_to
   0.97%  libpthread-2.23.so                    [.] pthread_mutex_unlock
   0.94%  [kernel]                              [k] native_queued_spin_lock_slowpath
   0.92%  [kernel]                              [k] update_cfs_shares
   0.90%  libc-2.23.so                          [.] __memcpy_avx_unaligned
   0.87%  libtcmalloc.so.4.2.6                  [.] operator delete[]
   0.80%  ceph-osd                              [.] ceph::buffer::ptr::release
   0.80%  [kernel]                              [k] mem_cgroup_zone_lruvec


Do my OSDs use jemalloc or don't they?

All the best,
Florian




EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelmann@xxxxxxxxxxxx
www.everyware.ch


