Re: tcmalloc use a lot of CPU


 



Hi Mark,

>>Yep! At least from what I've seen so far, jemalloc is still a little 
>>faster for 4k random writes even compared to tcmalloc with the patch + 
>>128MB thread cache. Should have some data soon (mostly just a 
>>reproduction of Sandisk and Intel's work).

I have definitively switched to jemalloc on my production ceph cluster;
I was too tired of this tcmalloc problem (I hit the bug once or twice, even with TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES set).
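
For reference, setting it by hand before starting an OSD looks roughly like this (a minimal sketch; the OSD id and binary path are only examples, the value is 128 MB written out in bytes, and as Mark notes below it only takes effect with gperftools/tcmalloc newer than 2.1):

  # 128 MB thread cache, expressed in bytes
  export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
  # start the daemon in the same environment (osd id 0 is just an example)
  ceph-osd -i 0 --cluster ceph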

>>Should have some data soon (mostly just a
>>reproduction of Sandisk and Intel's work).

Client side, it would be worth running fio or rados bench with jemalloc too; I have seen around a 20% improvement vs glibc malloc.
"LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 fio ...."


(In production, I'm now running qemu with jemalloc too.)

Regards,

Alexandre

----- Original Message -----
From: "Mark Nelson" <mnelson@xxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, August 17, 2015 16:24:16
Subject: Re: tcmalloc use a lot of CPU

On 08/17/2015 07:03 AM, Alexandre DERUMIER wrote: 
> Hi, 
> 
>>> Is this phenomenon normal? Is there any idea about this problem?
> 
> It's a known problem with tcmalloc (search the ceph mailing list archives).
> 
> Starting the osd with the "TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M" environment variable should help.

Note that this only works if you use a version of gperftools/tcmalloc 
newer than 2.1. 
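
To check what is actually installed and linked against (rough examples; package names and paths differ per distro):

  ldd /usr/bin/ceph-osd | grep tcmalloc
  dpkg -l | grep -Ei 'gperftools|tcmalloc'    # or rpm -qa on RPM-based systems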

> 
> 
> Another way is to compile ceph with jemalloc instead of tcmalloc (./configure --with-jemalloc ...)

Yep! At least from what I've seen so far, jemalloc is still a little 
faster for 4k random writes even compared to tcmalloc with the patch + 
128MB thread cache. Should have some data soon (mostly just a 
reproduction of Sandisk and Intel's work). 

> 
> 
> 
> ----- Original Message -----
> From: "YeYin" <eyniy@xxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Monday, August 17, 2015 11:58:26
> Subject: tcmalloc use a lot of CPU
> 
> Hi, all, 
> When I did a performance test with rados bench, I found that tcmalloc consumed a lot of CPU:
> 
> Samples: 265K of event 'cycles', Event count (approx.): 104385445900 
> + 27.58% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans() 
> + 15.25% libtcmalloc.so.4.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, 
> + 12.20% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::ReleaseToSpans(void*) 
> + 1.63% perf [.] append_chain 
> + 1.39% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::ReleaseListToSpans(void*) 
> + 1.02% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::RemoveRange(void**, void**, int) 
> + 0.85% libtcmalloc.so.4.1.0 [.] 0x0000000000017e6f 
> + 0.75% libtcmalloc.so.4.1.0 [.] tcmalloc::ThreadCache::IncreaseCacheLimitLocked() 
> + 0.67% libc-2.12.so [.] memcpy 
> + 0.53% libtcmalloc.so.4.1.0 [.] operator delete(void*) 
> 
> Ceph version: 
> # ceph --version 
> ceph version 0.87.2 (87a7cec9ab11c677de2ab23a7668a77d2f5b955e) 
> 
> Kernel version: 
> 3.10.83 
> 
> Is this phenomenon normal? Is there any idea about this problem? 
> 
> Thanks. 
> Ye 
> 
> 
> _______________________________________________ 
> ceph-users mailing list 
> ceph-users@xxxxxxxxxxxxxx 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



