Ah, thanks…
I'm currently trying to diagnose a performance regression that occurs with the Ubuntu 4.15 kernel (on a Proxmox system)
and thought that jemalloc, given the old reports, could help with that. But then I ran into that bug report.
I'll take from your info that I'm going to stick with tcmalloc. You know, so much to test and benchmark, so little time…
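For the kernel side I'm mostly comparing rados bench runs between reboots into the two kernels, roughly like this (the "bench" pool is just a throwaway pool I create for the test):

    # run identically under each kernel and compare the throughput/latency summaries
    rados bench -p bench 60 write --no-cleanup
    rados bench -p bench 60 seq
    rados -p bench cleanup   # remove the benchmark objects afterwards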
Regards,
Uwe
On 05.07.2018 at 19:08, Mark Nelson wrote:
Hi Uwe,
As luck would have it, we were just looking at memory allocators again and ran some quick RBD and RGW tests that stress
memory allocation:
https://drive.google.com/uc?export=download&id=1VlWvEDSzaG7fE4tnYfxYtzeJ8mwx4DFg
The gist of it is that tcmalloc looks like it's doing pretty well relative to the versions of jemalloc and libc malloc
tested (the jemalloc version here is pretty old, though). You are also correct that there have been reports of crashes
with jemalloc, potentially related to RocksDB. Right now it looks like our decision to stick with tcmalloc is still
valid. I wouldn't suggest switching unless you can find evidence that tcmalloc is behaving worse than the others (and
please let me know if you do!).
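If you do end up experimenting, it's worth verifying which allocator a daemon actually loaded; osd.0 below is just an example:

    # check which malloc implementation is mapped into a running OSD
    grep -E 'tcmalloc|jemalloc' /proc/$(pidof -s ceph-osd)/maps
    # with tcmalloc you can also ask a daemon for heap statistics
    ceph tell osd.0 heap stats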
Thanks,
Mark
On 07/05/2018 08:08 AM, Uwe Sauter wrote:
Hi all,
is using jemalloc still recommended for Ceph?
There are multiple sites from 2015 (e.g. https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/)
where jemalloc is praised for higher performance, but I also found a bug report that BlueStore crashes when used with
jemalloc.
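In case I do test it, my understanding is that the daemons pick up LD_PRELOAD from /etc/default/ceph (the systemd units seem to source that file), so something like this on each node:

    # untested sketch; library path is where the Ubuntu libjemalloc1 package installs it
    echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1' >> /etc/default/ceph
    systemctl restart ceph.target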
Regards,
Uwe
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com