Re: Bluestore and jemalloc

On 12/15/2017 09:29 AM, Sage Weil wrote:
On Fri, 15 Dec 2017, Mike A wrote:
Hello!

Now that this issue http://tracker.ceph.com/issues/20557 is fixed, does Ceph
still work with jemalloc? Or is that not possible right now, and only once
the issue in rocksdb is fixed?

I have been tasked with building a cluster on NVMe disks with a
jemalloc-enabled configuration, so I'm now worried about the state of
jemalloc support in Ceph.

I'm not sure about the current status of the bug: I don't know why
jemalloc and rocksdb were not working together, and I'm not sure whether
the fix is simple (build option?) or more complex.

It's also not clear whether jemalloc is going to be helpful, or whether it
matters much.  jemalloc was much better than tcmalloc with SimpleMessenger,
but the default is now AsyncMessenger, and with that there wasn't much of a
delta between the allocators.

For reference, here's the initial allocator tests from back then:

https://drive.google.com/open?id=0B2gTBZrkrnpZek0zWlE5aVVuRlk

And a comparison of simple vs. async for tcmalloc and jemalloc under some of
the worst-case scenarios from that time period:

https://drive.google.com/open?id=0B2gTBZrkrnpZS1Q4VktjZkhrNHc


Either way, jemalloc is a runtime option, so either it works or it doesn't;
if it doesn't, the fix is just to restart the daemons without it, so there's
no risk.
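Since the allocator is picked up when a daemon starts, one quick way to
check what a daemon is actually using is to look at its mapped libraries.
A minimal sketch, assuming Linux with the usual /proc layout (the script
and the daemon-name pattern are illustrative, not anything shipped with
Ceph):

#!/usr/bin/env python3
# Report which malloc implementation each running Ceph daemon has mapped,
# by scanning /proc/<pid>/maps.  Run as root to see daemons owned by the
# ceph user.
import os
import re

DAEMON_RE = re.compile(r"^ceph-(osd|mon|mds|mgr)$")

def allocator_for(pid):
    """Return 'jemalloc', 'tcmalloc', or 'libc' based on mapped libraries."""
    try:
        with open(f"/proc/{pid}/maps") as f:
            maps = f.read()
    except OSError:
        return None  # process exited or permission denied
    if "libjemalloc" in maps:
        return "jemalloc"
    if "libtcmalloc" in maps:
        return "tcmalloc"
    return "libc"

def main():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except OSError:
            continue
        if DAEMON_RE.match(comm):
            alloc = allocator_for(pid)
            if alloc:
                print(f"{comm} (pid {pid}): {alloc}")

if __name__ == "__main__":
    main()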

sage



