Yeah, that means tcmalloc is probably caching those allocations, as I suspected. There have been some discussions on that front, but unfortunately we concluded to keep tcmalloc as the default; anyone who needs the extra performance should move to jemalloc. One of the reasons is that jemalloc seems to consume ~200MB more memory per OSD during an IO run. But I think this caching behavior is a serious issue with tcmalloc that we need to consider as well. I posted these findings earlier on ceph-devel during my write-path optimization investigation. There are some tcmalloc settings that are supposed to expedite this memory release; I tried them, but they didn't work, and I didn't dig further down that route.

Mark, did you observe similar tcmalloc behavior in your recovery experiment for tcmalloc vs. jemalloc?

Thanks & Regards
Somnath

-----Original Message-----
From: Chad William Seys [mailto:cwseys@xxxxxxxxxxxxxxxx]
Sent: Friday, August 28, 2015 7:58 AM
To: 池信泽
Cc: Somnath Roy; Haomai Wang; ceph-users@xxxxxxxxxxxxxx
Subject: Re: RAM usage only very slowly decreases after cluster recovery

Thanks! 'ceph tell osd.* heap release' seems to have worked! Guess I'll sprinkle it around my maintenance scripts.

Somnath: Is there a plan to make jemalloc standard in Ceph in the future?

Thanks!
Chad.
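
For anyone who wants to script this, below is a minimal sketch of the kind of maintenance hook Chad describes. The wrapper itself is hypothetical; only the 'ceph tell osd.* heap release' command comes from this thread, and it assumes the ceph CLI and an admin keyring are available on the host where it runs:

#!/usr/bin/env python3
# Hypothetical maintenance helper (not part of Ceph): after recovery,
# ask every OSD to hand cached free pages back to the kernel using the
# command discussed in this thread.
import subprocess
import sys

def heap_release():
    # 'ceph tell osd.* heap release' tells each OSD's tcmalloc to release
    # its freed-but-cached pages instead of holding on to them.
    result = subprocess.run(
        ["ceph", "tell", "osd.*", "heap", "release"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.stderr.write("heap release failed:\n%s\n" % result.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(heap_release())

As a side note on the tcmalloc settings mentioned above: gperftools does expose knobs such as the TCMALLOC_RELEASE_RATE environment variable for returning memory to the OS more aggressively, though whether that is the specific setting Somnath tried isn't stated in the thread.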