Re: How to release Hammer osd RAM when compiled with jemalloc

2016-12-14 9:46 GMT+08:00 Sage Weil <sage@xxxxxxxxxxxx>:
> On Wed, 14 Dec 2016, Dong Wu wrote:
>> Thanks for your response.
>>
>> 2016-12-13 20:40 GMT+08:00 Sage Weil <sage@xxxxxxxxxxxx>:
>> > On Tue, 13 Dec 2016, Dong Wu wrote:
>> >> Hi all,
>> >>    I have a cluster with nearly 1000 OSDs, and each OSD already
>> >> occupies 2.5 GB of physical memory on average, which puts each host
>> >> at about 90% memory usage. With tcmalloc we can use "ceph tell osd.*
>> >> release" to release unused memory, but our ceph is built with
>> >> jemalloc, so "ceph tell osd.* release" is unavailable. Is there any
>> >> way to release some memory?
>> >
>> > We explicitly call into tcmalloc to release memory with that command, but
>> > unless you've patched something in yourself there is no integration with
>> > jemalloc's release API.
>>
>> Is there any way to see an OSD's memory usage in detail? If we had a
>> memory allocator that recorded detailed usage, would that help? Is
>> anything like that on the schedule?
>
> Kraken has a new mempool infrastructure and some of the OSD pieces have
> been moved into it, but only some.  There's quite a bit of opportunity to
> further categorize allocations to get better visibility here.

Looking forward to it.
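
As a pointer for anyone reading the archive: on builds that have the
mempool infrastructure, the per-category accounting can be inspected
over the admin socket, something like the following (assuming the
dump_mempools command is present in your build):

  # Dump per-mempool byte and item counts for a running OSD
  $ ceph daemon osd.0 dump_mempools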

>
> Barring that, your best bet is to use either tcmalloc's heap profiling or
> valgrind massif.  Both slow down execution by a lot (5-10x).  Massif has
> better detail, but is somewhat slower.
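>
For the tcmalloc route, Ceph exposes the heap profiler through the same
tell interface; a rough sketch of both approaches (assuming a tcmalloc
build for the first, and an OSD you can run in the foreground for the
second):

  # tcmalloc heap profiler on a running OSD
  $ ceph tell osd.0 heap start_profiler
  $ ceph tell osd.0 heap dump          # write a profile snapshot
  $ ceph tell osd.0 heap stop_profiler

  # valgrind massif: run the OSD under massif, then inspect the output
  $ valgrind --tool=massif ceph-osd -f -i 0
  $ ms_print massif.out.<pid>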

I'll first use these tools in my test cluster to look at memory usage.
But in our production cluster, can I just run ceph tell osd.* injectargs
'--osd_map_max_advance 50 --osd_map_cache_size 100
--osd_map_message_max 50 --osd_map_share_max_epochs 50' to lower the
OSDs' memory?
Or should I change ceph.conf and then restart the OSDs?
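
One way to check whether injected values actually took effect (some
options are only read at OSD startup) is to query the running config
over the admin socket, e.g.:

  # Confirm the live value on one OSD after injectargs
  $ ceph daemon osd.0 config get osd_map_cache_size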

>
> sage
>
>
>>
>> >
>> >> Another question: can I decrease the following config values, which
>> >> control how many osdmaps are cached, to lower the OSDs' memory?
>> >>
>> >> "mon_min_osdmap_epochs": "500"
>> >> "osd_map_max_advance": "200",
>> >> "osd_map_cache_size": "500",
>> >> "osd_map_message_max": "100",
>> >> "osd_map_share_max_epochs": "100"
>> >
>> > Yeah.  You should be fine with 500, 50, 100, 50, 50.
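>
Spelled out in the order listed above, that suggestion maps to roughly
this ceph.conf fragment (a sketch, not a tested config; note that
mon_min_osdmap_epochs stays at 500, and that osd_map_max_advance is
generally kept below osd_map_cache_size):

  [mon]
      mon min osdmap epochs = 500

  [osd]
      osd map max advance = 50
      osd map cache size = 100
      osd map message max = 50
      osd map share max epochs = 50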
>>
>> >
>> > sage
>>
>> Thanks.
>> Regards.
>>
>>