Re: [CEPH] OSD Memory Usage

Hello. Thank you very much for your explanation.

I thought that osd_memory_target would let me cap OSD memory usage and thereby
protect against memory leaks - I searched and found many people talking about
memory leaks. A kind member of this list, @Anthony D'Atri <aad@xxxxxxxxxxxxxx>,
helped me understand that it won't hard-limit OSD memory usage.

I set it to 1 GB because I wanted to see how this option behaves.

I will read up on and test the cache options.
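
For example, something like this (just a sketch using the options Zakhar
mentions below; 4294967296 is simply the 4 GB default, not a recommendation):

ceph config get osd osd_memory_cache_min
ceph config get osd bluestore_cache_size_ssd
ceph config get osd bluestore_cache_size_hdd
ceph config set osd osd_memory_target 4294967296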

Nguyen Huu Khoi


On Thu, Nov 16, 2023 at 12:23 PM Zakhar Kirpichenko <zakhar@xxxxxxxxx>
wrote:

> Hi,
>
> osd_memory_target is a "target", i.e. an OSD makes an effort to keep its
> consumption around the specified amount of RAM, but it won't consume less
> than what it needs for its operation and caches, which have minimum values
> such as osd_memory_cache_min, bluestore_cache_size, bluestore_cache_size_hdd,
> bluestore_cache_size_ssd, etc. The recommended and default OSD memory target
> is 4 GB.
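>
> To see where an OSD's memory actually goes (caches and other pools), you can
> dump its memory pools, e.g. (osd.0 is just a placeholder; run this on the
> host where that OSD lives):
>
> ceph daemon osd.0 dump_mempools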
>
> Your nodes have a sufficient amount of RAM, thus I don't see why you would
> want to reduce OSD memory consumption below the recommended defaults,
> especially considering that in-memory caches are important for Ceph
> operations as they're many times faster than the fastest storage devices. I
> run my OSDs with osd_memory_target=17179869184 (16 GB) and it helps,
> especially with slower HDD-backed OSDs.
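>
> For reference, that is set the same way you already used, just with a larger
> value (adjust it to your own RAM budget):
>
> ceph config set osd osd_memory_target 17179869184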
>
> /Z
>
> On Thu, 16 Nov 2023 at 01:02, Nguyễn Hữu Khôi <nguyenhuukhoinw@xxxxxxxxx>
> wrote:
>
>> Hello,
>> I am running a Ceph cluster. After monitoring it, I set:
>>
>> ceph config set osd osd_memory_target_autotune false
>>
>> ceph config set osd osd_memory_target 1G
>>
>> Then I restarted all OSD services and ran the test again: I just ran fio
>> from multiple clients, and I see that OSD memory consumption is over 1 GB.
>> Could you help me understand this case?
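>>
>> For reference, the target in effect and the per-daemon usage can be checked
>> with something like this (osd.0 is just an example; ceph orch ps applies to
>> cephadm deployments):
>>
>> ceph config show osd.0 osd_memory_target
>> ceph orch ps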
>>
>> Ceph version: Quincy
>>
>> OSDs: 3 nodes with 11 NVMe drives each and 512 GB RAM per node.

>> CPU: 2-socket Xeon Gold 6138, 56 cores per socket.

>> Network: 2 x 25 Gbps for the public network and 2 x 25 Gbps for the storage
>> network, MTU 9000.
>>
>> Thank you very much.
>>
>>
>> Nguyen Huu Khoi
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



