Re: osd_memory_target ignored

Dear Stefan,

I check the total allocation with top. ps aux gives:

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
ceph      784155 15.8  3.1 6014276 4215008 ?     Sl   Jan31 932:13 /usr/bin/ceph-osd --cluster ceph -f -i 243 ...
ceph      784732 16.6  3.0 6058736 4082504 ?     Sl   Jan31 976:59 /usr/bin/ceph-osd --cluster ceph -f -i 247 ...
ceph      785812 17.1  3.0 5989576 3959996 ?     Sl   Jan31 1008:46 /usr/bin/ceph-osd --cluster ceph -f -i 254 ...
ceph      786352 14.9  3.1 5955520 4132840 ?     Sl   Jan31 874:37 /usr/bin/ceph-osd --cluster ceph -f -i 256 ...

These OSDs should be at 8 GB resident by now, but they stay at or just below 4 GB. The relevant options are set as follows:

[root@ceph-04 ~]# ceph config get osd.243 bluefs_allocator
bitmap
[root@ceph-04 ~]# ceph config get osd.243 bluestore_allocator
bitmap
[root@ceph-04 ~]# ceph config get osd.243 osd_memory_target
8589934592
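
For completeness, the running daemons could also be queried directly over the admin socket; a minimal sketch, using osd.243 as the example and assuming the default socket path (output omitted here):

# effective value inside the running daemon (may differ from the stored
# config if the daemon has not picked up the change yet)
ceph daemon osd.243 config get osd_memory_target

# memory tracked by the OSD's internal mempools, for comparison with RSS
ceph daemon osd.243 dump_mempools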

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: 04 February 2020 16:34:34
To: Frank Schilder
Cc: ceph-users
Subject: Re:  osd_memory_target ignored

Hi,

Quoting Frank Schilder (frans@xxxxxx):
> I recently upgraded from 13.2.2 to 13.2.8 and observe two changes that
> I struggle with:
>
> - from release notes: The bluestore_cache_* options are no longer
>   needed. They are replaced by osd_memory_target, defaulting to 4GB.
> - the default for bluestore_allocator has changed from stupid to bitmap,
>
> which seem to conflict each other, or at least I seem unable to
> achieve what I want.
>
> I have a number of OSDs for which I would like to increase the cache
> size. In the past I used bluestore_cache_size=8G and it worked like a
> charm. I now changed that to osd_memory_target=8G without any effect.
> The usage stays at 4G and the virtual size is about 5G. I would expect
> both to be close to 8G. The read cache for these OSDs usually fills up
> within a few hours. The cluster is now running a few days with the new
> configs to no avail.

How do you check the memory usage? We have an osd_memory_target of 11G
and the OSDs consume exactly that amount of RAM (ps aux | grep osd). We
are running 13.2.8. ceph daemon osd.$id dump_mempools would report only
~4 GiB of RAM, so the OSD clearly uses more memory than what the
mempools account for.
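
A rough sketch of that comparison (osd.$id stands for any OSD id; the
exact JSON layout of dump_mempools may differ slightly between releases):

# resident set size per OSD process as reported by the kernel
ps aux | grep '[c]eph-osd'

# memory accounted for by the OSD's internal mempools (typically well
# below the actual RSS)
ceph daemon osd.$id dump_mempools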

>
> The documentation of osd_memory_target refers to tcmalloc a lot. Is
> this in conflict with allocator=bitmap? If so, what is the way to tune
> cache sizes (say if tcmalloc is not used/how to check?)? Are
> bluestore_cache_* indeed obsolete as the above release notes suggest,
> or is this not true?

AFAIK these are not related. We use "bluefs_allocator": "bitmap" and
"bluestore_allocator": "bitmap".

Gr. Stefan

--
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



