Re: OSD swapping on Pacific

Hi,

Global swappiness and per-cgroup swappiness are managed separately. When you change the vm.swappiness sysctl, only the root /sys/fs/cgroup/memory/memory.swappiness changes; the memory.swappiness of services under separate slices (such as system.slice, where the Ceph services run) is left untouched.

See https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt for how to manage cgroup memory settings.
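As a concrete illustration, this is a minimal sketch of comparing the two values (it assumes a cgroup v1 layout with the memory controller mounted at the usual path; the system.slice path is an example and may differ on your host):

```shell
# Compare the global swappiness with the value system.slice actually uses
# (cgroup v1 layout assumed; adjust the slice path for your host).
GLOBAL=$(cat /proc/sys/vm/swappiness)
SLICE=/sys/fs/cgroup/memory/system.slice/memory.swappiness

echo "global vm.swappiness:    ${GLOBAL}"
if [ -f "${SLICE}" ]; then
    echo "system.slice swappiness: $(cat "${SLICE}")"
    # To align the slice with the global value (as root):
    # echo "${GLOBAL}" > "${SLICE}"
else
    echo "system.slice swappiness: (cgroup v1 memory controller not mounted here)"
fi
```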

On 2021/08/16 15:10, Alexander Sporleder wrote:
Hello David,

Unfortunately "vm.swappiness" does not change the behavior. Tweaks on the container side (--memory-swappiness and --
memory-swap) might make sense, but I did not find any Ceph-related suggestion.


Am Montag, dem 16.08.2021 um 13:52 +0200 schrieb David Caro:
AFAIK the swapping behavior is controlled by the kernel. There might be some tweaks on the container engine side, but
you might want to try lowering the kernel's 'vm.swappiness' first:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-tunables
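For reference, checking and lowering it looks roughly like this (a sketch; the value 10 is only an example, not a Ceph recommendation):

```shell
# Current global value (the default is usually 60):
cat /proc/sys/vm/swappiness

# Lower it at runtime (as root); takes effect immediately:
#   sysctl -w vm.swappiness=10
# Persist it across reboots:
#   echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
```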



On 08/16 13:14, Alexander Sporleder wrote:
Hello list!
We have a containerized Pacific (16.2.5) cluster running on CentOS 8.4, and after a few weeks the OSDs start to use swap
quite a lot despite free memory. The host has 196 GB of memory and 24 OSDs. osd_memory_target is set to 6 GB.
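A quick back-of-the-envelope check of those numbers (figures taken from the post above; note that osd_memory_target is a best-effort target, not a hard cap, so actual RSS can run above it):

```shell
# Aggregate OSD memory budget for the host described above.
OSDS=24
TARGET_GB=6
TOTAL_GB=196
BUDGET_GB=$((OSDS * TARGET_GB))
HEADROOM_GB=$((TOTAL_GB - BUDGET_GB))
echo "aggregate OSD target: ${BUDGET_GB} GB, headroom: ${HEADROOM_GB} GB"
# → aggregate OSD target: 144 GB, headroom: 52 GB
```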



$ cat /proc/meminfo
MemTotal:       196426616 kB
MemFree:        11675608 kB
MemAvailable:   48940232 kB
Buffers:        46757632 kB
Cached:           653216 kB
....


$ smem -k
Command                                  Swap      USS      PSS      RSS
ceph     /usr/bin/ceph-osd -n osd.22     1.7G     3.7G     3.7G     3.7G
ceph     /usr/bin/ceph-osd -n osd.10   853.4M     4.6G     4.6G     4.6G
ceph     /usr/bin/ceph-osd -n osd.12   793.6M     4.6G     4.6G     4.6G
ceph     /usr/bin/ceph-osd -n osd.92   561.3M     4.7G     4.7G     4.7G
ceph     /usr/bin/ceph-osd -n osd.14   647.2M     4.9G     4.9G     4.9G
ceph     /usr/bin/ceph-osd -n osd.15   567.8M     5.0G     5.0G     5.0G
....


Is that known behavior, a bug, or a configuration problem? On two hosts I turned off swap and the OSDs have been running
happily for more than 6 weeks now.

Best,
Alex

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx






