Re: OSD swapping on Pacific

Found an option that has caused some trouble in the past, `bluefs_buffered_io`. It has been disabled/enabled by
default a couple of times (disabled in v15.2.2, enabled again in v15.2.13). It seems it can have a big effect on
performance and swapping behavior, so it might be a lead.
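
A quick way to check/change it (a sketch, assuming a 16.x cluster where the `ceph config` CLI is available; osd.22 is
just an example daemon name):

$ ceph config get osd bluefs_buffered_io        # current default for all OSDs
$ ceph config show osd.22 bluefs_buffered_io    # value a specific running daemon is using
$ ceph config set osd bluefs_buffered_io false  # change it cluster-wide (may need an OSD restart to take effect)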

On 08/16 14:10, Alexander Sporleder wrote:
> Hello David,
> 
> Unfortunately "vm.swappiness" does not change the behavior. Tweaks on the container side (--memory-swappiness and
> --memory-swap) might make sense, but I did not find any Ceph-related suggestion.
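> 
> (If anyone wants to experiment with those container knobs on a plain Docker-managed OSD, it would look roughly like
> the following; the container name and limits are only placeholders, not a tested recommendation:
> 
>   docker update --memory 6g --memory-swap 6g ceph-osd-22   # swap limit == memory limit -> no swap for that container
>   docker run --memory-swappiness=0 ... <osd image>         # hint the kernel to avoid swapping the container's pages
> )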
> 
> 
> Am Montag, dem 16.08.2021 um 13:52 +0200 schrieb David Caro:
> > AFAIK the swapping behavior is controlled by the kernel. There might be some tweaks on the container engine side,
> > but you might want to try changing the default behavior by lowering the kernel's 'vm.swappiness':
> > 
> > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-tunables
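> > 
> > Something like this (standard sysctl handling, not Ceph-specific):
> > 
> >   $ sysctl vm.swappiness           # check the current value (the default is usually 60)
> >   $ sysctl -w vm.swappiness=10     # lower it at runtime
> >   $ echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf   # make it persistent across reboots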
> > 
> > 
> > 
> > On 08/16 13:14, Alexander Sporleder wrote:
> > > Hello list! 
> > > We have a containerized Pacific (16.2.5) cluster running CentOS 8.4, and after a few weeks the OSDs start to use swap
> > > quite a lot despite free memory being available. The host has 196 GB of memory and 24 OSDs. "osd_memory_target" is set to 6 GB.
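> > > 
> > > (As a rough back-of-the-envelope check: 24 OSDs x 6 GB osd_memory_target is ~144 GB of cache targets alone, leaving
> > > only ~50 GB of the 196 GB for the kernel, page cache and whatever the OSDs use above their target.)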
> > > 
> > > 
> > > 
> > > $ cat /proc/meminfo 
> > > MemTotal:       196426616 kB
> > > MemFree:        11675608 kB
> > > MemAvailable:   48940232 kB
> > > Buffers:        46757632 kB
> > > Cached:           653216 kB
> > > ....
> > > 
> > > 
> > > $ smem -k
> > > Command                                  Swap      USS      PSS      RSS 
> > > ceph     /usr/bin/ceph-osd -n osd.22     1.7G     3.7G     3.7G     3.7G 
> > > ceph     /usr/bin/ceph-osd -n osd.10   853.4M     4.6G     4.6G     4.6G 
> > > ceph     /usr/bin/ceph-osd -n osd.12   793.6M     4.6G     4.6G     4.6G 
> > > ceph     /usr/bin/ceph-osd -n osd.92   561.3M     4.7G     4.7G     4.7G 
> > > ceph     /usr/bin/ceph-osd -n osd.14   647.2M     4.9G     4.9G     4.9G 
> > > ceph     /usr/bin/ceph-osd -n osd.15   567.8M     5.0G     5.0G     5.0G
> > > ....
> > > 
> > > 
> > > Is that a known behavior, a bug, or a configuration problem? On two hosts I turned off swap and the OSDs have been
> > > running happily for more than 6 weeks now.
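> > > 
> > > (For reference, the standard way to do that is something like:
> > > 
> > >   $ swapoff -a        # disable all active swap devices immediately
> > > 
> > > plus commenting out or removing the swap entry in /etc/fstab so it stays off after a reboot.)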
> > > 
> > > Best,
> > > Alex
> > > 
> > 
> 
> 

-- 
David Caro
SRE - Cloud Services
Wikimedia Foundation <https://wikimediafoundation.org/>
PGP Signature: 7180 83A2 AC8B 314F B4CE  1171 4071 C7E1 D262 69C3

"Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment."
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
