Re: Can I limit OSD memory usage?

Were your OSDs OOM-killed while the cluster was doing recovery/backfill, or
just during client I/O?
The configuration items you mentioned only cover the BlueStore cache; OSD
memory includes many other things, such as the pglog, so it's important to
know whether your cluster is doing recovery.
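Note also the arithmetic: 18 OSDs at the ~2.7 GiB each shown in your top
output is roughly 48 GiB, i.e. the whole machine, so trimming the BlueStore
cache alone may not be enough. If your Mimic release supports it, the
`osd_memory_target` option asks each OSD to autotune its caches toward an
overall per-daemon memory budget. A minimal sketch (the 2 GiB value is an
illustrative assumption, not a tested recommendation):

```ini
# /etc/ceph/ceph.conf -- illustrative sketch, tune the value for your hardware
[osd]
# Ask each OSD daemon to keep total memory use near 2 GiB (caches autotuned)
osd_memory_target = 2147483648
```

Keep in mind this is a target, not a hard limit: recovery, backfill, and
pglog growth can still push an OSD above it.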

Sergei Genchev <sgenchev@xxxxxxxxx> wrote on Sat, Jun 8, 2019 at 5:35 AM:
>
>  Hi,
>  My OSD processes are constantly getting killed by OOM killer. My
> cluster has 5 servers, each with 18 spinning disks, running 18 OSD
> daemons in 48GB of memory.
>  I was trying to limit OSD cache, according to
> http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
>
> [osd]
> bluestore_cache_size_ssd = 1G
> bluestore_cache_size_hdd = 768M
>
> Yet, my OSDs are using way more memory than that. I have seen as high as 3.2G
>
> KiB Mem : 47877604 total,   310172 free, 45532752 used,  2034680 buff/cache
> KiB Swap:  2097148 total,        0 free,  2097148 used.   950224 avail Mem
>
>     PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+
> COMMAND
>  352516 ceph      20   0 3962504   2.8g   4164 S   2.3  6.1   4:22.98
> ceph-osd
>  350771 ceph      20   0 3668248   2.7g   4724 S   3.0  6.0   3:56.76
> ceph-osd
>  352777 ceph      20   0 3659204   2.7g   4672 S   1.7  5.9   4:10.52
> ceph-osd
>  353578 ceph      20   0 3589484   2.6g   4808 S   4.6  5.8   3:37.54
> ceph-osd
>  352280 ceph      20   0 3577104   2.6g   4704 S   5.9  5.7   3:44.58
> ceph-osd
>  350933 ceph      20   0 3421168   2.5g   4140 S   2.6  5.4   3:38.13
> ceph-osd
>  353678 ceph      20   0 3368664   2.4g   4804 S   4.0  5.3  12:47.12
> ceph-osd
>  350665 ceph      20   0 3364780   2.4g   4716 S   2.6  5.3   4:23.44
> ceph-osd
>  353101 ceph      20   0 3304288   2.4g   4676 S   4.3  5.2   3:16.53
> ceph-osd
>  .......
>
>
>  Is there any way for me to limit how much memory an OSD uses?
> Thank you!
>
> ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Thank you!
HuangJun



