Re: [luminous] OSD memory usage increase when writing a lot of data to cluster

Hi,

There was a thread about this not long ago; please check:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html

Denes.

On 10/24/2017 11:48 AM, shadow_lin wrote:
Hi All,
The cluster has 24 OSDs across 24 8 TB HDDs.
Each OSD server has 2 GB of RAM and runs 2 OSDs on 2 of the 8 TB HDDs. I know this memory is below the recommended value, but these are ARM servers, so I can't add more RAM.
I created a replicated pool (2 replicas) and a 20 TB image, and mounted it on the test server with an XFS filesystem.
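That works out to roughly 1 GiB of RAM per OSD, which is why I lowered the cache settings below. A quick sketch of the arithmetic (my own numbers, not from the thread; the 1 GiB figure is my understanding of the Luminous default for bluestore_cache_size_hdd):

        # Rough per-OSD RAM budget for the nodes described above (sketch only).
        GiB = 1024**3

        ram_per_server = 2 * GiB          # 2 GB of RAM per OSD server
        osds_per_server = 2
        ram_per_osd = ram_per_server // osds_per_server

        default_hdd_cache = 1 * GiB       # assumed Luminous default BlueStore
                                          # cache for HDD-backed OSDs

        print(ram_per_osd // GiB)         # 1 -> about 1 GiB of RAM per OSD
        print(default_hdd_cache // GiB)   # 1 -> the default cache alone would
                                          #      consume the whole budget, hence
                                          #      the much smaller values below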
 
I have set ceph.conf to this (as suggested in other related posts):
[osd]
        # Cap the BlueStore cache at 100 MiB (104857600 bytes); the generic,
        # HDD, and SSD settings are all lowered so whichever one applies to
        # the OSD's backing device takes effect.
        bluestore_cache_size = 104857600
        bluestore_cache_size_hdd = 104857600
        bluestore_cache_size_ssd = 104857600
        # Allow the key/value (RocksDB) portion of that cache to use at most
        # 99 MiB (103809024 bytes).
        bluestore_cache_kv_max = 103809024
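
For reference, those byte values decode to round MiB figures (my own arithmetic, just as a sanity check):

        # Decode the byte values from the ceph.conf snippet above (sketch only).
        MiB = 1024**2
        print(104857600 / MiB)   # 100.0 -> bluestore_cache_size* caps = 100 MiB
        print(103809024 / MiB)   #  99.0 -> bluestore_cache_kv_max    =  99 MiB

If it helps, you should be able to confirm what the running daemon actually picked up via the admin socket, e.g. ceph daemon osd.0 config get bluestore_cache_size (assuming osd.0 is local to the host where you run it).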

