Hi All,
The cluster has 24 OSDs with 24 8TB HDDs.
Each OSD server has 2GB of RAM and runs 2 OSDs on 2
8TB HDDs. I know the memory is below the recommended value, but these are
ARM servers so I can't do anything to add more RAM.
I created a replicated pool (2 replicas) and a
20TB image, and mounted it on the test server with an XFS filesystem.
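For reference, the setup was roughly like this (the pool/image names and mount
point below are just examples, not necessarily the exact ones I used):

  ceph osd pool create rbd 128 128 replicated
  ceph osd pool set rbd size 2
  rbd create rbd/test-image --size 20T    # older rbd versions may need the size in MB
  rbd map rbd/test-image                  # shows up as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt/rbd-test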
I have set ceph.conf as follows (as suggested
in other related posts):
[osd]
bluestore_cache_size = 104857600
bluestore_cache_size_hdd = 104857600
bluestore_cache_size_ssd = 104857600
bluestore_cache_kv_max = 103809024
osd map cache size = 20
osd map max advance = 10
osd map share max epochs = 10
osd pg epoch persisted max stale = 10

The bluestore cache settings did improve the situation, but if I try to write
1TB of data to the RBD image with dd (dd if=/dev/zero of=test bs=1G count=1000),
the OSDs will eventually be killed by the OOM killer.
If I only write about 100GB of data
at a time, then everything is fine.
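For what it's worth, this is how the effective cache values can be checked on a
running OSD through the admin socket (osd.0 is just an example ID):

  ceph daemon osd.0 config show | grep bluestore_cache
  ceph daemon osd.0 config get bluestore_cache_size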
Why does the OSD memory usage keep increasing
while writing?
Is there anything I can do to reduce the memory
usage?
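In case it is useful for diagnosing this, I can capture the per-pool memory
breakdown and heap usage of an OSD while the dd is running, e.g. (again osd.0
as an example; heap stats requires tcmalloc):

  ceph daemon osd.0 dump_mempools
  ceph tell osd.0 heap stats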
2017-10-24
lin.yunfan