Hi Stefan,

It's all at the defaults, it seems:

[root@gnosis ~]# ceph config get osd.243 bluestore_cache_size
0
[root@gnosis ~]# ceph config get osd.243 bluestore_cache_size_ssd
3221225472

I explicitly removed the old settings with commands like

ceph config rm osd.243 bluestore_cache_size

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: 04 February 2020 21:14:28
To: Frank Schilder
Cc: ceph-users
Subject: Re: osd_memory_target ignored

Quoting Frank Schilder (frans@xxxxxx):
> Dear Stefan,
>
> I checked the total allocation with top. ps -aux gives:
>
> USER PID    %CPU %MEM VSZ     RSS     TTY STAT START TIME    COMMAND
> ceph 784155 15.8 3.1  6014276 4215008 ?   Sl   Jan31 932:13  /usr/bin/ceph-osd --cluster ceph -f -i 243 ...
> ceph 784732 16.6 3.0  6058736 4082504 ?   Sl   Jan31 976:59  /usr/bin/ceph-osd --cluster ceph -f -i 247 ...
> ceph 785812 17.1 3.0  5989576 3959996 ?   Sl   Jan31 1008:46 /usr/bin/ceph-osd --cluster ceph -f -i 254 ...
> ceph 786352 14.9 3.1  5955520 4132840 ?   Sl   Jan31 874:37  /usr/bin/ceph-osd --cluster ceph -f -i 256 ...
>
> These should have 8 GB resident by now, but they stay at or just below 4 GB. The other options are set as:
>
> [root@ceph-04 ~]# ceph config get osd.243 bluefs_allocator
> bitmap
> [root@ceph-04 ~]# ceph config get osd.243 bluestore_allocator
> bitmap
> [root@ceph-04 ~]# ceph config get osd.243 osd_memory_target
> 8589934592

What does "bluestore_cache_size" read? Our OSDs report "0".

Gr. Stefan

--
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                    +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
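[Editorial aside: the symptom discussed in this thread — OSDs sitting at roughly 4 GiB resident despite an 8 GiB osd_memory_target — can be spot-checked directly from `ps aux`-style output. The sketch below is illustrative only and is not tooling from the thread; the helper name, the 75% threshold, and the truncated sample lines are assumptions. It relies only on the fact that `ps` reports RSS in KiB and that ceph-osd takes its daemon id after the `-i` flag.]

```python
# Flag ceph-osd processes whose resident set is well below osd_memory_target.
# Hypothetical helper for illustration; parses ps aux-style lines.

OSD_MEMORY_TARGET = 8589934592  # 8 GiB, as `ceph config get osd.243 osd_memory_target` reports

def underweight_osds(ps_lines, target=OSD_MEMORY_TARGET, threshold=0.75):
    """Return (osd_id, rss_bytes) for ceph-osd processes using < threshold * target."""
    results = []
    for line in ps_lines:
        fields = line.split()
        if "/usr/bin/ceph-osd" not in fields:
            continue
        rss_bytes = int(fields[5]) * 1024        # ps reports RSS in KiB (column 6)
        osd_id = fields[fields.index("-i") + 1]  # daemon id follows the -i flag
        if rss_bytes < threshold * target:
            results.append((osd_id, rss_bytes))
    return results

# Two of the sample lines from the thread (trailing arguments omitted)
ps_output = [
    "ceph 784155 15.8 3.1 6014276 4215008 ? Sl Jan31 932:13 /usr/bin/ceph-osd --cluster ceph -f -i 243",
    "ceph 784732 16.6 3.0 6058736 4082504 ? Sl Jan31 976:59 /usr/bin/ceph-osd --cluster ceph -f -i 247",
]

for osd, rss in underweight_osds(ps_output):
    print(f"osd.{osd}: RSS {rss / 2**30:.2f} GiB, well below the 8 GiB target")
```

Against the thread's figures this flags every listed OSD at about 4 GiB resident, i.e. roughly half the configured target — consistent with the old bluestore_cache_size-style limits still taking effect.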