Re: osd_memory_target ignored

Quoting Frank Schilder (frans@xxxxxx):
> Dear Stefan,
> 
> I checked the total allocation with top. ps aux gives:
> 
> USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> ceph      784155 15.8  3.1 6014276 4215008 ?     Sl   Jan31 932:13 /usr/bin/ceph-osd --cluster ceph -f -i 243 ...
> ceph      784732 16.6  3.0 6058736 4082504 ?     Sl   Jan31 976:59 /usr/bin/ceph-osd --cluster ceph -f -i 247 ...
> ceph      785812 17.1  3.0 5989576 3959996 ?     Sl   Jan31 1008:46 /usr/bin/ceph-osd --cluster ceph -f -i 254 ...
> ceph      786352 14.9  3.1 5955520 4132840 ?     Sl   Jan31 874:37 /usr/bin/ceph-osd --cluster ceph -f -i 256 ...
> 
> These should be at 8 GB resident by now, but they stay at or just below 4 GB. The related options are set as follows:
> 
> [root@ceph-04 ~]# ceph config get osd.243 bluefs_allocator
> bitmap
> [root@ceph-04 ~]# ceph config get osd.243 bluestore_allocator
> bitmap
> [root@ceph-04 ~]# ceph config get osd.243 osd_memory_target
> 8589934592

What does "bluestore_cache_size" read? Our OSDs report "0".
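A quick way to cross-check both values is to ask the daemon itself via its admin socket (sketch only; osd.243 is the example id from above, and the `ceph daemon` commands assume you run them on the host where that OSD lives):

```shell
# Value from the monitor config database (what you set):
ceph config get osd.243 osd_memory_target

# Values the running daemon actually uses (may differ if overridden
# in ceph.conf or on the command line):
ceph daemon osd.243 config get osd_memory_target
ceph daemon osd.243 config get bluestore_cache_size

# What the cache autotuner is actually allocating per pool:
ceph daemon osd.243 dump_mempools
```

If `bluestore_cache_size` is non-zero, it pins the cache size and the autotuner will not grow toward osd_memory_target; "0" means autotuning is in effect.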

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


