[CEPH] OSD Memory Usage

Hello,
I am running a Ceph cluster. After monitoring it, I set:

ceph config set osd osd_memory_target_autotune false

ceph config set osd osd_memory_target 1G
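
(As a sketch, this is how the effective value can be confirmed for a single daemon; osd.0 is just an example ID:)

ceph config get osd osd_memory_target

ceph tell osd.0 config get osd_memory_target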

Then I restarted all OSD services and ran the test again with fio from multiple
clients, and I see that OSD memory consumption goes above 1 GB. Could you help
me understand this case?
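
(For context, a sketch of how an OSD's internal memory pools can be dumped for
comparison against the process RSS; osd.0 is just an example ID:)

ceph tell osd.0 dump_mempools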

Ceph version: Quincy

OSD nodes: 3 nodes, each with 11 NVMe drives and 512 GB of RAM.

CPU: 2-socket Xeon Gold 6138, 56 cores per socket.

Network: 2 x 25 Gbps for the public network and 2 x 25 Gbps for the storage network.
MTU is 9000.

Thank you very much.


Nguyen Huu Khoi
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


