It depends on which version of ceph, but it's pretty normal under newer versions.
There are a bunch of variables: how many PGs per OSD, how much data is in the PGs, etc. I'm a bit light on the PGs (~60 per OSD) and heavy on the data (~3 TiB on each OSD). In the production cluster, under peak user traffic, my OSDs are using around 1 GiB of memory each.
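For a rough sense of where a PGs-per-OSD number like that comes from, here's a back-of-envelope sketch in Python. The pool layout and OSD count below are made up just to show the arithmetic, not taken from my cluster:

    # Back-of-envelope: PG copies per OSD ~= sum(pg_num * replica size) / number of OSDs.
    # The pool layout and OSD count here are hypothetical, purely for illustration.
    pools = [{"name": "rbd", "pg_num": 2048, "size": 3}]
    num_osds = 100

    pg_copies = sum(p["pg_num"] * p["size"] for p in pools)
    print("PGs per OSD: %.0f" % (pg_copies / float(num_osds)))  # ~61 in this example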
If there is some scrubbing, deep-scrubbing, or a recovery going on, I've seen individual OSDs go as high as 4 GiB, which causes some problems...
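If you want to keep an eye on that, something along these lines will flag runaway OSDs on a host. It's only a rough sketch: it assumes the OSD processes show up as "ceph-osd" in /proc, and the 4 GiB threshold is just an example value, not a recommendation:

    #!/usr/bin/env python
    # Quick-and-dirty watcher for ceph-osd resident memory on one host.
    # Reads VmRSS from /proc/<pid>/status for processes named "ceph-osd".
    import glob, time

    THRESHOLD_KB = 4 * 1024 * 1024   # warn above 4 GiB (example threshold)

    def osd_rss_kb():
        rss = {}
        for status in glob.glob("/proc/[0-9]*/status"):
            try:
                lines = open(status).read().splitlines()
            except IOError:
                continue  # process went away while we were scanning
            fields = dict(l.split(":", 1) for l in lines if ":" in l)
            if fields.get("Name", "").strip() != "ceph-osd":
                continue
            pid = status.split("/")[2]
            rss[pid] = int(fields.get("VmRSS", "0 kB").split()[0])
        return rss

    while True:
        for pid, kb in osd_rss_kb().items():
            if kb > THRESHOLD_KB:
                print("ceph-osd pid %s is using %.1f GiB" % (pid, kb / 1048576.0))
        time.sleep(60)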
On Thu, Nov 6, 2014 at 11:00 PM, 谢锐 <xierui@xxxxxxxxxxxxxxx> wrote:
And I mark one OSD down, then run the stress test with fio.
------------------ Original ------------------
From: "谢锐"<xierui@xxxxxxxxxxxxxxx>;
Date: Fri, Nov 7, 2014 02:50 PM
To: "ceph-users"<ceph-users@xxxxxxxx>;
Subject: Is it normal that an OSD's memory exceeds 1 GB under a stress test?
I set mon_osd_down_out_interval to two days and ran a stress test. The memory used by an OSD exceeds 1 GB.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com