Hi folks,

I was having trouble with OSDs crashing under ceph 0.56.2 (CentOS, 64-bit, installed from rpms, using the "elrepo" kernel), so I eagerly installed 0.56.3. Since then, I've had a lot of trouble getting the OSDs running at all, either because of the earlier crashes or because of some change in 0.56.3.

What's happening now is that the OSD processes use so much memory that the machines start swapping and eventually die. Today I tried systematically starting a few OSDs at a time and watching memory usage. The processes climb pretty quickly to 2-3 GB in RSIZE. If I let them run for a while and then restart them, they tend to settle into an RSIZE of about 1.5 GB. Unfortunately, I don't have any record of how large these processes were under 0.56.2, but this is far too large to fit into the available memory once I start all of the OSD processes. My impression is that the sizes used to be well under 1 GB when the cluster was idle.

Is there anything I can do to reduce the memory footprint of ceph-osd?

Thanks for any advice.

Bryan

-- 
========================================================================
Bryan Wright              |"If you take cranberries and stew them like
Physics Department        | applesauce, they taste much more like prunes
University of Virginia    | than rhubarb does."  -- Groucho
Charlottesville, VA 22901 |
(434) 924-7218            | bryan@xxxxxxxxxxxx
========================================================================
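P.S. In case it helps anyone reproduce the numbers above, something like this quick /proc walk totals up the resident size of the running ceph-osd processes. It's only a sketch, and it assumes the daemon's executable shows up as "ceph-osd" (adjust PROC_NAME if your packaging names it differently):

#!/usr/bin/env python
# Sketch: sum VmRSS for every running ceph-osd process by reading /proc.
# Assumes Linux and that the daemon binary is named "ceph-osd".
import os

PROC_NAME = "ceph-osd"

def rss_kib(pid):
    # Return VmRSS in KiB for a pid, or 0 if it can't be read.
    try:
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except IOError:
        pass
    return 0

total_kib = 0
for pid in os.listdir("/proc"):
    if not pid.isdigit():
        continue
    try:
        with open("/proc/%s/cmdline" % pid) as f:
            exe = f.read().split("\0")[0]
    except IOError:
        continue
    if os.path.basename(exe) != PROC_NAME:
        continue
    kib = rss_kib(pid)
    total_kib += kib
    print("pid %s: %.1f MB resident" % (pid, kib / 1024.0))

print("total %s resident: %.2f GB" % (PROC_NAME, total_kib / 1024.0 / 1024.0))

Running that while bringing up a few OSDs at a time makes it easy to see when the total would exceed the machine's RAM.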