Hi folks, I’m not even sure this list is still active.
Curious to hear from anyone else out in the community. Does anyone out there have large clusters that have been running for a long time (100+ days) and have successfully done significant amounts of recovery? We are seeing all sorts of memory pressure problems. I’ll try to keep this short. I didn’t send this to the normal users list because I keep getting punted: our corporate mail server apparently doesn’t like the incoming volume.
37 nodes in our busiest dedicated object storage cluster (we have lots of clusters…)
15x 8TB drives
2x 1.6TB NVMe for journal + LVM cache for spinning rust
128GB DDR4 (regretfully small)
2x E5-2650, pinned at max frequency, C-state 0
2x 25GbE (1 public, 1 cluster; cluster NIC has jumbo frames on)
Ubuntu 16.04, 4.4.0-78 and 4.4.0-96
RHCS Ceph (we do have cases open with RH, but wanted to hear from the other users out there)
Over time, we start seeing symptoms of high memory pressure, such as:
- Kswapd churning
- Dropped tx packets (almost always heartbeats, causing “wrongly marked down” alerts, don’t mask this!)
- XFS crashes (unsure this is related)
- RGW oddities, like stale index entries, and false 5xx responses (unsure this is related)
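For the dropped-tx-packets symptom, a quick sketch of the kind of check we mean (this is not from the original setup; the interface name "ens1f1" is hypothetical, and on a live node you would read /proc/net/dev instead of the inlined sample):

```python
# Sketch: watch per-NIC TX drop counters from /proc/net/dev -- the counter
# that climbs when heartbeats get dropped and OSDs are "wrongly marked down".
# A sample of the file is inlined so the snippet is self-contained; on a
# real node, replace it with open("/proc/net/dev").read().

SAMPLE_PROC_NET_DEV = """\
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo: 1234567    9876    0    0    0     0          0         0  1234567    9876    0    0    0     0       0          0
ens1f1: 99999999 888888    0    0    0     0          0         0  77777777 666666    0   42    0     0       0          0
"""

def tx_drops(proc_net_dev_text):
    """Return {interface: tx_dropped} parsed from /proc/net/dev format."""
    drops = {}
    for line in proc_net_dev_text.splitlines()[2:]:   # skip the two header lines
        iface, stats = line.split(":", 1)
        fields = stats.split()
        # The 8 receive columns come first; "drop" is the 4th transmit column.
        drops[iface.strip()] = int(fields[11])
    return drops

if __name__ == "__main__":
    for iface, dropped in tx_drops(SAMPLE_PROC_NET_DEV).items():
        if dropped:
            print(f"{iface}: {dropped} dropped TX packets -- check heartbeats")
```

Polling this (or the per-interface tx_dropped files under /sys/class/net) catches the drops before the monitoring alerts fire.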
Our normal traffic is measured in GB/s, not Mbps :) Anything under 2GB/s is considered a slow day. We have figured out a few things along the way:
- Don’t drop caches while OSDs are running; it triggers the XFS crash pretty quickly.
- /proc/buddyinfo is a good indicator of memory problems. We see a lack of 8K and larger pages, which causes problems for the jumbo frame config.
- We asked our high-traffic generators to back off during recovery.
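A minimal sketch of what reading /proc/buddyinfo for this looks like (not from the original post; the sample contents and the alert threshold are made up, and on a live node you would read the real file):

```python
# Sketch: parse /proc/buddyinfo and flag zones that are short on free blocks
# of order >= 1 (8 KiB and larger on 4 KiB pages) -- the shortage that hurts
# jumbo-frame allocations. Each buddyinfo line lists free-block counts for
# orders 0..10. Sample inlined for self-containment; on a real node use
# open("/proc/buddyinfo").read().

SAMPLE_BUDDYINFO = """\
Node 0, zone      DMA      1      1      1      0      2      1      1      0      1      1      3
Node 0, zone   Normal  47212   1029      3      0      0      0      0      0      0      0      0
"""

def order1_plus_blocks(buddyinfo_text):
    """Return {(node, zone): free blocks of order >= 1} from /proc/buddyinfo."""
    result = {}
    for line in buddyinfo_text.splitlines():
        parts = line.split()
        node, zone = parts[1].rstrip(","), parts[3]
        counts = [int(c) for c in parts[4:]]      # free blocks, orders 0..10
        result[(node, zone)] = sum(counts[1:])    # skip order-0 (4 KiB) blocks
    return result

if __name__ == "__main__":
    for (node, zone), blocks in order1_plus_blocks(SAMPLE_BUDDYINFO).items():
        if blocks < 100:                          # arbitrary alert threshold
            print(f"node {node} zone {zone}: only {blocks} free blocks >= 8 KiB")
```

In the sample, the Normal zone has plenty of 4K pages but almost nothing at order 2 and above, which is exactly the fragmentation pattern described.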
Our future machines will have 256GB, but even then, the memory will eventually fragment with enough use. I know this changes completely with BlueStore, since we wouldn’t have page cache or the usual slab churn to manage in memory, but I think BlueStore at our scale in prod is likely quite a way off.
_______________________________________________
Ceph-large mailing list
Ceph-large@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-large-ceph.com