A few years ago, I was working with "big" servers. At least, they were big for that age, with 128G of RAM! Holy mackerel, right? Anyway, at that time I tried allocating 64G to shared_buffers, and we had a bunch of problems with inconsistent performance, including "stall" periods where the database would stop responding for 2 or 3 seconds. After trying all sorts of tuning options that didn't help, the problem finally went away after reducing shared_buffers to 32G. I speculated at the time that the shared buffer code hit performance issues managing that much memory, but I never had the opportunity to really follow up on it.

Now, this was back in 2012 or thereabouts. Seems like another lifetime, and probably PostgreSQL 9.2 at the time. Nowadays, 128G is a "medium sized" server; I just got access to one with 775G, and it would appear that I could order one from Dell with 1.5T of RAM if I'm willing to sell my house ... Yet all the docs and advice I'm able to find online seem to have been written pre-2008 and say things like "if your server has more than 1G of RAM ..." I feel like it's time for a documentation update ;) But I personally don't have recent enough experience to know what sort of recommendations to make.

What are people's experiences with modern versions of Postgres on hardware this size? Do any of the experts have specific recommendations on large shared_buffers settings? Would any developers care to comment on work that's been done since 2012 to make large values work better?

--
Bill Moran <wmoran@xxxxxxxxxxxxxxxxx>
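[For reference, a minimal sketch of the configuration change described above. The 32G figure is simply the value mentioned in the post, not a general recommendation, and shared_buffers only takes effect after a server restart:]

    # postgresql.conf -- illustrative values only, taken from the figures
    # discussed in the post above; the right setting depends on workload,
    # total RAM, and PostgreSQL version
    shared_buffers = 32GB    # reduced from 64GB after the stalls described above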