On 02/15/2014 04:55 PM, Antman, Jason (CMG-Atlanta) wrote:
> On 02/15/2014 02:00 PM, Francisco Olarte wrote:
>>
>> If I NEEDED to be able to provide 100-150 snapshots to test/dev
>> environments, 20% of which may be active, I'd set up a cluster, buy
>> somewhere above a quarter terabyte of RAM and some big snapshot
>> appliances. They are not that expensive, and someone NEEDING dev/test
>> systems that big must be on a humongous project, where the cost of
>> throwing hardware at the problem would be dwarfed by any small delay
>> trying to set that up with minimal resources. A postgres instance
>> should happily run in half a gig, and using proper snapshots would let
>> you share cache among them, so you could put quite a lot of them on
>> one of those 384G machines floating around, even on a 192G one; throw
>> a snapshotting filesystem or appliance into the mix and it's not that
>> complex.
> So you're suggesting running multiple instances of postgres in the same
> operating system, and they'll be able to share cache, instead of trying
> to get multiple DBs in the same instance? Is there any accepted method
> of managing N postgres instances on one host? Any tools or
> documentation for that, or best practices? Seems scary to me, but I
> suppose it's the best option we have...
>
> By "A postgres instance should happily run in half a gig" I assume
> you're not counting cache/buffers, etc.? Because when trying to run
> automation (i.e. automated testing) on our application, we can't get
> query execution time down to an acceptable level with anything less
> than ~48GB of memory available on the host.
>
> Thanks for the suggestion, though. I was considering this as an
> alternative, but every experience I've had of running multiple (let's
> say, more than 2 or 3) instances of the same application/daemon on the
> same host has inevitably led to all sorts of strange issues and
> problems... I suppose it sounds like it's going to be the only option,
> though...
>
>> Francisco Olarte.
>

I think I jumped on this without really understanding what you were
saying, or its implications.

If I run N postgres server instances on the same physical host, I can do
away with the overhead of running each of them in its own virtualized
operating system, but they can't really share resources other than the
*OS* cache, right? My current postgres instances for testing have 16GB
shared_buffers (and 5MB work_mem, 24GB effective_cache_size). So if,
hypothetically (to give a mathematically simple example), I have a host
machine with 100GB RAM, I can't run 10 postgres instances with those
settings, right? I'd still need to provide for the memory needs of each
postgres server/instance separately?

In which case, from an ease-of-management/sanity perspective, it would
probably be best to run each one in a container (LXC) or some other
ultra-low-overhead virtualization, rather than ending up with 10
separate instances of postgres running directly in the same OS?

-Jason
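
For concreteness, here is a minimal sketch of one way to run several
independent postgres instances side by side on a single host, each with
its own data directory and port. The paths, ports, and memory settings
are hypothetical, chosen only for illustration:

    # Create two independent clusters, each with its own data directory
    # (hypothetical paths).
    initdb -D /srv/pg/test01
    initdb -D /srv/pg/test02

    # Start them on distinct ports, with modest per-instance
    # shared_buffers so the OS page cache (which IS shared across
    # instances) does most of the caching work.
    pg_ctl -D /srv/pg/test01 -l /srv/pg/test01.log \
        -o "-p 5433 -c shared_buffers=2GB" start
    pg_ctl -D /srv/pg/test02 -l /srv/pg/test02.log \
        -o "-p 5434 -c shared_buffers=2GB" start

    # Connect to a specific instance by port.
    psql -p 5433 -d postgres

On Debian/Ubuntu, the postgresql-common wrappers (pg_createcluster,
pg_ctlcluster, pg_lsclusters) manage exactly this kind of multi-cluster
layout, which is probably the closest thing to an "accepted method" for
running N instances on one host.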
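
On the memory arithmetic: shared_buffers is allocated separately by each
instance, so ten instances at 16GB would want 160GB of dedicated buffers
alone, which indeed does not fit on a 100GB host; only the OS page cache
is shared. A hypothetical per-instance postgresql.conf for that
10-instances-on-100GB example might look more like the following (the
numbers are illustrative only, not a recommendation):

    # Per-instance settings, assuming ~10 instances on a 100GB host.
    shared_buffers = 2GB          # 10 x 2GB = 20GB of dedicated buffers total
    work_mem = 5MB                # per sort/hash operation, per backend
    effective_cache_size = 48GB   # planner hint only; it allocates nothing,
                                  # and can reflect the OS page cache that
                                  # all instances share

Because effective_cache_size is only a planner estimate, it can stay
large even when each instance's own shared_buffers is small.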