Well, thanks for at least sending a reply, though I suppose I should have asked "how do I do this?" or "what are the major hurdles to doing this?", as it obviously has to be *possible* given unlimited knowledge, resources, and time.

Perhaps I should frame the question differently: if you had a single ~1TB database and needed to give fresh copies of its data to dev/test environments (which are usually largely idle), either on demand or daily, how would you do it?

The only other thing that comes to mind is separate postgres instances (running multiple postgres instances per server?), one per database, for every environment. But that means that if 80% of the environments are idle at a given time, I'm effectively wasting 80% of the memory I've allocated to shared_buffers etc., so I'd be provisioning roughly 4x more resources than I'm actually using. Unless postgres supports ballooning of memory? (There's a rough sketch of what I mean by one-instance-per-environment in the P.S. at the end of this mail.)

Thanks,
Jason

On 02/15/2014 01:20 PM, Tom Lane wrote:
> "Antman, Jason (CMG-Atlanta)" <Jason.Antman@xxxxxxxxxx> writes:
>> Perhaps there's a postgres internals expert around, someone intimately
>> familiar with pg_xlog/pg_clog/pg_control, who can comment on whether
>> it's possible to take the on-disk files from a single database in a
>> single tablespace, and make them usable by a different postgres
>> server, running multiple databases?
> It is not.  There's no need for detailed discussion.
>
> 			regards, tom lane

-- 
Jason Antman | Systems Engineer | CMGdigital
jason.antman@xxxxxxxxxx | p: 678-645-4155
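
P.S. In case it helps clarify what I mean by "multiple postgres instances per server, one per environment": here's a rough Python sketch. All paths, the port, and the shared_buffers value are made up, and it assumes the expensive part -- a restored base copy of the ~1TB cluster (e.g. from a nightly pg_basebackup) -- already exists on disk.

    #!/usr/bin/env python
    # Hypothetical sketch only -- every path, port, and size below is
    # invented; assumes initdb has already run (via a restored base
    # backup) and pg_ctl is on PATH.
    import shutil
    import subprocess

    BASE_COPY = "/srv/pg_base/prod_copy"  # hypothetical restored backup

    def start_env(name, port):
        datadir = "/srv/pg_envs/%s" % name
        # Full copy of the restored cluster into a per-environment
        # data directory (this is the 1TB duplication I'd love to avoid).
        shutil.copytree(BASE_COPY, datadir)
        # Start a dedicated postmaster on its own port, with a small
        # shared_buffers so mostly-idle environments waste less memory.
        subprocess.check_call(
            ["pg_ctl", "-D", datadir,
             "-o", "-p %d -c shared_buffers=256MB" % port,
             "-w", "start"])

    start_env("dev1", 5433)

The full copy is the expensive part I'd want to avoid, and the small shared_buffers is my crude answer to the idle-memory problem, short of real ballooning.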