On 06/23/2011 09:37 PM, Natusch, Paul wrote:
> I have an application in which data is being written to many disks simultaneously. I would like to use a PostgreSQL tablespace on each disk. If one of the disks crashes, it is tolerable to lose that data; however, I must continue writing to the other disks.
About the only way you'll be able to do that with PostgreSQL is to run one PostgreSQL instance per disk. Give each its own port, datadir, shared_buffers, etc.
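As a rough sketch of what that looks like (paths, ports, and the shared_buffers value here are hypothetical examples, not recommendations), each disk gets its own cluster:

```shell
# One cluster per physical disk; /disk1 and /disk2 are placeholder mount points.
initdb -D /disk1/pgdata
initdb -D /disk2/pgdata

# Each instance needs its own port, and a deliberately small
# shared_buffers, since every instance allocates its own buffer cache.
pg_ctl -D /disk1/pgdata -o "-p 5433 -c shared_buffers=128MB" -w start
pg_ctl -D /disk2/pgdata -o "-p 5434 -c shared_buffers=128MB" -w start
```

If one disk dies, only the instance on that disk goes down; the others keep accepting writes.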
I wouldn't expect that setup to perform particularly well, and it costs you the ability to have ACID rules apply between data on different disks. It's also a horribly inefficient use of RAM.
For this kind of task, it is typical to use a simple, dedicated tool to capture the writes from the sensors or whatever you are logging. Once the data has hit disk, another tool can read it in small batches and add it to the database for analysis and reporting.
Perhaps it'd help if you explained what you want - and why - with a little more background and detail?
--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general