Chris Browne wrote:
> "jgardner@xxxxxxxxxxxxxxxxxxx" <jgardner@xxxxxxxxxxxxxxxxxxx> writes:
>> My question is how can I configure the database to run as quickly as
>> possible if I don't care about data consistency or durability? That
>> is, the data is updated so often and it can be reproduced fairly
>> rapidly so that if there is a server crash or random particles from
>> space mess up memory we'd just restart the machine and move on.
>
> For such a scenario, I'd suggest you:
>
> - Set up a filesystem that is memory-backed. On Linux, RamFS or TmpFS
>   are reasonable options for this.
>
> - The complication would be that your "restart the machine and move
>   on" needs to consist of quite a few steps:
>   - recreating the filesystem
>   - fixing permissions as needed
>   - running initdb to set up a new PG instance
>   - automating any needful fiddling with postgresql.conf, pg_hba.conf
>   - starting up that PG instance
>   - creating users, databases, schemas, ...
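For what it's worth, those restart steps could be scripted roughly as below. This is only a sketch: the mount point, tmpfs size, superuser name, and schema file path are illustrative assumptions, not anything from this thread.

```shell
#!/bin/sh
# Sketch: rebuild a throwaway PG cluster on tmpfs after a reboot.
# Run as root; assumes a 'postgres' OS user already exists.
set -e

MOUNT=/mnt/pgram          # assumed mount point
PGDATA=$MOUNT/data

# 1. Recreate the memory-backed filesystem
mount -t tmpfs -o size=2G tmpfs "$MOUNT"

# 2. Fix permissions as needed
mkdir -p "$PGDATA"
chown postgres:postgres "$PGDATA"
chmod 700 "$PGDATA"

# 3. Run initdb to set up a new PG instance
su postgres -c "initdb -D $PGDATA"

# 4. Automate the postgresql.conf fiddling -- durability is
#    explicitly not a goal here, so turn the safety knobs off
cat >> "$PGDATA/postgresql.conf" <<'EOF'
fsync = off
synchronous_commit = off
full_page_writes = off
EOF

# 5. Start the instance
su postgres -c "pg_ctl -D $PGDATA -w start"

# 6. Recreate users, databases, schemas, ...
su postgres -c "createdb scratch"
su postgres -c "psql -d scratch -f /path/to/schema.sql"  # hypothetical schema dump
```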
Doesn't PG now support putting both WAL and user table files onto
file systems other than the one holding the PG config files and PG
'admin' tables? Wouldn't doing so simplify the above considerably
by allowing just the WAL and user tables on the memory-backed file
systems? I wouldn't think the performance impact of leaving
the rest of the stuff on disk would be that large.
Or does losing WAL files mandate a new initdb?
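To sketch what that split might look like in practice: tablespaces can put user tables on another filesystem, and the WAL directory is commonly relocated with a symlink. All paths and names below are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: keep the cluster on disk, move only WAL and user
# tables onto a memory-backed filesystem. Assumes the cluster
# is at /var/lib/pgsql/data and tmpfs is mounted at /mnt/pgram.
set -e

PGDATA=/var/lib/pgsql/data

# Relocate WAL: stop the server, move pg_xlog, leave a symlink
su postgres -c "pg_ctl -D $PGDATA -w stop"
mv "$PGDATA/pg_xlog" /mnt/pgram/pg_xlog
ln -s /mnt/pgram/pg_xlog "$PGDATA/pg_xlog"
su postgres -c "pg_ctl -D $PGDATA -w start"

# Put user tables on tmpfs via a tablespace; the directory must
# exist and be owned by the postgres user
mkdir -p /mnt/pgram/ts
chown postgres:postgres /mnt/pgram/ts
su postgres -c "psql" <<'EOF'
CREATE TABLESPACE ramspace LOCATION '/mnt/pgram/ts';
CREATE TABLE fast_data (id int, payload text) TABLESPACE ramspace;
EOF
```

As I understand it, losing the WAL files alone doesn't strictly mandate a new initdb: pg_resetxlog can fabricate a fresh WAL for an existing cluster. But since anything in the tmpfs-backed tablespace would be gone too, rebuilding from scratch may be the simpler recovery path anyway.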
--
Steve Wampler -- swampler@xxxxxxxx
The gods that smiled on your birth are now laughing out loud.
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance