Andrew Smith wrote:

> - C++ app reads data from proprietary system and writes it into temp
> table in PostgreSQL
> - ASP.NET web service reads data from temp table in PostgreSQL and
> generates HTML

[snip]

> This temp table will probably contain up to 10000 records, each of
> which could be changing every second (data is coming from a real-time
> monitoring system). On top of this, I've then got the ASP.NET app
> reading the updated data values every second or so (the operators want
> to see the data as soon as it changes).

PostgreSQL - or, in fact, any relational database - isn't really a
great choice for this particular role. You don't care about retaining
the data, you're not that bothered about long-term data integrity, and
you're really just using the DB as a communications tool. (If you *DO*
care about the history, that's a different story - but then you're also
talking serious hardware.)

Personally, I think that for your real-time monitoring you might be
much better off using something like memcached as your intermediary.
Or, for that matter, a text file. It does depend on the complexity of
the queries you want to run on the data, though.

If you do want to use SQL, but don't care about the history and only
want it as a communication intermediary, I'd actually suggest MySQL
with MyISAM tables for this one isolated role. While horrifyingly
unsafe, MyISAM is also blazingly fast. Do NOT store anything you care
about keeping that way, though. There's a rough sketch at the end of
this mail.

If you're using Pg anyway for other things, or if you also intend to
store the history of the changes (say, as a change log in another
table on safe storage), then you might want to look at keeping the
temp table in a RAM-disk based tablespace. Again, sketch below.
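Something along these lines for the MyISAM approach - untested, and
the table and column names are made up, so adjust to suit:

    -- "Current values" table; ENGINE=MyISAM is what trades crash
    -- safety for raw speed here.
    CREATE TABLE live_readings (
        sensor_id INT NOT NULL PRIMARY KEY,
        reading   DOUBLE NOT NULL,
        updated   TIMESTAMP NOT NULL
    ) ENGINE = MyISAM;

    -- The C++ writer then just overwrites rows in place with MySQL's
    -- upsert, so the table never grows past one row per sensor:
    INSERT INTO live_readings (sensor_id, reading, updated)
    VALUES (42, 3.14, NOW())
    ON DUPLICATE KEY UPDATE
        reading = VALUES(reading), updated = VALUES(updated);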
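And for the Pg tablespace approach, assuming you've already mounted a
RAM disk (say, tmpfs at /mnt/pg_ram - path and names are made up here)
and given ownership of it to the postgres user:

    -- One-off setup, done as a superuser. The tablespace lives on the
    -- RAM disk, so anything in it is gone after a reboot.
    CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';

    -- The fast-changing table goes in the RAM tablespace; the change
    -- log (if you keep one) stays on safe storage elsewhere.
    CREATE TABLE live_readings (
        sensor_id integer PRIMARY KEY,
        reading   double precision NOT NULL,
        updated   timestamptz NOT NULL
    ) TABLESPACE ram_space;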
Do some testing and see how you go.

--
Craig Ringer