On 07/11/2012 07:46 PM, Andy Halsall wrote:
PL/PgSQL or SQL stored proc? There's a definite calling overhead for PL/PgSQL compared to plain SQL functions, and SQL functions in turn cost more than a direct statement. These costs aren't big; they're massively outweighed by any kind of disk access or any non-trivial query. They do start to add up if you have a lot of procs that wrap a simple "SELECT * FROM x WHERE x.id = $1", though.

0.0015 to 0.002 milliseconds? That's ... um ... fast. Presumably that's measured in a loop where your no-op is run repeatedly, without connection setup costs, etc. UNIX domain sockets are typically at least as fast and have somewhat lower overhead.
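As a sketch of the difference (the table `x` and the function names are assumptions, not anything from your schema): a simple SQL-language wrapper like the first one can often be inlined into the calling query by the planner, while the PL/pgSQL version pays the procedural-language call overhead on every invocation.

```sql
-- Plain SQL function: simple enough that the planner can often
-- inline it into the calling query, avoiding per-call overhead.
CREATE FUNCTION get_x_sql(p_id integer) RETURNS SETOF x AS $$
    SELECT * FROM x WHERE x.id = p_id;
$$ LANGUAGE sql STABLE;

-- PL/pgSQL equivalent: incurs the PL/pgSQL call machinery each
-- time, even for a trivial lookup like this.
CREATE FUNCTION get_x_plpgsql(p_id integer) RETURNS SETOF x AS $$
BEGIN
    RETURN QUERY SELECT * FROM x WHERE x.id = p_id;
END;
$$ LANGUAGE plpgsql STABLE;
```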
I'm not sure how much a faster server would help with single-query response time. It'll help with response to many parallel queries, but may not dramatically speed up a single query, especially a tiny, lightweight one. The Xeon 7040 (http://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-(4M-Cache-3_00-GHz-667-MHz-FSB)) is not the newest beast out there, but it's not exactly slow.

Honestly, PostgreSQL's focus is on performance with bigger units of work, not on being able to return a response to a tiny query in microseconds. If you are converting an application that has microsecond response-time requirements and hammers its database with millions of tiny queries, PostgreSQL is probably not going to be your best bet.

If you're able to adapt the app to use set-oriented access patterns instead of looping, eg instead of (pseudocode):

    customer_ids = [ ... array from somewhere ... ]
    for customer_id in customer_ids:
        c = SELECT c FROM customer c WHERE c.customer_id = :customer_id
        if c.some_var:
            UPDATE customer SET some_other_var = 't'
            WHERE customer_id = :customer_id

you can do:

    UPDATE customer
    SET some_other_var = [_expression_]
    WHERE [_expression_]

then you'll get much better results from Pg.

--
Craig Ringer
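To make the set-oriented rewrite concrete (column names like some_var and some_other_var follow the pseudocode above; the :customer_ids array parameter is an assumption), one statement can replace the whole fetch/test/update loop, with the WHERE clause doing the per-row test the application code did:

```sql
-- Single round trip instead of 2N queries: the "if c.some_var"
-- test from the loop becomes part of the WHERE clause, and the
-- array parameter replaces the client-side iteration.
UPDATE customer
SET some_other_var = 't'
WHERE customer_id = ANY(:customer_ids)
  AND some_var;
```

Besides saving the per-query network and parse/plan overhead, this lets the planner pick a bulk-friendly strategy for the whole set of rows at once.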