Tom Lane wrote:
Steve Eckmann <eckmann@xxxxxxxxxxxx> writes:
Thanks for the suggestion, Tom. Yes, I think I could do that. But I
thought what I was doing now was effectively the same, because the
PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowed to
include multiple SQL commands (separated by semicolons) in the command
string. Multiple queries sent in a single PQexec call are processed in a
single transaction...." Our simulation application has nearly 400 event
types, each of which is a C++ class for which we have a corresponding
database table. So every thousand events or so I issue one PQexec() call
for each event type that has unlogged instances, sending INSERT commands
for all instances. For example:
    PQexec(dbConn, "INSERT INTO FlyingObjectState VALUES (...); "
                   "INSERT INTO FlyingObjectState VALUES (...); ...");
Hmm. I'm not sure if that's a good idea or not. You're causing the
server to take 1000 times the normal amount of memory to hold the
command parsetrees, and if there are any O(N^2) behaviors in parsing
you could be getting hurt badly by that. (I'd like to think there are
not, but would definitely not swear to it.) OTOH you're reducing the
number of network round trips which is a good thing. Have you actually
measured to see what effect this approach has? It might be worth
building a test server with profiling enabled to see if the use of such
long command strings creates any hot spots in the profile.
regards, tom lane
No, I haven't measured it. I will compare this approach with others
that have been suggested. Thanks. -steve