I am working with the rsyslog developers to improve its performance
when inserting log messages into databases.
Currently they have a postgres interface that works like all the other
ones: rsyslog formats an insert statement and passes it to the
interface module, which sends it to postgres (yes, each log message as
a separate transaction).
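For concreteness, what the module ends up sending today amounts to
something like the statement below (the table and column names are made
up for illustration; the real schema is whatever the rsyslog template
defines):

  INSERT INTO syslog (received_at, facility, priority, message)
      VALUES ('2009-04-01 12:00:01', 16, 6, 'example log line');
  -- sent on its own, so with autocommit it is also its own transaction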
The big win is going to be in changing the core of rsyslog so that it
can process multiple messages at a time (bundling them into a single
transaction).
But then we run into confusion.
Off the top of my head, I know of several different ways to get the
data into postgres (rough sketches follow the list):
1. begin; insert; insert;...;end
2. insert into table values (),(),(),()
3. copy from stdin
(how do you tell it how many records to read from stdin, or that you
have given it everything, without disconnecting?)
4. copy from stdin in binary mode
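Rough sketches of the four, against the same hypothetical syslog table
as above (the syntax here is illustrative, not what rsyslog currently
emits):

  -- 1. explicit transaction around individual inserts
  BEGIN;
  INSERT INTO syslog VALUES ('2009-04-01 12:00:01', 16, 6, 'line one');
  INSERT INTO syslog VALUES ('2009-04-01 12:00:02', 16, 6, 'line two');
  COMMIT;

  -- 2. single multi-row insert
  INSERT INTO syslog (received_at, facility, priority, message)
      VALUES ('2009-04-01 12:00:01', 16, 6, 'line one'),
             ('2009-04-01 12:00:02', 16, 6, 'line two');

  -- 3. COPY in text mode; the data lines are tab-separated, and the
  --    stream ends with a line containing only \. (or a PQputCopyEnd()
  --    call if the module drives COPY through libpq), so the
  --    connection can stay open for the next batch
  COPY syslog (received_at, facility, priority, message) FROM STDIN;
  2009-04-01 12:00:01	16	6	line one
  2009-04-01 12:00:02	16	6	line two
  \.

  -- 4. same thing, but the data stream uses PostgreSQL's binary format
  COPY syslog (received_at, facility, priority, message) FROM STDIN BINARY;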
And each of the options above can be done with prepared statements,
stored procedures, or functions.
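For example, option 1 with a server-side prepared statement could look
like this (again against the hypothetical table; a module going through
libpq would use PQprepare()/PQexecPrepared() instead of SQL-level
PREPARE/EXECUTE):

  PREPARE log_insert (timestamptz, int, int, text) AS
      INSERT INTO syslog (received_at, facility, priority, message)
      VALUES ($1, $2, $3, $4);

  BEGIN;
  EXECUTE log_insert ('2009-04-01 12:00:01', 16, 6, 'line one');
  EXECUTE log_insert ('2009-04-01 12:00:02', 16, 6, 'line two');
  COMMIT;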
I know that using procedures or functions can let you do fancy things,
like inserting the row(s) into the appropriate partition of a
partitioned table.
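As a minimal sketch of that idea (the partition layout and function
name are invented for illustration), a plpgsql function that routes
each row to a child table might look like:

  CREATE OR REPLACE FUNCTION insert_syslog(ts timestamptz, fac int,
                                           pri int, msg text)
  RETURNS void AS $$
  BEGIN
      -- hypothetical routing rule: kernel/system facilities go to one
      -- child table, everything else to another
      IF fac < 8 THEN
          INSERT INTO syslog_system VALUES (ts, fac, pri, msg);
      ELSE
          INSERT INTO syslog_other  VALUES (ts, fac, pri, msg);
      END IF;
  END;
  $$ LANGUAGE plpgsql;

  -- rsyslog would then just call:  SELECT insert_syslog(...);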
Other than this sort of capability, what sort of differences should be
expected between the various approaches (including prepared vs.
unprepared statements)?
Since the changes that rsyslog is making will affect all the other
database interfaces as well, any comments about big wins or things to
avoid for other databases would be appreciated.
David Lang