2012/12/24 Philipp Kraus <philipp.kraus@xxxxxxxxxxxx>:
> I need some ideas for creating a PG-based logger. I have a job
> that can run more than once, so at the moment the PK is the
> jobid & cycle number. Inserts into this table happen in
> parallel, with the same username, from different hosts
> (clustering). The user calls the executable "myprint" and the
> message is inserted into this table, but at the moment I don't
> know a good structure for the table. Each print call can be of
> a different length, so I think a text field is a good choice,
> but I don't know how to create a good PK value. IMHO a
> sequence can create problems because I'm logged in with the
> same user on multiple hosts, and a hash key value like SHA1
> based on the content is not a good choice, because the content
> is not unique, so I can get key collisions. I would like to
> create a record in the table for each "print" call, but how
> can I create a good key value without problems under parallel
> access? I think there can be more than 1000 inserts each
> second.
>
> Can anybody post a good idea?

Why is it necessary to have a primary key? What is the "cycle
number"?

For what it is worth, I put all my syslog in PG and have so far
been fine without primary keys. (I keep only an hour there at a
time, though, and it's only a few hundred megs.)

In the past, I have had trouble maintaining a high TPS while
having lots (hundreds) of connected clients; maybe you'll want
to use a connection pool.

--
Jason Dusek
pgp // solidsnack // C1EBC57DC55144F35460C8DF1FD4C6C1FED18A2B
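
To give a rough sketch of the kind of unkeyed log table I mean
(the table and column names here are invented for illustration,
not taken from the thread):

    CREATE TABLE job_log (
        jobid      integer      NOT NULL,
        cycle      integer      NOT NULL,
        host       text         NOT NULL,
        logged_at  timestamptz  NOT NULL DEFAULT now(),
        message    text         NOT NULL
    );

    -- No primary key: an append-only log table gets by without
    -- one. A plain index covers the usual lookups by job/cycle.
    CREATE INDEX job_log_jobid_cycle_idx ON job_log (jobid, cycle);

If a surrogate key ever turns out to be wanted, a bigserial
column would do; sequences hand out distinct values safely under
concurrent inserts, even from sessions on different hosts.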