
Re: Can LISTEN/NOTIFY deal with more than 100 every second?



>>> CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO
>>> ALSO NOTIFY logevent;                                    
>                                                            
>> This is better off as a statement-level trigger, so you won't have
>> to issue one notify per insert, but one per group of inserts.     

> I need all the detail of each row to be logged, though I defined only
> one data column in this case.

You misunderstood - replace the RULE that does the NOTIFY with a
statement-level trigger, but keep the log insertion at row level.
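
Untested sketch of what I mean (the function and trigger names here are
just placeholders):

    CREATE OR REPLACE FUNCTION notify_logevent() RETURNS trigger AS $$
    BEGIN
        NOTIFY logevent;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    DROP RULE send_notify ON log;

    CREATE TRIGGER log_send_notify
        AFTER INSERT ON log
        FOR EACH STATEMENT
        EXECUTE PROCEDURE notify_logevent();

The rows still get inserted one by one as before, but the NOTIFY fires
once per INSERT statement instead of once per row. (Duplicate
notifications of the same name within a transaction are collapsed to one
at commit anyway, so listeners see no difference.)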

>> It also looks like you might be reinventing a wheel - maybe can you do
>> what you want with Slony's log shipping?                              
> Thanks for the reminder. I quickly went through the Slony-I
> documentation today. Though I can't fully understand how it works, it
> seems that the slon daemon does this work by polling once a threshold
> value is exceeded. Maybe this is a good solution when the performance
> of the LISTEN/NOTIFY signal is not satisfactory.

Probably not - Slony uses LISTEN and NOTIFY as well. I meant more of the
wheel reinvention of logging changes and shipping them elsewhere. But if
it's just one table, you are probably better off rolling your own.

> Sorry that I didn't explain this clearly. My demo client waits in
> select(3C) for the notifies; once one arrives, it runs a SELECT on the
> log table, prints the result to the screen, and then deletes those rows
> from the log table. When the INSERT rate is about 75 per second, the
> client produced no output for several hundred seconds; meanwhile,
> 'SELECT COUNT(*) FROM log' in psql showed the rows in the log table
> climbing steadily to about 30K+ before the client deleted them. I guess
> the backend can't deal with the signals in time.

It's still not entirely clear what's going on, but we have a better idea now.
Why would the table ever have 30,000 rows? At 75 per second, that means
roughly a seven-minute gap - are you saying that's how long it takes before the
client notices the NOTIFY? If so, that's very wrong - the time lag should be
measured in sub-second intervals, so perhaps your client is doing something
wrong.
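
As a sanity check, plain psql can act as the listener (it checks for
pending notifications after every command). Something like:

    -- session 1:
    LISTEN logevent;

    -- session 2 (adjust the INSERT to match your log table's columns):
    INSERT INTO log VALUES ('test');

    -- back in session 1, run any command (even a bare semicolon) and
    -- psql should print something like:
    -- Asynchronous notification "logevent" received from server process
    -- with PID 12345.

If that round trip is quick but your C client lags, the problem is
likely in the client's select() loop rather than in the backend.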

--
Greg Sabino Mullane greg@xxxxxxxxxxxx
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201002021022
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8


