On Wed, Nov 30, 2016 at 8:04 AM, Cachique <cachique@xxxxxxxxx> wrote:
Walter, you can try pg_cron. It looks like what you want.
https://github.com/citusdata/pg_cron
"pg_cron is a simple cron-based job scheduler for PostgreSQL (9.5 or higher) that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database."

On Tue, Nov 29, 2016 at 10:40 PM, Patrick B <patrickbakerbr@xxxxxxxxx> wrote:

2016-11-30 14:21 GMT+13:00 John R Pierce <pierce@xxxxxxxxxxxx>:

On 11/29/2016 5:10 PM, Patrick B wrote:
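Assuming pg_cron is installed and the history table already exists, scheduling a once-a-minute snapshot might look like this (the table name replication_history is hypothetical; cron.schedule is pg_cron's scheduling function):

```sql
-- Hypothetical history table, mirroring the view plus a timestamp
CREATE TABLE replication_history AS
  SELECT now() AS time_pk, * FROM pg_stat_replication WITH NO DATA;

-- Run the insert every minute, using standard cron syntax
SELECT cron.schedule(
  '* * * * *',
  $$INSERT INTO replication_history
    SELECT now(), * FROM pg_stat_replication$$
);
```

Note this only covers the master side; pg_stat_replication is populated on the server the standbys connect to, so the slave's own lag still has to be collected separately.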
Yep.. once a minute or so. And yes, I need to store a history with timestamp.
Any idea? :)
so create a table with a timestamptz plus all the fields you want, and have a script (Perl? Python? whatever your favorite poison is with database access) that once a minute executes those two queries (you'll need two database connections, since only the slave knows how far behind it is) and inserts the data into your table.
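The loop John describes could be sketched in Python. This is a minimal sketch, not a full implementation: the two query runners are stand-ins for real connections (e.g. psycopg2, one to the master and one to the slave), and the SQL and insert callback are illustrative assumptions:

```python
import time

# On the master: pg_stat_replication shows each standby's positions.
MASTER_SQL = """select now(), client_addr, state, sent_location,
                       replay_location from pg_stat_replication"""
# On the slave: only it knows how far behind it is.
SLAVE_SQL = "select now() - pg_last_xact_replay_timestamp() as lag"

def sample(run_on_master, run_on_slave):
    """Run both queries and merge the two rows into one record."""
    master_row = run_on_master(MASTER_SQL)  # e.g. cursor.fetchone() on conn #1
    slave_row = run_on_slave(SLAVE_SQL)     # e.g. cursor.fetchone() on conn #2
    return master_row + slave_row           # one row for the history table

def poll(run_on_master, run_on_slave, insert, interval=60, iterations=None):
    """Sample both servers every `interval` seconds and insert each record."""
    n = 0
    while iterations is None or n < iterations:
        insert(sample(run_on_master, run_on_slave))
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval)

# With real connections, run_on_master/run_on_slave would execute the SQL
# over their respective connections, and insert() would INSERT the combined
# row into the timestamped history table.
```

The `iterations` parameter just makes the loop testable; in production it would run unbounded (or the script would be fired once a minute by cron instead of sleeping).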
--
john r pierce, recycling bits in santa cruz

Can't I do it on the DB side, using a trigger maybe, instead of using cron?

Patrick
>The OP wants to run queries on the master and the slave, and combine them.
Another option, although a bit convoluted, would be to extract the data to a csv file, scp it to the destination server, and then COPY it in from there.
Contents of bash script
===================
#!/bin/bash
# Export the replication status to CSV (pg_stat_replication is only
# populated on the server the standbys connect to).
psql -U postgres -d postgres -c "
copy (select now() as time_pk,
             client_addr,
             state,
             sent_location,
             write_location,
             flush_location,
             replay_location,
             sync_priority
        from pg_stat_replication) to stdout with csv" > /tmp/results.csv

# Ship the file to the destination server
scp /tmp/results.csv destination_server:/tmp/

# Load it into the history table on the destination
psql -U postgres -h destination_server -d postgres -c "
COPY data_table
FROM '/tmp/results.csv'
WITH csv;"
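If saved as a script, it could then be driven once a minute by regular cron on a box that can reach both servers (the script path here is hypothetical):

```
# crontab entry: take a replication snapshot every minute
* * * * * /usr/local/bin/replication_snapshot.sh
```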
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.