Greetings
Our company is writing a small ad-hoc implementation of a load balancer
for Postgres (`version()` = PostgreSQL 9.2.9 on
x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat
4.4.7-4), 64-bit).
We're using both streaming replication and WAL-shipping-based replication.
Most mainstream solutions seem to implement load balancing with plain
round robin over a connection pool. Given that our cloud nodes vary in
capacity and are subject to noisy-neighbor conditions, we need to factor
in instantaneous load profiles. (We achieved this by exporting some /sys
and /proc paths through custom views, and everything works as expected.)
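For illustration, here is a minimal sketch of one such view, assuming
the stock file_fdw extension is available (all object names below are
made up for this example, not our actual ones):
--------------------------------
-- Hypothetical sketch: exposing /proc/loadavg through a view
-- using file_fdw; object names are illustrative
--------------------------------
CREATE EXTENSION file_fdw;
CREATE SERVER procfs FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE proc_loadavg (line text)
    SERVER procfs
    OPTIONS (filename '/proc/loadavg', format 'text');
-- expose the 1-minute load average as a float
CREATE VIEW node_load AS
    SELECT split_part(line, ' ', 1)::float8 AS load_1min
    FROM proc_loadavg;
--------------------------------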
We're now adding functionality to temporarily blacklist hot standby
clusters based on their WAL replay lag; pg_xlog_location_diff() seems to
be the right tool for this, but we may be misusing it.
The current draft implementation runs the following queries and compares
their output to determine how many bytes a given standby is lagging.
Is there any shortcoming to such an approach?
--------------------------------
-- ON MASTER:
--------------------------------
SELECT
    pg_xlog_location_diff(pg_current_xlog_location(), '000/00000000')
;
--------------------------------
--------------------------------
-- ON STANDBY:
--------------------------------
SELECT
    pg_xlog_location_diff(
        COALESCE(
            pg_last_xlog_receive_location(),
            pg_last_xlog_replay_location()
        ),
        '000/00000000'
    )
;
--------------------------------
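For clarity, the comparison performed on the balancer side amounts to
something like the following, where :master_offset and :standby_offset
stand for the results of the two queries above and the 16 MB threshold
is a made-up example value:
--------------------------------
-- Illustrative balancer-side check
--------------------------------
SELECT (:master_offset - :standby_offset)
       > 16 * 1024 * 1024 AS should_blacklist;
--------------------------------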
Thanks in advance
Fabio Ugo Venchiarutti
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general