I don’t have suitable data to hand to test, but I would expect finding the max on a timestamp column would be PDQ, especially if it is an indexed column.
For the other column I’m guessing more work has to be done finding the length for each row. Maybe maintaining a column containing the current length for each row would help?
Anyway, can you test the max timestamp and the max col length separately to confirm where the time goes?
Re-reading the original question makes me think there may be more complexity to your query (or queries) than I realise.
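A minimal sketch of that split test, assuming a hypothetical table `my_table` with a timestamp column `updated_at` and a text column `payload` (all names here are placeholders, not from the original question):

```sql
-- Time each aggregate separately to see where the cost goes.
-- With a btree index on updated_at, max() can be answered from
-- the end of the index and should return almost instantly.
EXPLAIN ANALYZE SELECT max(updated_at) FROM my_table;

-- max(length(...)) has nothing to use by default, so this one
-- forces a sequential scan over every row.
EXPLAIN ANALYZE SELECT max(length(payload)) FROM my_table;

-- An expression index would make the length aggregate
-- indexable as well, at the cost of extra write overhead:
CREATE INDEX CONCURRENTLY my_table_payload_len_idx
    ON my_table (length(payload));
```

If the timestamp query is fast and the length query is slow, the expression index (or a maintained length column) is where the win would be.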
Dave
From: sbob [mailto:sbob@xxxxxxxxxxxxxxxxxxxxx]
Sent: 02 June 2021 20:18
To: David G. Johnston
Cc: Pgsql-admin
Subject: Re: column max length / max timestamps from system tables
Unfortunately this takes hours since the table has > 500 million rows,
just wanted to see if there was a shortcut / alternative
On 6/2/21 11:13 AM, David G. Johnston wrote:
On Wed, Jun 2, 2021, 10:05 sbob <sbob@xxxxxxxxxxxxxxxxxxxxx> wrote:
Hi all;
we need to pull the max timestamp for a set of columns and the max
string length for another set of columns. Can I get this from system
tables? If so, how up to date would it be?
No. Just query the table directly.
David J.
1) You can create an aggregate table and save the relevant aggregates as per your requirement.
Then add insert/update triggers that check whether the new value is greater than the one in the aggregate table; if it is, update that table, otherwise leave it unchanged.
On delete, remove the row from the aggregate table if the deleted row's values equal the stored aggregate values, since the cached max may no longer be valid.
Then create a function that queries the aggregate table and returns the relevant values; if no row is found, compute the aggregates from the base table, refresh the aggregate table, and return those values.
That way, unless you delete the row containing the max value, you should be able to get the aggregate values pretty quickly.
It would slow down DML by a small amount, though.
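The scheme above could be sketched roughly as below. All names (`my_table`, `payload`, `updated_at`, `agg_cache`) are made up for illustration; this is one possible shape, not a tested implementation.

```sql
-- Cache table holding the maintained aggregates.
CREATE TABLE agg_cache (
    tab_name text PRIMARY KEY,
    max_ts   timestamptz,
    max_len  integer
);

CREATE OR REPLACE FUNCTION agg_cache_upd() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        -- Raise the cached maxima if the new row exceeds them.
        UPDATE agg_cache
           SET max_ts  = greatest(max_ts, NEW.updated_at),
               max_len = greatest(max_len, length(NEW.payload))
         WHERE tab_name = 'my_table';
    ELSIF TG_OP = 'DELETE' THEN
        -- If the deleted row held a cached max, drop the cache
        -- row so the lookup function recomputes it next time.
        DELETE FROM agg_cache
         WHERE tab_name = 'my_table'
           AND (max_ts = OLD.updated_at
                OR max_len = length(OLD.payload));
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER agg_cache_trg
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW EXECUTE FUNCTION agg_cache_upd();
```

The lookup function would then read `agg_cache` first and fall back to a full `max()` over the base table (repopulating the cache) only when the row is missing.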
On Thu, 3 Jun 2021 at 01:31, <dave@xxxxxxxxxxxxxx> wrote:
Thanks,
Vijay
Mumbai, India