
Re: Best practices to manage custom statistics


 



Sorry for the late reply... I'm having some rather rough days.

On 08/11/2016 21:28, Adrian Klaver wrote:
On 11/08/2016 12:13 PM, Moreno Andreo wrote:
[...]

In your experience, would this approach help me lower server load?
Are there any other approach I can try?

Instead of pushing why not pull.
Excuse me Adrian, but I don't follow what you mean by pulling instead of pushing.
The clients already pull data from the server for just about everything they need... what I'd like to do is either
- have the database push updates when needed,
or
- have the client still pull data from the database, but query a reduced dataset (a single row with all the values for that user, rather than thousands of rows)
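As a sketch of the second option (all table and column names here are made up, since our actual schema isn't shown): a one-row-per-user summary table kept current by a trigger, so each client poll becomes a single primary-key lookup instead of an aggregate over thousands of rows. This assumes PostgreSQL 9.5+ for INSERT ... ON CONFLICT.

```sql
-- Hypothetical one-row-per-user summary table
CREATE TABLE user_stats (
    user_id    integer PRIMARY KEY,
    msg_count  bigint NOT NULL DEFAULT 0,
    updated_at timestamptz NOT NULL DEFAULT now()
);

-- Keep it current from the (assumed) detail table
CREATE OR REPLACE FUNCTION bump_user_stats() RETURNS trigger AS $$
BEGIN
    INSERT INTO user_stats (user_id, msg_count, updated_at)
    VALUES (NEW.user_id, 1, now())
    ON CONFLICT (user_id)
    DO UPDATE SET msg_count  = user_stats.msg_count + 1,
                  updated_at = now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER messages_stats_trg
    AFTER INSERT ON messages
    FOR EACH ROW EXECUTE PROCEDURE bump_user_stats();
```

With something like this, the per-client poll is just SELECT msg_count FROM user_stats WHERE user_id = $1, which is far cheaper to run every 60 seconds than recomputing the counts.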
In other words do the users really check/need the statistics every 20 secs?
Ideally, the client would read the real value whenever it changes. In practice I'm happy with a roughly consistent value (if it should be 800 and it reads 799, it's not an earthquake), at least on a regular basis. The catch is that when there's no activity we poll the database uselessly, which is why I thought about "pushing" data from the backend to the client; that would be the closest to the ideal solution.
Given that you say exact is not important over the course of day, why not create a mechanism for the user to poll the database when they need the information.

This is what we did in the past. The result was that users _did not_ refresh the values (by clicking an "update" button) and caused disasters working with stale data. They forgot to click it, or they refused to ("it's another click, I'm wasting my time", and many other, even silly, excuses)... but they're the customers, they pay, and as we say here, "the customer is always right".

So we changed approach: now we poll for values and for data every 60 seconds, not 20 (I just checked the actual interval). I need something lighter for the DB backend, at least for the values procedure. If we had only one database, I think the queries and datasets would stay in cache, so response times would be faster. With more than 350 databases that's not possible (or we would have to grow RAM to very large sizes...).

I've also thought about using LISTEN/NOTIFY to push value updates to the client only when needed, but I've read that with Npgsql we would need to keep a connection open, and AFAIK that's not a good idea.
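For what it's worth, the server side of that idea is cheap; a sketch, with hypothetical channel and table names (this assumes the summary lives in a table like user_stats):

```sql
-- Notify listeners whenever a user's stats row changes
CREATE OR REPLACE FUNCTION notify_stats_change() RETURNS trigger AS $$
BEGIN
    -- Payload is the user id, so each client can ignore
    -- notifications that aren't about its own user.
    PERFORM pg_notify('stats_changed', NEW.user_id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_stats_notify_trg
    AFTER INSERT OR UPDATE ON user_stats
    FOR EACH ROW EXECUTE PROCEDURE notify_stats_change();
```

The client would then run LISTEN stats_changed on a dedicated connection and re-query only when a notification for its user arrives; as I understand it, recent Npgsql versions expose this via a Notification event on the connection, but it's true that it ties up one open connection per listening client.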

Thanks
Moreno












--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


