
Re: [SPAM] Re: Best practices to manage custom statistics


 



On 11/23/2016 05:24 AM, Moreno Andreo wrote:
On 11/15/2016 18:19, Adrian Klaver wrote:
On 11/15/2016 07:39 AM, Moreno Andreo wrote:
Sorry for the late reply... I'm having some rough days....

On 11/08/2016 21:28, Adrian Klaver wrote:
On 11/08/2016 12:13 PM, Moreno Andreo wrote:
[...]

In your experience, would this approach help me lower server load?
Are there any other approach I can try?



So does the user need only their data or do they need the other users
data also?
They may also download their group mates' data (stats, numbers) if they
want. The numbers can be confusing because every user can have more than
one workplace.



How can they be working with 'old' data? The queries you are running
are compiling stats on data that exist at the time they are run and at
any point in time between stats runs the user is working with current
data regardless of what the last stats say.
Since we are on ADO.NET (with Npgsql) and we don't keep connections open
(we query, fill a structure, and disconnect), in the time slice between two
updates users will have both data and stats that are not "real" at that
moment...
I'll try to give an example.
You and I are teammates, each working at his own place (several miles
apart). You read data from my database and get both stats and work data.
Suddenly I make a change. This change won't be visible to you until you
refresh the data (that used to mean pressing the "Update" button; now it's
automated every 60 seconds).
Hope it's clearer now...


Aah, I get it now. You are refreshing forms every 60 seconds over 350 users, each with their own database. Actually it is worse than that, as there is user overlap across databases (up to 10 per user), so the same database can be hit multiple times at a given refresh. It seems physics is at work here, as you have already pointed out: fetching all that data at regular intervals taxes the bandwidth as well as the CPU/storage.

High-level solutions that come to mind to spread the load out:

1) Replication to spread data across multiple machines, or just splitting the databases over multiple non-replicated Postgres instances on separate machines.
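For the non-replicated split, a minimal sketch (host names invented for illustration) is to assign each database to one of several independent Postgres hosts with a stable hash, so every client can deterministically find the right instance without a lookup service:

```python
import hashlib

# Hypothetical list of independent, non-replicated Postgres hosts.
HOSTS = ["pg-a.example.net", "pg-b.example.net", "pg-c.example.net"]

def host_for_database(db_name: str) -> str:
    """Map a database name to a host with a stable hash, so the
    assignment survives client restarts (unlike Python's built-in
    hash(), which is randomized per process)."""
    digest = hashlib.sha256(db_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(HOSTS)
    return HOSTS[index]
```

The trade-off is that adding a host remaps most databases; with only ~350 databases a fixed lookup table is just as workable.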

2) Caching results from each individual database so subsequent calls for the information do not touch the database. You already touched on this with your counter table.
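A sketch of idea 2 on the application side, assuming the stats query is wrapped in a small TTL cache (the names and the 60-second TTL are illustrative, not from the thread):

```python
import time

class TTLCache:
    """Cache query results for ttl_seconds so repeated refreshes
    from overlapping users hit memory instead of the database."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]          # still fresh: no database hit
        value = compute()            # e.g. run the stats query
        self._store[key] = (now + self.ttl, value)
        return value
```

On the server side the same idea can live in the counter table mentioned above, or in a materialized view refreshed on a schedule.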

3) Spreading out the refresh interval. I'm not sure whether the 60-second interval is synced across users. Still, you could set different refresh intervals and/or change them on the fly when load increases: a back-off equation, so to speak.
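The "back-off equation" could be as simple as scaling the refresh interval with observed load, capped at a maximum, plus some jitter so the 350 clients don't refresh in lockstep. The thresholds below are invented for illustration:

```python
import random

def refresh_interval(load: float,
                     base: float = 60.0,
                     max_interval: float = 300.0) -> float:
    """Return the next refresh interval in seconds.

    load is a unitless server-load factor (1.0 = normal). The interval
    grows linearly with load above normal and is capped so clients
    never go completely stale."""
    if load <= 1.0:
        return base
    return min(base * load, max_interval)

def jittered(interval: float, spread: float = 0.2) -> float:
    """Randomize the interval by +/- spread so clients desynchronize."""
    return interval * random.uniform(1.0 - spread, 1.0 + spread)
```

A client would then sleep for `jittered(refresh_interval(load))` between refreshes, with the load factor reported by the server or inferred from response times.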



Except when they are wrong :) Still, been there.
Don't tell me.... :-)



Thanks in advance and sorry for the long message (but I had to explain
such a complex thing)
Moreno.-



--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx


--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


