Re: Restricting Postgres

On Wed, 2004-11-03 at 21:25, Martin Foster wrote:
> Simon Riggs wrote:
> > On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
> > 
> >>Is there a way to restrict how much load a PostgreSQL server can take 
> >>before dropping queries in order to safeguard the server?    I was 
> >>looking at the login.conf(5) man page, and while it allows me to limit 
> >>by processor time, this does not seem to fit my specific needs.
> >>
> >>Essentially, I am looking for functionality similar to what Sendmail 
> >>and Apache have.   Once the load of the system reaches a certain 
> >>defined limit, the daemon drops tasks until it can resume normal 
> >>operation.
> > 
> > 
> > Sounds great... could you give more shape to the idea, so people can
> > comment on it?
> > 
> > What limit? Measured how? Normal operation is what?
> > 
> > Drop what? How to tell?
> > 
> > 
> 
> Let's use the example in Apache, there is the Apache::LoadAvgLimit 
> mod_perl module which allows one to limit based on the system load 
> averages.   Here is an example of the configuration one would find:
> 
>    <Location /perl>
>      PerlInitHandler Apache::LoadAvgLimit
>      PerlSetVar LoadAvgLimit_1 3.00
>      PerlSetVar LoadAvgLimit_5 2.00
>      PerlSetVar LoadAvgLimit_15 1.50
>      PerlSetVar LoadAvgRetryAfter 120
>    </Location>
> 
> The end state is simple: once the one-minute load average moves above 
> 3.00, the web server will not process the CGI scripts or mod_perl 
> applications under that directory.  Instead it returns a 503 error, 
> saving the system from being crushed by ever-increasing load averages.
> 
> Only once the load average is below the defined limits will the server 
> process requests as normal.   This is not necessarily the nicest or 
> cleanest way of doing things, but it does allow the Apache web server to 
> prevent a collapse.
> 
> There are ways of restricting the size of files, number of concurrent 
> processes and even memory being used by a daemon.  This can be done 
> through ulimit or the login.conf file if your system supports it. 
> However, there is no way to restrict based on load averages, only 
> processor time, which is ineffective for a perpetually running daemon 
> like PostgreSQL.
> 
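
For the record, those per-process caps look something like this (a
rough sketch; the exact flags and units vary by shell and OS):

   ulimit -n 1024      # max open file descriptors per process
   ulimit -u 128       # max processes for this user
   ulimit -v 262144    # max virtual memory, in kilobytes

As you say, none of these react to the load average itself.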

Not all workloads are created equal, so mixing them can be tricky.
This will be better in 8.0 because seq scans don't spoil the cache.

Apache is effectively able to segregate the workloads because each
workload is "in a directory". SQL isn't stored anywhere for PostgreSQL
to say "just those ones please", so defining which statements are in
which workload is the tricky part.

PostgreSQL workload management could look at userid, tables, processor
load (?) and estimated cost to decide what to do.
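
To make that concrete, one could imagine the admin declaring something
like this (entirely hypothetical syntax; nothing of the sort exists
today):

   CREATE WORKLOAD reports
     FOR USER report_user
     WHEN load_avg > 2.0 OR estimated_cost > 100000
     THEN QUEUE;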

There is a TODO item on limiting numbers of connections per
userid/group, in addition to the max number of sessions per server.
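
If and when that gets done, I'd expect it to surface as something like
this (illustrative syntax and names only):

   ALTER USER web_user CONNECTION LIMIT 20;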

Perhaps the easiest way would be to have the Apache workloads segregated
by PostgreSQL userid, then limit connections to each.
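
In mod_perl terms that just means each workload connects as its own
PostgreSQL user, via the standard DBI interface (the user names here
are made up):

   use DBI;

   # interactive pages connect as a lightweight user
   my $dbh = DBI->connect('dbi:Pg:dbname=app', 'web_user', $pw);

   # heavy reporting jobs connect as a separate, capped user
   my $rpt = DBI->connect('dbi:Pg:dbname=app', 'report_user', $pw);

A per-userid connection limit would then cap each workload
independently.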

> For example, using Apache::DBI or pgpool, the DBMS may be required to 
> spawn a great many child processes in short order.   This in turn can 
> cause a major spike in processor load, and if high-demand queries run 
> unchecked, the load can climb until the server buckles.
> 
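
Capping the pool size is the usual defence there. In pgpool that is
roughly (sketching from memory; check your pgpool.conf for the exact
parameter names):

   num_init_children = 32   # preforked pgpool server processes
   max_pool = 4             # cached backend connections per child

and on the Apache side, MaxClients bounds how many children can ask
for a connection at once.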

That's been nicely covered by John and Matt on the other threads, so
you're sorted out for now, and it doesn't look like a bug in PostgreSQL.

> Of course, I am not blaming PostgreSQL, there are probably some 
> instabilities in the AMD64 port of FreeBSD 5.2.1 for dual processor 
> systems that lead to an increased chance of failure instead of recovery. 

Good!

-- 
Best Regards, Simon Riggs


