
Re: allow LIMIT in UPDATE and DELETE

On Fri, May 19, 2006 at 10:25:19AM -0700, Shelby Cain wrote:
> ----- Original Message ----
> >From: Csaba Nagy <nagy@xxxxxxxxxxxxxx>
> >To: Shelby Cain <alyandon@xxxxxxxxx>
> >Cc: SCassidy@xxxxxxxxxxxxxxxxxxx; Postgres general mailing list <pgsql-general@xxxxxxxxxxxxxx>; pgsql-general-owner@xxxxxxxxxxxxxx
> >Sent: Friday, May 19, 2006 11:46:42 AM
> >Subject: Re: [GENERAL] allow LIMIT in UPDATE and DELETE
> >
> >Well, sometimes it's not that easy. How would you handle a batch
> >processing system which stores incoming requests in a queue table in
> >the database, and then periodically processes a batch of them, with
> >the additional constraint that it may process at most 1000 at a time
> >so it won't produce a too-long-running transaction? Suppose the
> >processing is quite costly, and the queue can get bursts of incoming
> >requests which then have to be processed slowly... the requests come
> >from the web and must be processed asynchronously, and the insert
> >into the database must be very fast.
> 
> I can't imagine a case where a properly tuned PostgreSQL installation
> with appropriate hardware backing it couldn't handle that particular
> kind of workload pattern.  However, I usually work with Oracle, where
> tables used as queues don't have the same performance issues you'd
> run into with PostgreSQL.
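
For reference, the usual workaround for the missing LIMIT is to push
it into a subquery. A minimal sketch (the table and column names here
are invented for illustration):

-- Hypothetical queue table:
CREATE TABLE request_queue (
    id      serial PRIMARY KEY,
    payload text NOT NULL
);

-- DELETE takes no LIMIT, so bound the batch with a subquery instead:
DELETE FROM request_queue
WHERE id IN (SELECT id
               FROM request_queue
              ORDER BY id
              LIMIT 1000);

That keeps each transaction to at most 1000 rows, at the cost of an
extra index scan to pick the batch.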

Just try to keep (what should stay) a small queue table in the same
database as long-running reporting transactions. While a long-running
report is executing you might as well suspend all vacuuming of that
queue table, because it won't do you any good; the open report
transaction means vacuum can't remove anything.
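
It's easy to see for yourself; a contrived sketch (again with invented
names), using two sessions:

-- Session 1: a "report" holds its transaction open.
BEGIN;
SELECT count(*) FROM some_big_report_table;
-- ...and the session stays open while the report is assembled.

-- Session 2: churn the queue, then try to reclaim the space.
DELETE FROM request_queue WHERE id <= 1000;
VACUUM VERBOSE request_queue;
-- The VERBOSE output shows dead row versions that cannot be removed
-- yet: vacuum has to keep anything newer than the oldest open
-- transaction, so the pages stay bloated until session 1 commits.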

I've seen a case where a queue table should always fit into a single
database page, two at most. But because some transactions run for a
minute or two, that table is normally about 40 pages, almost entirely
dead space. Of course the same problem affects all the indexes on that
table as well.
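
You can watch it happen in pg_class (relpages and reltuples are
refreshed by VACUUM and ANALYZE):

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'request_queue';

A handful of live rows spread across ~40 pages is nothing but dead
space that vacuum can't reclaim while the old transaction is open.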

I can't imagine how bad this would be if the database actually had
hour-long reports that had to run... and since pg_dump is itself one
long-running transaction, it's lucky the system is quiet at night when
it runs.

> Regardless, this type of queue problem can also be tackled by having
> your data layer persist the input from the web in memory (which keeps
> the perceived response time to the client low) and post it to the
> table as fast as the database allows.

Uh, and just what happens when your web front-end crashes then??
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@xxxxxxxxxxxxx
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

