Re: COPY TO and VACUUM


 



Hi Guys,

we found a suitable solution for our process: every 5-6 hours we run a CLUSTER statement on our big table. This "locks" activity, but allows us to recover all available space.
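The periodic maintenance described above might look like the following (a minimal sketch; `big_table` and its index name are placeholders, not taken from the thread):

```sql
-- Rewrite the table in index order, reclaiming dead space.
-- Note: CLUSTER takes an ACCESS EXCLUSIVE lock, so all reads
-- and writes on the table block until it finishes.
CLUSTER big_table USING big_table_pkey;

-- Later runs can omit the index name; PostgreSQL remembers
-- the index used by the previous CLUSTER on this table.
CLUSTER big_table;
```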

While testing this task we discovered another issue, and that's why I'm coming back to you for your experience:

during our process we run multiple simultaneous "COPY ... FROM" statements to load data into our table, but at the same time we also run "COPY ... TO" statements in parallel to export data for other clients.

We found that COPY ... TO queries are sometimes pending for more than 100 minutes while the destination file remains at 0 KB. Can you advise me how to solve this issue?
Is there a better way to bulk download data that avoids any kind of blocking when running in parallel?

Many thanks in advance


----- Original message -----
From: "Jeff Janes" <jeff.janes@xxxxxxxxx>
To: "Roberto Grandi" <roberto.grandi@xxxxxxxxxxxxxx>
Cc: "Kevin Grittner" <kgrittn@xxxxxxxxx>, pgsql-performance@xxxxxxxxxxxxxx
Sent: Thursday, September 5, 2013 20:14:26
Subject: Re: COPY TO and VACUUM

On Thu, Sep 5, 2013 at 9:05 AM, Roberto Grandi
<roberto.grandi@xxxxxxxxxxxxxx> wrote:
> Hi Jeff,
>
> the problem is that when continuously uploading vendor listings to our "big" table, autovacuum is not able to free space as we would like.

It might not be able to free it (to be reused) as fast as you need it
to, but it should be freeing it eventually.

> Secondly, if we launch a VACUUM after each "upload" we collide with other uploads that are running in parallel.

I wouldn't do a manual vacuum after *each* upload.  Doing one after
every Nth upload, where N is estimated to make up about 1/5 of the
table, should be good.  You are probably IO limited, so you probably
don't gain much by running these uploads in parallel, I would try to
avoid that.  But in any case, there shouldn't be a collision between
manual vacuum and a concurrent upload.  There would be one between two
manual vacuums but you could code around that by explicitly locking
the table in the correct mode nowait or with a timeout, and skipping
the vacuum if it can't get the lock.
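The "skip the vacuum if it can't get the lock" idea could be sketched like this. This is my own sketch, not Jeff's exact recipe: it relies on `lock_timeout` (added in PostgreSQL 9.3, current at the time of this thread), and `big_table` is a placeholder name:

```sql
-- VACUUM needs a SHARE UPDATE EXCLUSIVE lock, so two manual
-- VACUUMs on the same table conflict with each other, while a
-- concurrent COPY ... FROM (ROW EXCLUSIVE) does not.
-- Instead of queuing behind another session, give up quickly:
SET lock_timeout = '1s';
VACUUM big_table;   -- errors out if the lock is not obtained
                    -- within the timeout; the calling script can
                    -- catch that and simply skip this round
RESET lock_timeout;
```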

>
> Is it possible, form your point of view, working with isolation levels or table partitioning to minimize table space growing?

Partitioning by vendor might work well for that purpose.
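A sketch of what partitioning by vendor could look like. This uses declarative list partitioning, which arrived in PostgreSQL 10; at the time of this thread the same layout required inheritance plus triggers. All table and column names are illustrative placeholders:

```sql
-- One partition per vendor; names and columns are assumptions.
CREATE TABLE listings (
    vendor_id  integer NOT NULL,
    product_id integer NOT NULL,
    price      numeric
) PARTITION BY LIST (vendor_id);

CREATE TABLE listings_v1 PARTITION OF listings FOR VALUES IN (1);
CREATE TABLE listings_v2 PARTITION OF listings FOR VALUES IN (2);

-- Reloading one vendor's listing can then be a cheap
-- TRUNCATE + COPY on its own partition, which leaves no dead
-- rows behind, instead of bloating one big table:
-- TRUNCATE listings_v1;
-- COPY listings_v1 FROM '/path/to/vendor1.csv' WITH (FORMAT csv);
```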

Cheers,

Jeff


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance




