On Fri, Mar 14, 2014 at 4:30 AM, acanada <acanada@xxxxxxx> wrote:
Hello,
I'm having time issues when adding new fields to a big table. I hope you can give me some hints to speed up updates on a table with 124 million rows...
This is what I do:
First I create a tmp_table with the data that will be added to the big table:
\d+ svm_confidence_id_tmp
Table "public.svm_confidence_id_tmp"
Column | Type | Modifiers | Storage | Stats target | Description
---------------+------------------+-----------+---------+--------------+-------------
id | integer | not null | plain | |
svmconfidence | double precision | | plain | |
Indexes:
"svm_confidence_id_tmp_pkey" PRIMARY KEY, btree (id)
....
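A table like that can be created and loaded roughly as follows (a sketch only; the CSV file name is a placeholder, the real data source isn't shown here):

create table svm_confidence_id_tmp (
    id            integer primary key,
    svmconfidence double precision
);
-- load the new values from the client side; 'svm_confidence.csv' is a hypothetical file
\copy svm_confidence_id_tmp (id, svmconfidence) from 'svm_confidence.csv' with (format csv)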
Then I update the svmConfidence field of the document table like this:
update document as d set "svmConfidence" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;
But it takes too much time.
Is there something I should take into account? Any hints?
Should I do it in a different way?
If your concern is how long the update holds the rows locked, you can break it into a series of shorter transactions:
with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )
update document as d set "svmConfidence" = t.svmconfidence from t where t.id = d.id;
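Each execution of that statement is its own transaction (assuming autocommit), so the rows in document are only locked for the duration of one 10000-row batch. Keep re-running it until it reports UPDATE 0, i.e. until the tmp table is empty; a quick way to check progress is:

select count(*) from svm_confidence_id_tmp;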
Cheers,
Jeff