On 3/25/2015 2:19 AM, ginkgo36 wrote:
Hi all,
I have 1 table which has:
- 417 columns
- 600,000 rows of data
- 34 indexes
When I run queries on this table, they take very long. For example:
update master_items set temp1 = '' where temp1 <> ''
-- Query returned successfully: 435214 rows affected, 1016137 ms execution time.
that query is modifying 435,000 rows of your table, and if temp1 is an
indexed field, the index has to be updated 435,000 times, too.
note that in postgres, an UPDATE translates into an INSERT and a DELETE:
the old row version is left behind as a dead tuple until vacuum cleans it up.
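if temp1 really is indexed, one workaround (just a sketch; the index name
here is hypothetical, check yours with \d master_items) is to drop that
index, run the bulk update, and rebuild it once afterwards:

drop index if exists master_items_temp1_idx;
update master_items set temp1 = '' where temp1 <> '';
-- rebuild once instead of maintaining the index 435,000 times
create index master_items_temp1_idx on master_items (temp1);

that also keeps the index from bloating during the update.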
alter table master_items add "TYPE-DE" varchar default ''
-- Query returned successfully with no result in 1211019 ms.
that is rewriting all 600,000 rows to add the new field with its
default empty-string content.
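one way to avoid that full rewrite (a sketch, assuming you can live with
NULLs in the new column while it is backfilled, and that the table has an
integer primary key named id) is to add the column without a default, set
the default separately, and backfill existing rows in batches:

-- metadata-only change, no table rewrite
alter table master_items add "TYPE-DE" varchar;
-- applies only to rows inserted from now on, still no rewrite
alter table master_items alter "TYPE-DE" set default '';
-- backfill existing rows in chunks, one primary-key range per transaction
update master_items set "TYPE-DE" = ''
 where "TYPE-DE" is null and id between 1 and 100000;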
update master_items set "feedback_to_de" = 'Yes'
-- Query returned successfully: 591268 rows affected, 1589335 ms execution time.
that is modifying 591,000 rows, essentially rewriting the whole table.
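if you do have to touch most of the table, doing it in chunks lets
autovacuum reclaim the dead row versions between batches instead of the
table doubling in size in one shot (a sketch, again assuming an integer
primary key named id):

update master_items set "feedback_to_de" = 'Yes'
 where "feedback_to_de" is distinct from 'Yes'
   and id between 1 and 100000;
-- repeat for the next id range, then VACUUM master_items when done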
Can you help me find any way to increase performance?
more/faster storage. faster CPU. more RAM.
or, completely rethink how you store this data and normalize it as
everyone else has said.
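for example (purely illustrative names), the scratch/workflow columns
could live in a narrow side table keyed by the item's primary key, so a
bulk update touches far fewer bytes per row and far fewer indexes:

create table master_item_flags (
    item_id        integer primary key references master_items (id),
    temp1          varchar default '',
    feedback_to_de varchar default ''
);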
--
john, recycling bits in santa cruz