processing a large number of rows with plpgsql

hello everybody,

we are trying to move the data from table1 into table2 using a plpgsql stored procedure which performs a simple data conversion

there are about 50 million rows

the tables are relatively simple: fewer than a dozen columns, most of them integer, a couple char(32), and one varchar holding URLs
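
to illustrate, a minimal sketch of the kind of per-row conversion loop in question (the table1/table2 column names and the lower() call are only placeholders, not the real schema or conversion):

CREATE OR REPLACE FUNCTION convert_table1() RETURNS void AS $$
DECLARE
    r table1%ROWTYPE;
BEGIN
    -- read-change-write, one row at a time
    FOR r IN SELECT * FROM table1 LOOP
        INSERT INTO table2 (id, checksum, url)      -- placeholder column names
        VALUES (r.id, r.checksum, lower(r.url));    -- lower() stands in for the real conversion
    END LOOP;
END;
$$ LANGUAGE plpgsql;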

what happens is that when we execute the stored procedure, it eats up the 17 GB of free disk space on the server and the server crashes

if we apply a limit of 1 million rows, the execution completes successfully in about a minute

we understand by now that a plpgsql function is executed in a single transaction, which only commits when the function comes to an end

the question is -

are plpgsql functions suitable for such massive operations on data, and can this work without eating up so much disk space for something which should simply be "read-change-write, read-change-write, read-change-write, …"? i.e. is there any way to force a commit inside the function, or something along those lines?
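
(if running a newer server were an option: PostgreSQL 11 and later apparently allow procedures, invoked with CALL, to COMMIT between batches; a rough sketch, assuming an integer key named id and the placeholder columns from above:)

CREATE PROCEDURE convert_in_batches() AS $$
DECLARE
    last_id integer := 0;
    max_id  integer;
BEGIN
    SELECT max(id) INTO max_id FROM table1;
    WHILE last_id < max_id LOOP
        INSERT INTO table2 (id, checksum, url)
        SELECT id, checksum, lower(url)
        FROM table1
        WHERE id > last_id AND id <= last_id + 1000000;
        last_id := last_id + 1000000;
        COMMIT;  -- release each batch instead of holding one huge transaction
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- then: CALL convert_in_batches();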

or should we rather implement this operation in some external scripting language? (speed is not that important; this is a one-time conversion)
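
(in that case the script would presumably just issue one INSERT ... SELECT per batch and commit each in its own transaction; something like the following, again assuming an integer key named id and a batch size of 1 million:)

-- one batch; the driving script supplies the id range and runs
-- each such statement as a separate transaction
INSERT INTO table2 (id, checksum, url)
SELECT id, checksum, lower(url)
FROM table1
WHERE id > 0 AND id <= 1000000;  -- first batch; next 1000000..2000000, etc.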

thank you,
geert
-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


