psql gets "out of memory"

Hi all,
I'm a newbie here. I'm trying to test PostgreSQL with my MySQL data; if the performance is good, I will migrate from MySQL to PostgreSQL. I installed PostgreSQL 9.1rc on my Ubuntu server and am trying to import a large SQL file dumped from MySQL using 'psql -f'. The file is around 30 GB and consists of bulk INSERT commands. The import ran for several hours and then aborted with an "out of memory" error. This is the tail of the log I got (the exact command I'm running is sketched after the log):
INSERT 0 280
INSERT 0 248
INSERT 0 210
INSERT 0 199
invalid command \n
out of memory
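
For reference, this is roughly the command I'm running (the database, dump file, and log file names are placeholders):

    # run the dump through psql and capture its output in a log file;
    # "testdb", "mysqldump.sql", and "import.log" are placeholder names
    psql -d testdb -f mysqldump.sql > import.log 2>&1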

On the server side, the only errors I found are about invalid UTF-8 characters, which I think come from escape sequences in the MySQL export:

2011-08-29 19:19:29 CST ERROR:  invalid byte sequence for encoding "UTF8": 0x00
2011-08-29 19:55:35 CST LOG:  unexpected EOF on client connection
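
If those 0x00 bytes are literal NUL bytes in the dump file, I suppose something like this would strip them before the import (an untested sketch; the file names are placeholders):

    # delete literal NUL (0x00) bytes, which PostgreSQL's UTF8 encoding rejects;
    # "mysqldump.sql" and "mysqldump_nonul.sql" are placeholder names
    tr -d '\000' < mysqldump.sql > mysqldump_nonul.sql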

My understanding is that this is a client-side issue and not related to any server memory setting. But how can I adjust the memory usage of the psql program?

To handle the escape character '\', which is the default in MySQL but not in PostgreSQL, I have already made some rough modifications to the exported SQL dump file: sed "s/,'/,E'/g" | sed 's/\\0/ /g' (the full pipeline is sketched below). I guess there may still be some characters I haven't handled, and that might cause an INSERT command to be split into several invalid psql commands. Could that be the cause of the "out of memory" error?
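
For completeness, this is roughly the conversion I mean, writing a converted copy of the dump that then gets imported (file names are placeholders; the sed expressions are the same rough ones described above):

    # turn string literals into E'' escape strings so MySQL-style backslash
    # escapes are honored, and replace escaped NULs (\0) with spaces;
    # "mysqldump_raw.sql" and "mysqldump.sql" are placeholder names
    sed "s/,'/,E'/g" mysqldump_raw.sql | sed 's/\\0/ /g' > mysqldump.sql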



