".ep" <erick.papa@xxxxxxxxx> writes: > Hello, > > I would like to convert a mysql database with 5 million records and > growing, to a pgsql database. > > All the stuff I have come across on the net has things like > "mysqldump" and "psql -f", which sounds like I will be sitting forever > getting this to work. > > Is there anything else? Where's the "*huge*" database? 5 million records is nothing; I'll run that on my laptop, forget about having a real server... If memory serves, the mysqldump will generate a dump consisting of: 1. Schema information - which will need to get edited a bit to get rid of manifest MySQL-isms. For instance... "TYPE=ISAM PACK_KEYS=1" needs to be trimmed out... There may be some column types that exist in MySQL that do not have the same names in PostgreSQL. Those will need to be changed. 2. It will then consist of a series of INSERT statements. Those will insert mighty slowly if loaded as a transaction apiece. If you add a BEGIN; every once in a while followed by a COMMIT;, perhaps surrounding each table's data, that will cause all that data to be loaded as a single transaction, which will be much quicker. If you could dump out each table in something like tab-delimited form, PostgreSQL could use COPY to load the data, which tends to be way, way, faster. But you only have 5 million records, so it's hardly a large database requiring special measures. -- (reverse (concatenate 'string "ofni.secnanifxunil" "@" "enworbbc")) http://linuxfinances.info/info/finances.html Rules of the Evil Overlord #189. "I will never tell the hero "Yes I was the one who did it, but you'll never be able to prove it to that incompetent old fool." Chances are, that incompetent old fool is standing behind the curtain." <http://www.eviloverlord.com/>