Hi All,
We are evaluating PostgreSQL as a database platform for one of our future
applications. Some tables in the database will contain more than
10,000,000 records, which, as I understand it, should be no problem for
PostgreSQL.
We have been trying to find the most effective/fastest way to manipulate
external data and insert it into the database. Our developer has
written some stored procedures in PL/pgSQL to do all this.
Everything works fine, but we constantly run into the same error.
After inserting about 2,000,000 records we get this:
ERROR: XX001: invalid page header in block 22182 of relation "dunn_main"

Some more log context associated with this error:

CONTEXT: SQL statement "SELECT phone FROM dunn_main WHERE source_id = $1 AND duns = $2"
    PL/pgSQL function "proc_dunn" line 29 at select into variables
LOCATION: ReadBuffer, bufmgr.c:257
STATEMENT: SELECT proc_dunn ('Carriage Transfer','SL2 4BP','211121699','1','01','S','01753-648 900','5531',155,154,75176)

(The "CET" fragments in the original paste were left over from the log line prefix.)
From what I could find about this error, I understand that it basically
indicates data corruption in the table.
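To see where the damage sits on disk, here is a small sketch of the arithmetic (PostgreSQL heap pages are 8 kB by default, so block 22182 starts a fixed offset into the relation's file). The file path in the final command is an assumption: on 8.1 the relation lives at $PGDATA/base/&lt;database oid&gt;/&lt;relfilenode&gt;, with the two numbers looked up in pg_database and pg_class.

```shell
# Sketch: locate the damaged page on disk (8 kB default page size assumed).
BLOCK=22182
PAGESIZE=8192
OFFSET=$((BLOCK * PAGESIZE))
echo "damaged page starts at byte $OFFSET"   # 181714944, roughly 173 MB in

# To dump that one page for inspection (DBOID and RELFILENODE are
# placeholders to be filled in from pg_database.oid and pg_class.relfilenode):
echo dd if=\$PGDATA/base/DBOID/RELFILENODE bs=$PAGESIZE skip=$BLOCK count=1
```

Hex-dumping that page (e.g. piping the dd output through od or xxd) usually shows whether the header is zeroed, shifted, or full of garbage, which can hint at whether the disk, the controller cache, or software wrote the bad data.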
My main question is: why is this occurring?
We are running on an HP DL380 G4 machine running Novell OES SP2,
which is basically a SUSE Linux Enterprise Server 9 setup plus some
Novell extensions. I compiled PostgreSQL 8.1.3 from source.
The test database runs on a single SCSI disk: no mirror, no RAID 5,
nothing. It is, however, connected to an HP Smart Array controller.
The server also runs other PostgreSQL databases under another instance
without (so far) any problems. They are, however, not as big and don't
have a heavy insert/update load.
Does anybody have any clues, or can anyone tell me how to pinpoint the
problem more exactly? Is it hardware, or a bug in PostgreSQL?
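In case it helps others reproduce what I tried: one way to at least get the table readable again while investigating is the zero_damaged_pages setting (a real 8.1 server option, superuser only). A hedged sketch, assuming the corruption is limited to a few pages; rows on the damaged pages are lost, so this is salvage, not repair:

```sql
-- Salvage sketch: pages with invalid headers are zeroed instead of
-- raising an error, so the remaining rows can be copied out.
SET zero_damaged_pages = on;
COPY dunn_main TO '/tmp/dunn_main_salvage.copy';
SET zero_damaged_pages = off;
```

Of course this only treats the symptom; if the corruption comes back after reloading, that would point at the hardware or the write path rather than a one-off incident.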
Any advice would be more than welcome, because I'm afraid the developer
will demand an Oracle installation if I can't fix this (Oracle was
also tested and ran without any issues).
Kind regards,
Jo De Haes.