Hello!
I've managed to import 3,800,000 rows of data into a PostgreSQL database (500 MB of raw CSV, about 2 GB as an SQL table).
It looks like this:
"69110784","69111807","US","UNITED STATES","ILLINOIS","BLOOMINGTON","40.4758","-88.9894","61701","LEVEL 3 COMMUNICATIONS INC","DSL-VERIZON.NET"
"69111808","69112831","US","UNITED STATES","TEXAS","GRAPEVINE","32.9309","-97.0755","76051","LEVEL 3 COMMUNICATIONS INC","DSL-VERIZON.NET"
"69112832","69113087","US","UNITED STATES","TEXAS","DENTON","33.2108","-97.1231","76201","LEVEL 3 COMMUNICATIONS INC","DSL-VERIZON.NET"
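The first two columns are the start and end of an IP range encoded as 32-bit integers (the usual a*2^24 + b*2^16 + c*2^8 + d form). For example, the address I test with below works out like this:

```sql
-- Dotted quad 192.122.252.0 encoded as a single integer;
-- the int8 cast avoids int4 overflow for addresses above 127.255.255.255.
SELECT 192::int8 * 16777216 + 122 * 65536 + 252 * 256 + 0;  -- = 3229285376
```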
CREATE TABLE ipdb2
(
    ipFROM       int4 NOT NULL,
    ipTO         int4 NOT NULL,
    countrySHORT CHARACTER(2) NOT NULL,
    countryLONG  VARCHAR(64) NOT NULL,
    ipREGION     VARCHAR(128) NOT NULL,
    ipCITY       VARCHAR(128) NOT NULL,
    ipLATITUDE   DOUBLE PRECISION,
    ipLONGITUDE  DOUBLE PRECISION,
    ipZIPCODE    VARCHAR(5),
    ipISP        VARCHAR(255) NOT NULL,
    ipDOMAIN     VARCHAR(128) NOT NULL
);
I've indexed the first two columns, ipFROM and ipTO; the indexes are btrees.
The PostgreSQL version is 7.4.8, running at my hosting provider.
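For completeness, the indexes were created along these lines (the index names here are just placeholders; it could equally be a single composite index on both columns):

```sql
CREATE INDEX ipdb2_ipfrom_idx ON ipdb2 (ipFROM);
CREATE INDEX ipdb2_ipto_idx   ON ipdb2 (ipTO);
```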
I query the DB like this:

SELECT * FROM ipdb2 WHERE '3229285376' BETWEEN ipfrom AND ipto;

and get an answer after 3-10 seconds. Is there a way to speed this up somehow?
Are any tweaks or tune-ups possible?
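One idea I had, but haven't benchmarked: since the IP ranges presumably don't overlap, the matching row is the one with the largest ipFROM not above the address, so the planner should be able to walk the ipFROM index backwards and stop at the first hit instead of scanning many rows:

```sql
-- Assumes ranges don't overlap: find the candidate range by ipFROM
-- alone, walking the btree backwards, and verify ipTO on that one row.
SELECT * FROM ipdb2
WHERE  ipFROM <= '3229285376'
  AND  ipTO   >= '3229285376'
ORDER BY ipFROM DESC
LIMIT 1;
```

Would that actually use the index this way on 7.4, or is there a better approach?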
thanks!
----------------
eugene