Re: Problem with 11 M records table

idc danny wrote:
Hi everybody,

I'm fairly new to PostgreSQL and I have a problem with
a query:

SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET
10990000

The table LockerEvents has 11 million records in it, and this query takes
about 60 seconds to complete. Moreover, even after creating an index for
each column in the table, EXPLAIN still shows a sequential scan instead of
using the indexes.

The EXPLAIN is:
"Limit  (cost=100245579.54..100245803.00 rows=10000
width=60) (actual time=58414.753..58482.661 rows=10000
loops=1)"
" -> Seq Scan on "LockerEvents" (cost=100000000.00..100245803.00 rows=11000000
width=60) (actual time=12.620..45463.222 rows=11000000
loops=1)"
"Total runtime: 58493.648 ms"

The table is:

CREATE TABLE "LockerEvents"
(
  "ID" serial NOT NULL,
  "IDMoneySymbol" integer NOT NULL,
  "IDLocker" integer NOT NULL,
  "IDUser" integer NOT NULL,
  "IDEventType" integer NOT NULL,
  "TimeBegin" timestamp(0) without time zone NOT NULL,
  "Notes" character varying(200),
  "Income" double precision NOT NULL DEFAULT 0,
  "IncomeWithRate" double precision NOT NULL DEFAULT
0,
  CONSTRAINT pk_lockerevents_id PRIMARY KEY ("ID"),
  CONSTRAINT fk_lockerevents_ideventtype_eventtypes_id
FOREIGN KEY ("IDEventType")
      REFERENCES "EventTypes" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_lockerevents_idlocker_lockers_id
FOREIGN KEY ("IDLocker")
      REFERENCES "Lockers" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT
fk_lockerevents_idmoneysymbol_moneysymbols_id FOREIGN
KEY ("IDMoneySymbol")
      REFERENCES "MoneySymbols" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_lockerevents_iduser_users_id FOREIGN
KEY ("IDUser")
      REFERENCES "Users" ("ID") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (OIDS=FALSE);


CREATE INDEX idx_col_lockerevents_income
  ON "LockerEvents"
  USING btree
  ("Income");

CREATE INDEX idx_col_lockerevents_incomewithrate
  ON "LockerEvents"
  USING btree
  ("IncomeWithRate");

CREATE INDEX idx_col_lockerevents_notes
  ON "LockerEvents"
  USING btree
  ("Notes");

CREATE INDEX idx_col_lockerevents_timebegin
  ON "LockerEvents"
  USING btree
  ("TimeBegin");

CREATE INDEX idx_fk_lockerevents_ideventtype_eventtypes_id
  ON "LockerEvents"
  USING btree
  ("IDEventType");

CREATE INDEX idx_fk_lockerevents_idlocker_lockers_id
  ON "LockerEvents"
  USING btree
  ("IDLocker");

CREATE INDEX idx_fk_lockerevents_idmoneysymbol_moneysymbols_id
  ON "LockerEvents"
  USING btree
  ("IDMoneySymbol");

CREATE INDEX idx_fk_lockerevents_iduser_users_id
  ON "LockerEvents"
  USING btree
  ("IDUser");

CREATE UNIQUE INDEX idx_pk_lockerevents_id
  ON "LockerEvents"
  USING btree
  ("ID");


If I do the query:
SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET 0
then it completes in under a second - I believe this is because the
sequential scan starts from the beginning of the table.

I need the query to complete in under 10 seconds and I do not know how to
achieve that. Please help me!

Thank you,
Danny


I recall it being mentioned on one of these lists that with OFFSET, all the rows before the offset still have to be read and then discarded. So you may get better results if you use a 'where id > 10000' style clause in the query instead of a large OFFSET.
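For example, a minimal sketch of that keyset approach, assuming the client pages through the table in "ID" order and remembers the largest "ID" from the previous page (the :last_id value below is only a placeholder, not part of the original schema):

-- First page: no offset needed.
SELECT * FROM "LockerEvents"
ORDER BY "ID"
LIMIT 10000;

-- Later pages: start just after the last "ID" already seen, so the
-- unique index on "ID" can be used instead of scanning and discarding
-- millions of rows.
SELECT * FROM "LockerEvents"
WHERE "ID" > :last_id   -- placeholder: the largest "ID" from the previous page
ORDER BY "ID"
LIMIT 10000;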

-salman


