Bad plan on a view

I have a table with a few small numeric fields and several text fields, on PostgreSQL 8.1.2.

The numeric fields are used for searching (category_id, price, etc).
The text fields are just a description of the item, comments, email address, telephone, etc.

So, in order to speed up queries which need a full table scan, I wanted to move the text fields into another table and use a view to make it look like nothing happened. Also, the small table used for searching is far more likely to fit in RAM than the big table holding all the text, which is only used for display.

However, the query plan for the view is sometimes very bad (see below).

Here is a simplified version of my schema, with only a couple of columns:

CREATE TABLE items (
  id SERIAL PRIMARY KEY,
  price FLOAT NULL,
  category INTEGER NOT NULL,
  description TEXT
);

CREATE TABLE items_data (
  id SERIAL PRIMARY KEY,
  price FLOAT NULL,
  category INTEGER NOT NULL
);

CREATE TABLE items_desc (
  id INTEGER NOT NULL REFERENCES items_data(id) ON DELETE CASCADE,
  PRIMARY KEY (id),
  description TEXT
);

INSERT INTO items about 100K rows
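
(For instance, the test rows can be generated with something like the following; this is only a sketch, since the exact values do not matter much, only the row count and the column types. The real category distribution is of course far from uniform.)

INSERT INTO items (price, category, description)
  SELECT (random() * 1000)::float,             -- arbitrary price
         (random() * 50)::int + 1,             -- arbitrary category
         repeat('some descriptive text ', 10)  -- filler standing in for the real description
    FROM generate_series(1, 100000);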

INSERT INTO items_data (id,price,category) SELECT id,price,category FROM items;
INSERT INTO items_desc (id,description) SELECT id,description FROM items;
ALTER TABLE items_data ALTER price SET STATISTICS 100;
ALTER TABLE items_data ALTER category SET STATISTICS 100;
VACUUM ANALYZE;

CREATE VIEW items_view1 AS SELECT a.id, a.price, a.category, b.description FROM items_data a, items_desc b WHERE a.id=b.id;

CREATE VIEW items_view2 AS SELECT a.id, a.price, a.category, b.description FROM items_data a LEFT JOIN items_desc b ON a.id=b.id;

Now, an example query:

** From the plain table

EXPLAIN ANALYZE SELECT * FROM items WHERE price IS NOT NULL AND category=1 ORDER BY price DESC LIMIT 10;
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=10308.21..10308.23 rows=10 width=229) (actual time=391.373..391.379 rows=10 loops=1)
   ->  Sort  (cost=10308.21..10409.37 rows=40466 width=229) (actual time=391.371..391.375 rows=10 loops=1)
         Sort Key: price
         ->  Seq Scan on items  (cost=0.00..4549.57 rows=40466 width=229) (actual time=0.652..91.125 rows=42845 loops=1)
               Filter: ((price IS NOT NULL) AND (category = 1))
 Total runtime: 399.511 ms

** From the data only table (no descriptions)

EXPLAIN ANALYZE SELECT * FROM items_data WHERE price IS NOT NULL AND category=1 ORDER BY price DESC LIMIT 10;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5250.92..5250.95 rows=10 width=16) (actual time=275.765..275.769 rows=10 loops=1)
   ->  Sort  (cost=5250.92..5357.83 rows=42763 width=16) (actual time=275.763..275.766 rows=10 loops=1)
         Sort Key: price
         ->  Seq Scan on items_data  (cost=0.00..1961.58 rows=42763 width=16) (actual time=0.411..57.610 rows=42845 loops=1)
               Filter: ((price IS NOT NULL) AND (category = 1))
 Total runtime: 278.023 ms

It is faster to access the smaller table. Note that this example only moves the description column; with all the other columns of my production table (telephone, email, etc.), which are used for display only and never for searching, the same query takes about 1.2 seconds, simply because the table is a lot larger (yes, it fits in RAM... for now).

Now, let's check out the two views: the plans are exactly the same for both.

EXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND category=1 ORDER BY price DESC LIMIT 10;
                                                                   QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=13827.38..13827.41 rows=10 width=222) (actual time=584.704..584.712 rows=10 loops=1)
   ->  Sort  (cost=13827.38..13934.29 rows=42763 width=222) (actual time=584.703..584.709 rows=10 loops=1)
         Sort Key: a.price
         ->  Merge Left Join  (cost=0.00..7808.02 rows=42763 width=222) (actual time=1.708..285.663 rows=42845 loops=1)
               Merge Cond: ("outer".id = "inner".id)
               ->  Index Scan using items_data_pkey on items_data a  (cost=0.00..2439.74 rows=42763 width=16) (actual time=0.692..86.330 rows=42845 loops=1)
                     Filter: ((price IS NOT NULL) AND (category = 1))
               ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..4585.83 rows=99166 width=210) (actual time=0.038..104.957 rows=99165 loops=1)
 Total runtime: 593.068 ms

Wow. This is a lot slower because it does the big join BEFORE applying the sort.

Here is the query that the view expands to:
SELECT a.id, a.price, a.category, b.description FROM items_data a LEFT JOIN items_desc b ON a.id=b.id WHERE price IS NOT NULL AND category=1 ORDER BY price DESC LIMIT 10;

I would have expected the planner to rewrite it like this:

EXPLAIN ANALYZE SELECT foo.*, b.description FROM (SELECT * FROM items_data a WHERE price IS NOT NULL AND category=1 ORDER BY price DESC LIMIT 10) AS foo LEFT JOIN items_desc b ON foo.id=b.id ORDER BY price DESC LIMIT 10;

This query should be equivalent to the view with the LEFT JOIN, since items_desc.id is a primary key and the left join can neither duplicate nor drop rows. I am aware it is not equivalent to the view with a plain join: a plain join would discard, after the inner LIMIT has already been applied, any of the 10 rows that have no matching description.

                                                                 QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5250.92..5281.31 rows=10 width=222) (actual time=273.300..273.363 rows=10 loops=1)
   ->  Nested Loop Left Join  (cost=5250.92..5281.31 rows=10 width=222) (actual time=273.299..273.361 rows=10 loops=1)
         ->  Limit  (cost=5250.92..5250.95 rows=10 width=16) (actual time=273.267..273.269 rows=10 loops=1)
               ->  Sort  (cost=5250.92..5357.83 rows=42763 width=16) (actual time=273.266..273.267 rows=10 loops=1)
                     Sort Key: a.price
                     ->  Seq Scan on items_data a  (cost=0.00..1961.58 rows=42763 width=16) (actual time=0.423..67.149 rows=42845 loops=1)
                           Filter: ((price IS NOT NULL) AND (category = 1))
         ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..3.01 rows=1 width=210) (actual time=0.006..0.007 rows=1 loops=10)
               Index Cond: ("outer".id = b.id)
 Total runtime: 275.608 ms

The second form is faster, but more importantly, it does nearly all of its I/O in the small table and only fetches the 10 needed rows from the large table. Thus, even if the large table is not in the disk cache, performance is not so bad, which is the whole point of splitting the table behind a view.

With indexes, fast plans are picked, but they all perform the join before doing the sort+limit; an index is only used to provide the ordering when it is on the ORDER BY column. Bitmap index scans also come in to save the day (I love bitmap index scans).

However, I will have a lot of searchable columns and ORDER BY options. Ideally, I would like to create a few indexes for the common searches and order-bys, but I would prefer not to create some 15 indexes on this table, because that would slow down updates. Besides, some of the ORDER BYs are expressions.

A seq scan or an index scan of the small table, followed by a sort and limit, and then a join to the other table, would be more logical.

Suppose I create an index on price and another on category:
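
Something along these lines (the names match the ones that appear in the plans below; the exact definitions are only an illustration):

CREATE INDEX item_data_price ON items_data (price);
CREATE INDEX item_data_cat ON items_data (category);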

EXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND category IN (4,32) ORDER BY price LIMIT 10;
                                                                   QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..31.54 rows=10 width=224) (actual time=0.737..0.964 rows=10 loops=1)
   ->  Nested Loop Left Join  (cost=0.00..112594.96 rows=35700 width=224) (actual time=0.735..0.958 rows=10 loops=1)
         ->  Index Scan using item_data_price on items_data a  (cost=0.00..4566.76 rows=35700 width=16) (actual time=0.696..0.753 rows=10 loops=1)
               Filter: ((price IS NOT NULL) AND ((category = 4) OR (category = 32)))
         ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..3.01 rows=1 width=212) (actual time=0.018..0.018 rows=1 loops=10)
               Index Cond: ("outer".id = b.id)
 Total runtime: 0.817 ms

Now, with a subtly different ORDER BY:

EXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND category IN (4,32) ORDER BY price,category LIMIT 10;
                                                                   QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=12931.79..12931.82 rows=10 width=224) (actual time=1121.426..1121.433 rows=10 loops=1)
   ->  Sort  (cost=12931.79..13021.04 rows=35700 width=224) (actual time=1121.424..1121.428 rows=10 loops=1)
         Sort Key: a.price, a.category
         ->  Merge Left Join  (cost=0.00..7967.65 rows=35700 width=224) (actual time=0.060..530.815 rows=36705 loops=1)
               Merge Cond: ("outer".id = "inner".id)
               ->  Index Scan using items_data_pkey on items_data a  (cost=0.00..2687.66 rows=35700 width=16) (actual time=0.051..116.995 rows=36705 loops=1)
                     Filter: ((price IS NOT NULL) AND ((category = 4) OR (category = 32)))
               ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..4585.83 rows=99166 width=212) (actual time=0.003..205.652 rows=95842 loops=1)
 Total runtime: 1128.972 ms

ORDER BY price, category prevents the index from being used to provide the sort order, and thus a large join is performed. With the rewritten query:

EXPLAIN ANALYZE SELECT foo.*, b.description FROM (SELECT * FROM items_data a WHERE price IS NOT NULL AND category IN (4,32) ORDER BY price,category DESC LIMIT 10) AS foo LEFT JOIN items_desc b ON foo.id=b.id ORDER BY price,category DESC LIMIT 10;
                                                                   QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4229.26..4259.64 rows=10 width=224) (actual time=222.353..222.410 rows=10 loops=1)
   ->  Nested Loop Left Join  (cost=4229.26..4259.64 rows=10 width=224) (actual time=222.352..222.405 rows=10 loops=1)
         ->  Limit  (cost=4229.26..4229.28 rows=10 width=16) (actual time=222.318..222.324 rows=10 loops=1)
               ->  Sort  (cost=4229.26..4318.51 rows=35700 width=16) (actual time=222.317..222.322 rows=10 loops=1)
                     Sort Key: a.price, a.category
                     ->  Bitmap Heap Scan on items_data a  (cost=239.56..1529.69 rows=35700 width=16) (actual time=6.926..34.018 rows=36705 loops=1)
                           Recheck Cond: ((category = 4) OR (category = 32))
                           Filter: (price IS NOT NULL)
                           ->  BitmapOr  (cost=239.56..239.56 rows=37875 width=0) (actual time=6.778..6.778 rows=0 loops=1)
                                 ->  Bitmap Index Scan on item_data_cat  (cost=0.00..229.61 rows=36460 width=0) (actual time=6.295..6.295 rows=36400 loops=1)
                                       Index Cond: (category = 4)
                                 ->  Bitmap Index Scan on item_data_cat  (cost=0.00..9.95 rows=1415 width=0) (actual time=0.482..0.482 rows=1340 loops=1)
                                       Index Cond: (category = 32)
         ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..3.01 rows=1 width=212) (actual time=0.006..0.006 rows=1 loops=10)
               Index Cond: ("outer".id = b.id)
 Total runtime: 224.476 ms

It is not very fast (the sort takes most of the time), but it is still a lot faster!

Now, what should I do?...






