Re: [EXTERNAL] Re: Need help with slow query - postgres 12 on linux RHEL 8


 



I added an index to the granule_file table and ran a few explains - here are the results. Thanks for taking a look!!
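
(For reference, the plans below were captured with the explain form depesz asked for; I'm only showing the invocation here since the SELECT itself is long:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;  -- the full query joining granule, granule_file, file, collection and visibility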

-------------------------------------
Here is the explain (analyze, buffers) output before adding the index:

Sort  (cost=3208325.03..3208325.33 rows=117 width=997) (actual time=56683.259..56683.307 rows=4 loops=1)
   Sort Key: granule.uuid
   Sort Method: quicksort  Memory: 32kB
   Buffers: shared hit=40 read=795724, temp read=630171 written=630171
   ->  Hash Left Join  (cost=1844145.13..3208321.02 rows=117 width=997) (actual time=56683.080..56683.184 rows=4 loops=1)
         Hash Cond: (granule.visibility_id = visibility_1.id)
         Buffers: shared hit=37 read=795724, temp read=630171 written=630171
         ->  Hash Right Join  (cost=1844142.96..3208260.02 rows=117 width=1678) (actual time=56682.840..56682.891 rows=4 loops=1)
               Hash Cond: (granule_file_1.granule_uuid = granule.uuid)
               Buffers: shared hit=36 read=795724, temp read=630171 written=630171
               ->  Hash Join  (cost=1752547.97..3034700.90 rows=21856786 width=224) (actual time=21966.799..55011.964 rows=21855206 loops=1)
                     Hash Cond: (granule_file_1.file_id = file_1.id)
                     Buffers: shared hit=2 read=794153, temp read=630171 written=630171
                     ->  Seq Scan on granule_file granule_file_1  (cost=0.00..357270.86 rows=21856786 width=20) (actual time=0.334..3267.188 rows=21855206 loops=1)
                           Buffers: shared read=138703
                     ->  Hash  (cost=871329.32..871329.32 rows=21587732 width=208) (actual time=13425.791..13425.795 rows=21587732 loops=1)
                           Buckets: 8388608  Batches: 8  Memory Usage: 710896kB
                           Buffers: shared hit=2 read=655450, temp written=537221
                           ->  Seq Scan on file file_1  (cost=0.00..871329.32 rows=21587732 width=208) (actual time=0.277..5520.726 rows=21587732 loops=1)
                                 Buffers: shared hit=2 read=655450
               ->  Hash  (cost=91594.67..91594.67 rows=26 width=1470) (actual time=189.702..189.736 rows=4 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 13kB
                     Buffers: shared hit=34 read=1571
                     ->  Nested Loop Left Join  (cost=91434.88..91594.67 rows=26 width=1470) (actual time=189.653..189.704 rows=4 loops=1)
                           Buffers: shared hit=34 read=1571
                           ->  Limit  (cost=91434.60..91434.67 rows=26 width=1414) (actual time=189.444..189.473 rows=4 loops=1)
                                 Buffers: shared hit=23 read=1570
                                 ->  Sort  (cost=91434.60..91446.86 rows=4903 width=1414) (actual time=189.441..189.462 rows=4 loops=1)
                                       Sort Key: granule.uuid
                                       Sort Method: quicksort  Memory: 32kB
                                       Buffers: shared hit=23 read=1570
                                       ->  Nested Loop  (cost=0.56..91294.86 rows=4903 width=1414) (actual time=22.534..189.403 rows=4 loops=1)
                                             Buffers: shared hit=23 read=1570
                                             ->  Seq Scan on collection  (cost=0.00..653.62 rows=1 width=4) (actual time=3.706..14.783 rows=4 loops=1)
                                                   Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))
                                                   Rows Removed by Filter: 2477
                                                   Buffers: shared hit=2 read=602
                                             ->  Index Scan using ix_granule_collection_id on granule  (cost=0.56..90455.52 rows=18572 width=1414) (actual time=21.662..43.645 rows=1 loops=4)
                                                   Index Cond: (collection_id = collection.id)
                                                    Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text)))
                                                   Rows Removed by Filter: 243
                                                   Buffers: shared hit=21 read=968
                           ->  Index Scan using collection_pkey on collection collection_1  (cost=0.28..6.14 rows=1 width=56) (actual time=0.052..0.052 rows=1 loops=4)
                                 Index Cond: (id = granule.collection_id)
                                 Buffers: shared hit=11 read=1
         ->  Hash  (cost=1.52..1.52 rows=52 width=16) (actual time=0.054..0.054 rows=52 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 11kB
               Buffers: shared hit=1
               ->  Seq Scan on visibility visibility_1  (cost=0.00..1.52 rows=52 width=16) (actual time=0.032..0.036 rows=52 loops=1)
                     Buffers: shared hit=1
 Planning Time: 14.580 ms
 Execution Time: 56764.259 ms
(52 rows)


Then I added the index and vacuum-analyzed the table:
CREATE INDEX granule_file_file_id_key ON granule_file USING btree(file_id);
CREATE INDEX

vacuum (analyze, verbose) granule_file;
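
(A quick sanity check that the new index exists and whether anything ever scans it - this is just the standard pg_stat_user_indexes view, nothing schema-specific beyond the table name:)

SELECT indexrelname, idx_scan, idx_tup_read
  FROM pg_stat_user_indexes
 WHERE relname = 'granule_file';
-- if idx_scan for granule_file_file_id_key never moves, the planner isn't using the new index for this query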


& here's the new plan with the index in place:

 Sort  (cost=3208262.52..3208262.79 rows=105 width=997) (actual time=64720.414..64720.435 rows=4 loops=1)
   Sort Key: granule.uuid
   Sort Method: quicksort  Memory: 32kB
   Buffers: shared hit=140349 read=655418, temp read=630171 written=630171
   ->  Hash Left Join  (cost=1844145.13..3208259.00 rows=105 width=997) (actual time=64720.258..64720.325 rows=4 loops=1)
         Hash Cond: (granule.visibility_id = visibility_1.id)
         Buffers: shared hit=140346 read=655418, temp read=630171 written=630171
         ->  Hash Right Join  (cost=1844142.96..3208204.03 rows=105 width=1678) (actual time=64720.083..64720.105 rows=4 loops=1)
               Hash Cond: (granule_file_1.granule_uuid = granule.uuid)
               Buffers: shared hit=140345 read=655418, temp read=630171 written=630171
               ->  Hash Join  (cost=1752547.97..3034652.34 rows=21854840 width=224) (actual time=11945.807..63203.012 rows=21855206 loops=1)
                     Hash Cond: (granule_file_1.file_id = file_1.id)
                     Buffers: shared hit=138740 read=655418, temp read=630171 written=630171
                     ->  Seq Scan on granule_file granule_file_1  (cost=0.00..357251.40 rows=21854840 width=20) (actual time=0.017..3103.893 rows=21855206 loops=1)
                           Buffers: shared hit=138703
                     ->  Hash  (cost=871329.32..871329.32 rows=21587732 width=208) (actual time=11891.143..11891.146 rows=21587732 loops=1)
                           Buckets: 8388608  Batches: 8  Memory Usage: 710896kB
                           Buffers: shared hit=34 read=655418, temp written=537221
                           ->  Seq Scan on file file_1  (cost=0.00..871329.32 rows=21587732 width=208) (actual time=0.081..3996.438 rows=21587732 loops=1)
                                 Buffers: shared hit=34 read=655418
               ->  Hash  (cost=91594.67..91594.67 rows=26 width=1470) (actual time=19.728..19.740 rows=4 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 13kB
                     Buffers: shared hit=1605
                     ->  Nested Loop Left Join  (cost=91434.88..91594.67 rows=26 width=1470) (actual time=19.684..19.708 rows=4 loops=1)
                           Buffers: shared hit=1605
                           ->  Limit  (cost=91434.60..91434.67 rows=26 width=1414) (actual time=19.650..19.660 rows=4 loops=1)
                                 Buffers: shared hit=1593
                                 ->  Sort  (cost=91434.60..91446.86 rows=4903 width=1414) (actual time=19.648..19.656 rows=4 loops=1)
                                       Sort Key: granule.uuid
                                       Sort Method: quicksort  Memory: 32kB
                                       Buffers: shared hit=1593
                                       ->  Nested Loop  (cost=0.56..91294.86 rows=4903 width=1414) (actual time=2.765..19.609 rows=4 loops=1)
                                             Buffers: shared hit=1593
                                             ->  Seq Scan on collection  (cost=0.00..653.62 rows=1 width=4) (actual time=1.789..8.057 rows=4 loops=1)
                                                   Filter: (((entry_id)::text ~~ 'AJAX_CO2_CH4_1'::text) OR ((entry_id)::text ~~ 'AJAX_O3_1'::text) OR ((entry_id)::text ~~ 'AJAX_CH2O_1'::text) OR ((entry_id)::text ~~ 'AJAX_MMS_1'::text))
                                                   Rows Removed by Filter: 2477
                                                   Buffers: shared hit=604
                                             ->  Index Scan using ix_granule_collection_id on granule  (cost=0.56..90455.52 rows=18572 width=1414) (actual time=1.311..2.881 rows=1 loops=4)
                                                   Index Cond: (collection_id = collection.id)
                                                    Filter: (is_active AND (((properties #>> '{temporal_extent,range_date_times,0,beginning_date_time}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) > '2015-10-06T23:59:59+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,start_date}'::text[]) > '2015-10-06T23:59:59+00:00'::text)) AND (((properties #>> '{temporal_extent,range_date_times,0,end_date_time}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,single_date_times,0}'::text[]) < '2015-10-09T00:00:00+00:00'::text) OR ((properties #>> '{temporal_extent,periodic_date_times,0,end_date}'::text[]) < '2015-10-09T00:00:00+00:00'::text)))
                                                   Rows Removed by Filter: 243
                                                   Buffers: shared hit=989
                           ->  Index Scan using collection_pkey on collection collection_1  (cost=0.28..6.14 rows=1 width=56) (actual time=0.008..0.008 rows=1 loops=4)
                                 Index Cond: (id = granule.collection_id)
                                 Buffers: shared hit=12
         ->  Hash  (cost=1.52..1.52 rows=52 width=16) (actual time=0.045..0.045 rows=52 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 11kB
               Buffers: shared hit=1
               ->  Seq Scan on visibility visibility_1  (cost=0.00..1.52 rows=52 width=16) (actual time=0.026..0.029 rows=52 loops=1)
                     Buffers: shared hit=1
 Planning Time: 7.354 ms
 Execution Time: 64789.927 ms
(52 rows)


On 12/28/23, 7:49 AM, "depesz@xxxxxxxxxx on behalf of hubert depesz lubaczewski" <depesz@xxxxxxxxxx> wrote:


On Wed, Dec 27, 2023 at 09:40:41PM +0000, Wilson, Maria Louise (LARC-E301)[RSES] wrote:
> Hello folks!
>
> I am having a complex query slowing over time increasing in duration.
> If anyone has a few cycles that they could lend a hand or just point
> me in the right direction with this – I would surely appreciate it!
> Fairly beefy Linux server with Postgres 12 (latest) – this particular
> query has been getting slower over time & seemingly slowing everything
> else down. The server is dedicated entirely to this particular
> database. Let me know if I can provide any additional information!!
> Thanks in advance!


Hi,
we'd need `explain (analyze, buffers) select ...` for your query.


Ideally, could you put it on https://explain.depesz.com/, with query and
`\d`, and provide us with link?


Best regards,


depesz








