
Re: Regarding brin_index on required column of the table


 





On Thu, Sep 20, 2018 at 7:25 PM Durgamahesh Manne <maheshpostgres9@xxxxxxxxx> wrote:


On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer <andreas@xxxxxxxxxxxxxxx> wrote:


On 20.09.2018 at 13:11, Durgamahesh Manne wrote:
> The query executed quickly without DISTINCT.
>
> With DISTINCT, the same query was taking around 7 minutes to complete:
>
> select distinct sub_head."vchSubmittersCode", rec."vchFileName",
>        rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
>        sub_head."vchValuationDate", rec."vchAdvisorLabel"
> from table1 rec
> join table2 sub_head
>   on rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
> where rec."bFetch" = false and sub_head."bFetch" = false;
>
> I need the DISTINCT query to run in less time; it is still slow even
> though I have created indexes on the required columns of the tables.
>

>  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=3700000 loops=1)
>    ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=40500000 loops=1)
>          Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>          Sort Method: external merge  Disk: 3923224kB

As you can see, there are 40,500,000 rows to sort in order to filter out the
duplicates; the result contains 'only' 3,700,000 rows. For this step the
database needs nearly 4 GB on disk. That will, of course, take some time.

If you have enough RAM, you can try setting work_mem to 5 or 6 GB to change
the plan to an in-memory sort. But keep in mind, this is dangerous! If the
machine doesn't have enough free RAM, the kernel's OOM killer may start
killing processes.
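For reference, a session-scoped change could look roughly like the sketch
below (the 6 GB value is only the suggestion above and needs tuning to your
actual free RAM; table1/table2 are the names from the query in this thread):

    -- Raise work_mem only for the current session, not in postgresql.conf,
    -- so other backends keep the default and total memory use stays bounded.
    SET work_mem = '6GB';

    -- Re-run the query; with enough work_mem the sort should stay in memory
    -- (Sort Method: quicksort instead of external merge).
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT sub_head."vchSubmittersCode", rec."vchFileName",
           rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
           sub_head."vchValuationDate", rec."vchAdvisorLabel"
    FROM table1 rec
    JOIN table2 sub_head
      ON rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
    WHERE rec."bFetch" = false AND sub_head."bFetch" = false;

    -- Revert to the server default afterwards.
    RESET work_mem;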

What kind of disks do you have? Maybe you can use a separate, fast SSD for
temp_tablespaces?
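A sketch of how that could look, assuming an SSD mounted at /mnt/ssd_tmp with
a directory owned by the postgres OS user (the path and tablespace name here
are made-up placeholders):

    -- Create a tablespace on the fast disk (the directory must already exist,
    -- be empty, and be owned by the postgres OS user).
    CREATE TABLESPACE temp_ssd LOCATION '/mnt/ssd_tmp/pg_temp';

    -- Send temporary files (sort spills, hash spills, temp tables) there,
    -- either per session ...
    SET temp_tablespaces = 'temp_ssd';

    -- ... or for the whole cluster.
    ALTER SYSTEM SET temp_tablespaces = 'temp_ssd';
    SELECT pg_reload_conf();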


Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com



Hi 

sdb[HDD]
sdc[HDD]
sda[HDD] 

I checked in Linux; all of these drives are HDDs.

Regards



Hi,

The DISTINCT query now executes very fast, as I have temporarily increased work_mem to 3 GB.
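(For what it's worth, one way to keep such an increase strictly temporary,
shown only as a sketch rather than what was actually done here, is to scope
it to a single transaction with SET LOCAL:

    BEGIN;
    -- SET LOCAL only lasts until COMMIT/ROLLBACK, so the 3 GB setting
    -- cannot leak into later queries on this connection.
    SET LOCAL work_mem = '3GB';

    SELECT DISTINCT sub_head."vchSubmittersCode", rec."vchFileName",
           rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
           sub_head."vchValuationDate", rec."vchAdvisorLabel"
    FROM table1 rec
    JOIN table2 sub_head
      ON rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
    WHERE rec."bFetch" = false AND sub_head."bFetch" = false;

    COMMIT;
)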

Thank you very much for this valuable information 

Now I would like to ask one question related to built-in BDR replication.

When will built-in BDR replication be available for use in production?

Can I use v3 built-in replication in production?

Please let me know about the configuration of BDR v3 built-in replication.



Regards

Durgamahesh Manne 
