
Re: Archiving Data to Another DB?

On 04/11/2018 11:59 AM, Kumar, Virendra wrote:
Does this apply to SELECT calls as well, or only to DML?

Easy enough to test:

test=# \d projection

Foreign table "public.projection"
  ...

Server: fdw_test_server
FDW options: (schema_name 'public', table_name 'projection')
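For anyone wanting to reproduce this, a foreign table like the one above is set up roughly as follows. This is only a minimal sketch: the server and table names are taken from the \d output, while the connection options, user mapping and credentials are placeholders.

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Placeholder connection options; point these at the real remote server.
CREATE SERVER fdw_test_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'remote_db', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER fdw_test_server
    OPTIONS (user 'remote_user', password 'remote_pw');

-- Import the remote definition instead of typing out the columns.
IMPORT FOREIGN SCHEMA public LIMIT TO (projection)
    FROM SERVER fdw_test_server INTO public;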

test=# select count(*) from projection ;
 count
-------
 28430
(1 row)

With log_statement = 'all':

select * from projection ;

postgres-2018-04-11 13:04:33.871 PDT-0LOG:  statement: select * from projection ;
postgres-2018-04-11 13:04:33.872 PDT-0LOG:  statement: START TRANSACTION ISOLATION LEVEL REPEATABLE READ
postgres-2018-04-11 13:04:33.872 PDT-0LOG:  execute <unnamed>: DECLARE c1 CURSOR FOR SELECT line_id, p_item_no, c_id, method, year, qty, sub_method, proj_note, item_key, pot_ct, trial, ts_insert, ts_update, user_insert, user_update, link_key, v_number FROM public.projection
postgres-2018-04-11 13:04:33.872 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.873 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.874 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.875 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.876 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.877 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.878 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.879 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.880 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.881 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.882 PDT-0LOG:  statement: FETCH 100 FROM c1

...
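The 100-row batches above are postgres_fdw's default cursor fetch size. If the number of round trips for large SELECTs is a concern, the batch size can be raised with the fetch_size option, either for the whole server or per table (the value here is just an example):

ALTER SERVER fdw_test_server OPTIONS (ADD fetch_size '10000');

-- or only for this foreign table:
ALTER FOREIGN TABLE projection OPTIONS (ADD fetch_size '10000');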

test=# explain analyse select * from projection ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Foreign Scan on projection (cost=100.00..115.34 rows=178 width=435) (actual time=0.844..163.493 rows=28430 loops=1)
 Planning time: 0.077 ms
 Execution time: 164.735 ms
(3 rows)
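Note the estimated row count (rows=178) versus the actual one (rows=28430). If the local planner needs better numbers, the estimate can be improved by analysing the foreign table locally, or by letting the remote server supply estimates at plan time; neither was done for this test:

ANALYZE projection;

-- or:
ALTER SERVER fdw_test_server OPTIONS (ADD use_remote_estimate 'true');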



I am planning to use postgres_fdw, but if it is going one row at a time there will be a lot of round trips, which will defeat the purpose.

Regards,

Virendra.
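One way to keep the archive copy itself set-based is to run the transfer from the destination database, where the foreign table points back at the source: the foreign table is then only read (in fetch_size batches) and the INSERT stays local. A rough sketch, with a hypothetical local archive table and cutoff:

-- Run on the archive database; "projection" is the foreign table.
BEGIN;
INSERT INTO projection_archive
SELECT * FROM projection
WHERE year < 2017;    -- hypothetical cutoff
COMMIT;

The DELETE of the archived rows would still have to run directly on the source database (or via dblink) to avoid the per-row DELETEs through the FDW that Don describes below.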

*From:* Don Seiler [mailto:don@xxxxxxxxx]
*Sent:* Wednesday, April 11, 2018 2:53 PM
*To:* Adrian Klaver
*Cc:* pgsql-general@xxxxxxxxxxxxxx
*Subject:* Re: Archiving Data to Another DB?

On Wed, Apr 11, 2018 at 1:38 PM, Don Seiler <don@xxxxxxxxx> wrote:

Yeah, I saw the same with a 132 row insert. Now imagine that with a monthly 50 million row insert or delete. :p Thanks for the confirmation!

I went back to look at the postgres logs on my dev server. These logs are rotated once they hit 2G in size. One typical log from the middle of my test last night hit that in 13 minutes and had over 5.2 million DELETE calls. There are quite a few logs like this. That would not be fun for disk space on the log volume, either.

Don.

--

Don Seiler
www.seiler.us




--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx



