
Re: Foreign table performance issue / PostgreSQL vs. ORACLE

Hi Ingolf,

> On 29. Jan 2021, at 13:56, Markhof, Ingolf <ingolf.markhof@xxxxxxxxxxxxxx> wrote:
> 
> Hi!
>  
> I am struggling with the slow performance when running queries referring to foreign tables. – Yes, I know… - Please read the whole story!

Done and it rings a bell or two.

> The set-up basically is a production database and a reporting database. As names indicate, the production database is used for production, the reporting database is for analysis. On the reporting database, the only way to access product data is via foreign tables that link to the related production tables.
>  
> Now, while some queries on the reporting service run fine, some don't even return any data after hours.
>  
> However, the same set-up worked fine in Oracle before. Reporting wasn't always fast, but it delivered results in acceptable time. A query executed on the Oracle reporting server returns data in e.g. 30 seconds. But running the query translated to PostgreSQL on the PostgreSQL DB does not deliver a single row after hours (!) of run time.
>  
> So, I wonder: Is there a fundamental difference between Oracle database links and foreign tables in PostgreSQL that could explain the different run times? Could there be some tuning option in PostgreSQL to make queries via foreign tables faster (e.g. I heard about option fetch_size)?

You did not mention it explicitly, but I assume you are using postgres_fdw to connect from reporting (R) to production (P). Thomas and Tom already mentioned incomplete, non-existing, or non-applicable filter pushdown. I want to add another probable root cause that could explain the behavior you are seeing.
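
As a first step, you can verify what actually gets pushed down to P by looking at the "Remote SQL" line in a verbose plan on R. A minimal sketch (the table and filter names are made up, adapt them to your schema):

    -- on the reporting database R
    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT *
    FROM   orders_ft                          -- foreign table via postgres_fdw
    WHERE  order_date >= DATE '2021-01-01';
    -- The "Remote SQL:" line in the output shows the statement sent to P.
    -- If your WHERE clauses or join conditions are missing there, they are
    -- evaluated locally on R after all rows have been pulled over the wire.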

postgres_fdw opens a CURSOR on P to execute the query. While this guarantees transactional consistency, it also rules out parallelism: PostgreSQL will not choose a parallel plan for a query executed through a server-side cursor.
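
Regarding the fetch_size option you mentioned: it only controls how many rows each FETCH on that cursor retrieves at once, so it can reduce round trips but does not change the plan chosen on P. A hedged example of setting it (the server and table names are placeholders):

    -- on R; "production_srv" and "orders_ft" are made-up names
    ALTER SERVER production_srv OPTIONS (ADD fetch_size '10000');
    -- or per foreign table:
    ALTER FOREIGN TABLE orders_ft OPTIONS (ADD fetch_size '10000');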

As a result, depending on the size of the tables, the indexes available, and which filters are (or are not) pushed down, this can easily lead to very slow queries. IMO, the worst case is a full sequential scan of the target table without any filtering, executed by a single worker.

Of course, you can try to optimize the schemas on P and the queries on R so that more filters are pushed down, and thereby get faster execution. However, I doubt this will cover your entire workload, i.e. there will likely always be performance gaps.

The parallelism issue is, in theory, fixable by partitioning the tables on P and pointing multiple postgres_fdw-backed child tables on R at those partitions (sketched below). However, this only helps if postgres_fdw is patched to implement "IsForeignScanParallelSafe" (see [1] for a possible implementation). Without that method there is still no parallelism: the partitions are scanned one after another, and you will not see a performance gain.
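
To illustrate the layout (a sketch only; all names are invented, and as said above, parallelism across the partitions still requires the patch from [1]):

    -- on R: a partitioned table whose partitions are foreign tables on P
    CREATE TABLE orders_r (order_id bigint, order_date date)
        PARTITION BY RANGE (order_date);

    CREATE FOREIGN TABLE orders_r_2020
        PARTITION OF orders_r
        FOR VALUES FROM ('2020-01-01') TO ('2021-01-01')
        SERVER production_srv
        OPTIONS (schema_name 'public', table_name 'orders_2020');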

Lastly, there are proprietary extensions available that (re-)enable PostgreSQL parallelism with cursors. Such an extension could potentially fix your performance issue; however, I have not yet tried one with a setup similar to yours.

Cheers,
Sebastian


[1]: https://github.com/swarm64/parallel-postgres-fdw-patch


--

Sebastian Dressler, Solution Architect, Swarm64 AS
+49 30 994 0496 72 | sebastian@xxxxxxxxxxx




