Hi Kenneth, Andreas,
Thanks for your tips!
I increased shared_buffers to 8GB, but it had no measurable effect. That seems logical to me: shared buffers matter for querying, not for inserting; for inserts the write speed of the disk seems to dominate, and there is little point in caching data when the commit has to write it out in full anyway.
I also changed the code to do only a single commit; this too had no effect that I can see.
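For the record, the single-commit version is essentially the following pattern (table and column names are simplified placeholders, not our real schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials. reWriteBatchedInserts is something I
            // still want to test; it lets the driver rewrite a batch into
            // multi-row inserts on the wire.
            String url = "jdbc:postgresql://localhost:5432/dwh?reWriteBatchedInserts=true";
            try (Connection conn = DriverManager.getConnection(url, "dwh", "secret")) {
                conn.setAutoCommit(false);          // one transaction for the whole load
                try (PreparedStatement ps = conn.prepareStatement(
                        "insert into fact_table(id, payload) values (?, ?)")) {
                    for (int i = 0; i < 1_000_000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();
                        if (i % 10_000 == 0)
                            ps.executeBatch();      // flush periodically to bound client memory
                    }
                    ps.executeBatch();
                }
                conn.commit();                      // single commit at the end
            }
        }
    }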
It is true that Oracle had more memory assigned to it (1.5GB), but unlike Postgres (which sits entirely on a fast SSD), Oracle runs on slower disks (ZFS).
I will try COPY, but I first need to investigate how to use it; its interface seems odd, to say the least ;) I'll report back once that's done.
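From a first look at the driver, COPY seems to be exposed through org.postgresql.copy.CopyManager. If I read it correctly, the usage would be roughly this (placeholder names again; a real loader would stream the data instead of building it in memory):

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoad {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/dwh", "dwh", "secret")) {
                CopyManager cm = conn.unwrap(PGConnection.class).getCopyAPI();

                // Build rows in COPY's default text format: columns separated
                // by tabs, rows terminated by newlines.
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 100_000; i++)
                    sb.append(i).append('\t').append("row ").append(i).append('\n');

                long rows = cm.copyIn(
                        "COPY fact_table(id, payload) FROM STDIN",
                        new StringReader(sb.toString()));
                System.out.println("Copied " + rows + " rows");
            }
        }
    }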
Any other tips would be welcome!
Regards,
Frits
On Fri, Jun 9, 2017 at 3:30 PM Kenneth Marshall <ktm@xxxxxxxx> wrote:
On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:
>
>
> Am 09.06.2017 um 15:04 schrieb Frits Jalvingh:
> >Hi all,
> >
> >I am trying to improve the runtime of a big data warehouse
> >application. One significant bottleneck found was insert
> >performance, so I am investigating ways of getting Postgresql to
> >insert data faster.
>
> * use COPY instead of INSERT, it is much faster
> * bundle all INSERTs into one transaction
> * use a separate disk/spindle for the transaction log
>
>
>
> >
> >I already changed the following config parameters:
> >work_mem 512MB
> >synchronous_commit off
> >shared_buffers 512MB
> >commit_delay 100000
> >autovacuum_naptime 10min
> >
> >Postgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K
> >with 16GB memory and an Intel 750 SSD. JDBC driver is
> >postgresql-42.1.1.
> >
>
> increase shared_buffers; with 16GB RAM I would suggest 8GB
+1. Without even checking, I think Oracle is configured to use a LOT
more memory than 512MB.
Regards,
Ken