Greetings, everybody!
I recently made an observation that I cannot explain: why can insertion throughput drop as the batch size increases? Here is a brief description of the experiment.
- PostgreSQL 9.5.4 as server
- https://github.com/sfackler/rust-postgres library as client driver
- one relation with two indices (schema in the attached postgres.sql; a hypothetical sketch follows below)
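Since the actual schema is only in the attachment, here is a minimal sketch of the shape it might take. Everything here is a placeholder, not taken from postgres.sql: the measurements table, column and index names are hypothetical, and the API calls assume the pre-tokio rust-postgres crate (Connection::connect, TlsMode, batch_execute):

    extern crate postgres;

    use postgres::{Connection, TlsMode};

    fn main() {
        let conn = Connection::connect("postgres://postgres@localhost/test", TlsMode::None)
            .unwrap();
        // One relation plus two secondary indices, matching the setup above.
        // Hypothetical names; the real schema is in the attached postgres.sql.
        conn.batch_execute(
            "CREATE TABLE measurements (
                 ts  TIMESTAMPTZ NOT NULL,
                 val DOUBLE PRECISION NOT NULL
             );
             CREATE INDEX measurements_ts_idx  ON measurements (ts);
             CREATE INDEX measurements_val_idx ON measurements (val);",
        ).unwrap();
    }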
Experiment steps:
- populate the DB with 259,200,000 random records
- run insertions for 60 seconds with one client thread and batch size m (a sketch of this loop follows below)
- record insertions per second (ips) in the client code
Then plot the median ips as a function of m for m in [2^0, 2^1, ..., 2^15] (plot in the attached postgres.png).
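The measurement loop might look roughly like the sketch below. It assumes the hypothetical measurements table from the previous snippet and batches by sending one multi-row INSERT per call; the real client may batch differently (e.g. m single-row statements inside one transaction), so this only illustrates how ips is computed:

    extern crate postgres;

    use postgres::Connection;
    use std::time::{Duration, Instant};

    // Run batched inserts for 60 seconds and return the observed ips.
    fn measure_ips(conn: &Connection, m: usize) -> f64 {
        // Build "INSERT ... VALUES (now(), random()), ..." with m tuples;
        // the real experiment generates its random records client-side.
        let tuples = vec!["(now(), random())"; m].join(", ");
        let sql = format!("INSERT INTO measurements (ts, val) VALUES {}", tuples);
        let stmt = conn.prepare(&sql).unwrap();

        let start = Instant::now();
        let mut inserted = 0u64;
        while start.elapsed() < Duration::from_secs(60) {
            // execute() returns the number of rows affected, i.e. m per call.
            inserted += stmt.execute(&[]).unwrap();
        }
        inserted as f64 / start.elapsed().as_secs() as f64
    }

Calling measure_ips for each m in 1, 2, 4, ..., 32768 and taking the median over repeated runs yields the plotted curve.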
On the figure one can see that between m = 128 and m = 256 the throughput drops from about 13,000 ips to about 5,000 ips.
I hope someone can help me understand the reason for this behavior.
--
Best regards
Filonov Pavel
Attachment: postgres.sql (application/sql)
Attachment: postgres.png (PNG image)