Hi, I have a rather odd case.
I have a "cache" table in Postgres and I need to insert 100K–1M records in parallel from different sources. Sources may try to insert duplicate data.
I'm too lazy to write complex synchronization code around the INSERT process, which is why I do it this way:
1. create an UNLOGGED table WITH (autovacuum_enabled = false)
2. run INSERT INTO table (foo, bar) VALUES (1, 2) ON CONFLICT DO NOTHING for each row (a minimal sketch of the whole setup follows this list)
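Roughly what I mean, sketched with placeholder table and column names (my real table has ~150 columns, not three):

```sql
-- Hypothetical schema for illustration only.
CREATE UNLOGGED TABLE cache_table (
    id  bigint PRIMARY KEY,   -- the single PK I can't drop
    foo integer,
    bar integer
    -- ... ~150 columns in the real table
) WITH (autovacuum_enabled = false);

-- Each source then issues, per row:
INSERT INTO cache_table (id, foo, bar)
VALUES (1, 1, 2)
ON CONFLICT DO NOTHING;
```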
I'm fine with lower performance compared to the COPY command, since writing synchronization code is 100 times more expensive than a slow INSERT.
BUT the performance is waaay too slow.
It takes around 10,000 ms to insert 50,000 rows, i.e. roughly 5,000 rows per second, or 0.2 ms per row.
Each row has 150 columns.
The table has a single PK (I can't drop it).
Why is it so slow?
Can I do something about it?