Hi all,
I have a query that runs much slower in Postgres on Windows than on
Linux, and I'm so far unable to explain why - the execution plans are
identical and the hardware is of roughly the same caliber.
Using EXPLAIN ANALYZE on the database running on Windows I get
-> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1
width=295) (actual time=0.075..0.075 rows=0 loops=229227)
The server is Postgres 12, and for reasons outside of my control it runs
on Windows Server 2012 on a virtual machine. It has 4 cores and 32 GB
RAM allocated on a Xeon E5-4660 v4, and running winsat shows
satisfactory disk and memory bandwidth and CPU performance.
If I copy the database to my laptop running Linux (Postgres 12 on Fedora
32, i7-9750H, 16 GB RAM) I get the exact same execution plan. EXPLAIN
ANALYZE says
-> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1
width=295) (actual time=0.008..0.008 rows=0 loops=229227)
Note that the index scan is more than 9 times faster per loop on my
laptop, and the entire query executes about 12 times faster. I realize
that each core in the laptop CPU is faster than a server core and that
virtualization doesn't help performance, but I wouldn't expect that to
make the Windows box 10 times slower.
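In absolute terms that's roughly 229227 loops x 0.075 ms, or about 17
seconds, spent in that index scan on Windows, against 229227 x 0.008 ms,
or about 1.8 seconds, on the laptop.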
The table is freshly vacuumed. It has about 10M rows and takes about
2.6 GB of disk space; the index is about 600 MB. Everything is cached;
there's basically no disk I/O happening while the query is executing.
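In case it matters, shared_buffers residency for the table and index can
be checked with a query along these lines (needs the contrib
pg_buffercache extension and assumes the default 8 kB block size - any
remainder would then be sitting in the OS page cache):

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- MB of the event table and its primary key index in shared_buffers
  SELECT c.relname, count(*) * 8 / 1024 AS cached_mb
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                              WHERE datname = current_database()))
    AND c.relname IN ('event', 'event_pkey')
  GROUP BY c.relname;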
The only Postgres configuration differences between the Windows and
Linux environments are shared_buffers, which is 4 GB on my laptop and
512 MB on the Windows server, and effective_cache_size, which is 8 GB on
the laptop and 16 GB on the server.
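The rest of the settings can be diffed with the standard pg_settings
view, e.g.:

  -- list every setting that doesn't have its default value
  SELECT name, setting, unit
  FROM pg_settings
  WHERE source <> 'default'
  ORDER BY name;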
I suspect that something is rotten in, for example, the provisioning of
the virtualization environment, but before I start pestering the
operations people I would really appreciate any comments on whether this
performance difference is to be expected or whether there's some obvious
tuning to try.
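One thing I can measure myself first is timer overhead: EXPLAIN ANALYZE
reads the clock twice per loop of that index scan (roughly 460000 calls
here), and if the clock source under virtualized Windows is slow, the
reported actual times would be inflated. The pg_test_timing utility that
ships with Postgres measures this:

  pg_test_timing

If the per-call overhead on the Windows box turns out to be in the
microseconds instead of tens of nanoseconds, that could account for at
least part of the difference in the EXPLAIN ANALYZE numbers.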
Best regards & thanks,
Mikkel Lauritsen