We have a C-written application server which uses ESQL/C on top of PostgreSQL 13.1 on Linux. The application in question always serves the same search in a librarian database; the work is given to the server as commands over the network, a login into the application followed by a search:

    SLNPServerInit
    User:zfl
    SLNPEndCommand

    SLNPSearch
    HitListName:Zfernleihe
    Search:1000=472214284
    SLNPEndCommand

To fulfill the search, the application server has to do some 100 ESQL/C calls. All of this should not take longer than 1-2 seconds, and normally it does not. But in some situations, in about 10% of the cases, it takes longer than 180 seconds. The other 90% stay below 2 seconds, i.e. the behavior is binary: either around 2 seconds or more than 180 seconds, with no values in between.

We can easily simulate the above with a small shell script which just sends the two commands over with 'netcat' and throws the result away. (The real search is issued by inter-library loan software which waits at most 180 seconds for the SLNPSearch result -- that is how we got to know about the problem at all, because all of this runs automagically with no user dialogs.) The idea behind the simulated search was to find out from the ESQL/C log files which operation takes so long and why.

Well, since some days, primarily to catch the situation in the act, we have been sending this simulated search every 10 seconds -- and since then the problem has gone away entirely. The Linux server where all this is running is generously equipped with memory and CPUs and is 99% idle.

The fact that the problem went away once we ran our test search every 10 seconds makes me think along the lines of "since we keep the PostgreSQL server busy that way, it has no chance to go into some kind of deeper sleep" (for example being swapped out, or whatever). Any ideas about this?

	matthias

-- 
Matthias Apitz, ✉ guru@xxxxxxxxxxx, http://www.unixarea.de/
+49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
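
P.S. In case a concrete reproduction helps: our 10-second simulation is essentially a loop like the sketch below (HOST and PORT are placeholders here, not our real values):

    #!/bin/sh
    # Send the two SLNP commands every 10 seconds and discard the reply.
    # HOST and PORT are placeholders for the application server's address.
    HOST=appserver
    PORT=8001

    while true
    do
        printf '%s\n' \
            'SLNPServerInit' \
            'User:zfl' \
            'SLNPEndCommand' \
            'SLNPSearch' \
            'HitListName:Zfernleihe' \
            'Search:1000=472214284' \
            'SLNPEndCommand' \
          | nc "$HOST" "$PORT" > /dev/null
        sleep 10
    done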