>> Yes, loading a large dictionary is known to be a fairly expensive
>> operation. There's been discussions about how to make it cheaper, but
>> nothing's been done yet.
>>
>> 			regards, tom lane
>
> Hi Tom,
>
> thanks for the quick response. Bad news for me ;(
> We develop AJAX-driven web apps, which rely on quick calls to data
> services. Each call to a service opens a new connection. This makes the
> search service, if using fts and ispell, about 100 times slower than a
> "dumb" ILIKE implementation.
>
> Is there any hack or compromise that would give us good performance
> without losing the fts ability?
> I am thinking, for example, of a way to permanently keep a loaded
> dictionary in memory instead of loading it for every connection. As I
> wrote in response to Pavel Stehule's post, connection pooling is not
> really an option.
> Our front-end is strictly PHP, so I was thinking about using a single
> persistent connection
> (http://de.php.net/manual/en/function.pg-pconnect.php) for all calls. Is
> there some sort of major disadvantage in this approach from the database
> point of view?
>
> Kind regards

Hi,

opening a completely new connection for each request may be quite
expensive, so I'd recommend some kind of connection pooling, especially
when you're doing 'small' transactions (because that's when the overhead
matters).

We had exactly the same problem, and a persistent connection solved it.
But it has some drawbacks too - each connection has its own copy of the
dictionary. So if the dictionary takes 30 MB and you have 10 connections,
then 300 MB of memory is used.

regards
Tomas
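
For illustration, a minimal sketch of the persistent-connection approach
on the PHP side. The connection settings, the "documents" table, the
"body_tsv" tsvector column and the "english_ispell" text search
configuration are placeholder names, not anything taken from this thread:

  <?php
  // pg_pconnect() reuses an existing backend for the same connection
  // string within this PHP process, so the ispell dictionary is loaded
  // once per worker rather than once per request.
  // All names below (appdb, documents, body_tsv, english_ispell) are
  // hypothetical placeholders.
  $conn = pg_pconnect('host=localhost dbname=appdb user=app password=secret');
  if ($conn === false) {
      die('could not connect');
  }

  $term = isset($_GET['q']) ? $_GET['q'] : '';

  // Parameterized full-text query; plainto_tsquery() turns the raw user
  // input into a tsquery using the ispell-based configuration.
  $result = pg_query_params(
      $conn,
      "SELECT id, title
         FROM documents
        WHERE body_tsv @@ plainto_tsquery('english_ispell', $1)
        LIMIT 20",
      array($term)
  );

  $rows = pg_fetch_all($result);
  echo json_encode($rows === false ? array() : $rows);

Each PHP worker (e.g. each Apache or FPM process) keeps its own persistent
connection, so the dictionary is still loaded once per worker; that is the
per-connection memory cost described above.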