boraldomaster wrote:
> But if a held cursor were created as fast as an unheld one - I could
> change my opinion.
> I don't understand why this is really impossible.
> When I create an unheld cursor it takes 1 ms. Why can't a held cursor
> do the same (but store in the session - or even better in the whole
> db - whatever it stores in the transaction when it is unheld)?
> Even algorithmically this should be possible.
> If I make "select * from z" - it actually shouldn't fetch anything -
> just save the query.
> When I do "fetch 10 from mycursor" - it should fetch the first 10
> records but no more.
> And so on.
> So - why is this really impossible?

Your original examples only create the cursor and do not actually use it. To get a meaningful comparison you should be measuring how long it takes both versions to fetch the first 10 pages of records (see the first sketch at the end of this message). It won't matter that the DECLARE only takes 3ms in the non-hold case if retrieving the first page of records takes 2s.

Very few things are impossible, and this likely is not one of them, but unless you are prepared to put off going live with your solution for another couple of years it does not really matter whether held cursor performance could be improved by re-engineering the code.

The current implementation saves the held cursor's results to a temporary area as a snapshot so that the cursor can be used consistently outside of transactions without the hassle of creating an actual persistent temporary table. Basically, you avoid sending the entire result set to the caller up front; instead the results are cached and only a fraction of them is fed back at each request (second sketch).

It is not intended to solve the problem of dynamic fetching: if the query has not been executed, how is the cursor supposed to know what the "next 10" records are? Your proposal also ignores the fact that a held cursor is supposed to return the same data that existed at the time of declaration - which is why a non-held cursor only works inside a transaction, where that state information is still available (third sketch). Since that state is discarded at commit, the held version has to compromise by taking a snapshot of all the data and persisting it.

In theory the held cursor could save only the meaningful state information and, during a fetch, the system could pretend it is operating at some time in the past. You are welcome to try to convince someone to explore this and other theories - I am not that person, nor do I have the time or need to do the convincing.

I do not have any meaningful experience with your scenario, but if a page is a very small fraction of the total table size then using indexes, limits, and offsets should give you solid performance probably 98%+ of the time (last sketch). Yes, large offsets can be problematic, but reverse ordering can help, and in most use cases high page numbers are requested far less often than low ones.

David J.
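
P.S. A rough sketch of the timing comparison I mean, using the table z from your example (run each block with \timing on in psql; the numbers will depend entirely on your data):

    -- non-held: DECLARE returns almost immediately, the real work only
    -- starts when you FETCH
    BEGIN;
    DECLARE c_nohold CURSOR FOR SELECT * FROM z;
    FETCH 10 FROM c_nohold;
    COMMIT;

    -- held: the full result set has to be materialized before the cursor
    -- can outlive its transaction, so the cost shows up by COMMIT rather
    -- than at the first FETCH
    BEGIN;
    DECLARE c_hold CURSOR WITH HOLD FOR SELECT * FROM z;
    COMMIT;
    FETCH 10 FROM c_hold;
    CLOSE c_hold;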
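
The caching behaviour looks like this from the client's side - the query runs once, and later requests just read pages back from the temporary area (the cursor name is only a placeholder):

    BEGIN;
    DECLARE mycursor CURSOR WITH HOLD FOR SELECT * FROM z;
    COMMIT;                    -- results are written to temporary storage here

    FETCH 10 FROM mycursor;    -- page 1, read back from that storage
    FETCH 10 FROM mycursor;    -- page 2, still no re-execution of the query
    CLOSE mycursor;            -- frees the temporary storage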
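
And the snapshot point - this is why the non-held variant simply cannot be used outside its transaction, and why the held variant has to copy the data somewhere of its own (cursor name made up):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM z;
    FETCH 10 FROM c;    -- fine: the transaction's snapshot is still available
    COMMIT;
    FETCH 10 FROM c;    -- ERROR: cursor "c" does not exist
                        -- the snapshot it depended on went away with the transaction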
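
Finally, the index/limit/offset approach, assuming an index on the column you page by (the column id, the 10-row page size, and the row counts are made up for illustration):

    -- page 3 at 10 rows per page
    SELECT * FROM z ORDER BY id LIMIT 10 OFFSET 20;

    -- a page near the end of, say, 100000 rows: rather than OFFSET 99980,
    -- reverse the ordering so the offset stays small, then re-sort the few
    -- rows you actually return
    SELECT *
    FROM (SELECT * FROM z ORDER BY id DESC LIMIT 10 OFFSET 10) AS tail_pages
    ORDER BY id;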