Hi, I need to generate aggregates over data coming from a stream. I could easily do it by inserting the data from the stream into a table and then querying it with something like:

select <my aggregation function> from atable group by <a column list>

The problem with this approach is that I would have to wait for the whole stream to finish before running the query above; since we're talking about 20M+ rows, the query itself would then take some time to complete. What if I did something like:

select <my aggregation function> from my_fifo_function([...]) group by <a column list>

where my_fifo_function reads data from the stream and returns "rows" as soon as they are available on the stream? This way I would get the result almost as soon as the stream finishes (assuming PostgreSQL can keep up with it). In other words, the query would be started even before the stream has "started", and would run at least as long as the stream does. (Of course, I don't need the data from the stream to be saved in any way; that's why I don't want to store it in a table.)

Does PostgreSQL read data from a function returning a SETOF row by row? Or does it wait for the whole function to finish (caching the whole result set) before starting to use the returned data? If it reads row by row, I think this could work...

Would that make sense?

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
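
The incremental idea described above — consuming rows as they arrive and updating per-group aggregates without ever materializing the stream — can be sketched outside the database. This is a hedged illustration only, not PostgreSQL behavior; `streaming_aggregate`, `key_fn`, `agg_fn`, and `init` are hypothetical names, and the stream is stood in for by a plain Python iterator:

```python
from collections import defaultdict
from typing import Callable, Dict, Iterable, TypeVar

Row = TypeVar("Row")
Key = TypeVar("Key")
Acc = TypeVar("Acc")

def streaming_aggregate(
    rows: Iterable[Row],
    key_fn: Callable[[Row], Key],
    agg_fn: Callable[[Acc, Row], Acc],
    init: Acc,
) -> Dict[Key, Acc]:
    """One pass over the stream: each row updates its group's running
    aggregate and is then discarded, so memory is O(number of groups),
    not O(number of rows) -- the effect the FIFO-function query is after."""
    groups: Dict[Key, Acc] = defaultdict(lambda: init)
    for row in rows:
        k = key_fn(row)
        groups[k] = agg_fn(groups[k], row)
    return dict(groups)

# Example: the equivalent of SUM(amount) ... GROUP BY category,
# fed from a generator rather than a stored table.
stream = iter([("a", 10), ("b", 5), ("a", 7)])
totals = streaming_aggregate(
    stream,
    key_fn=lambda r: r[0],
    agg_fn=lambda acc, r: acc + r[1],
    init=0,
)
print(totals)  # → {'a': 17, 'b': 5}
```

Whether the SQL version behaves like this depends entirely on the question asked above: if the planner pulls rows from the set-returning function one at a time, the aggregate hash table grows the same way; if it materializes the whole SETOF result first, the query only starts aggregating after the stream ends.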