Hi Pavel,
1. I believe we have plenty of memory. How much is needed to read one array of 30K float numbers? My back-of-the-envelope estimate is below -- is it roughly right?
2. What exactly is the "repeated detoast" issue, and what do we need to do to avoid it? I sketched my current understanding below.
3. We are not going to update individual elements of the arrays; we might occasionally replace a whole array. When we benchmarked, we did not notice any slowness. Can you explain how to reproduce the slowness you mention?
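Regarding #1, my rough math (please correct me): 30,000 float8 values are 30,000 x 8 = ~240 KB of raw data per array, and as I understand it each backend that reads the column detoasts its own copy, so N concurrent readers would need very roughly N x 240 KB just for those copies. A quick way to see the size of one such array value (array_fill is only a stand-in for real data):

    -- ~240,000 bytes of float8 payload plus a small array header
    SELECT pg_column_size(array_fill(0.0::float8, ARRAY[30000]));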
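Regarding #2, my current understanding (only a sketch, not something we have hit yet): if PL/pgSQL code keeps re-reading a toasted array column, the value can be detoasted over and over, whereas assigning it once into a local variable and working on that copy avoids the repetition. Roughly like this (the samples table and vals column are made up for illustration):

    CREATE OR REPLACE FUNCTION sum_vals(p_id int) RETURNS float8 AS $$
    DECLARE
        v     float8[];
        total float8 := 0;
    BEGIN
        -- Fetch (and detoast) the array once into a local variable ...
        SELECT vals INTO v FROM samples WHERE id = p_id;
        -- ... then loop over the local copy instead of re-reading the column.
        FOR i IN 1 .. coalesce(array_length(v, 1), 0) LOOP
            total := total + v[i];
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;

Is that the kind of usage you had in mind, or is there more to it?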
TIA!
On Fri, Feb 14, 2014 at 11:03 PM, Pavel Stehule [via PostgreSQL] <[hidden email]> wrote:
Hello

I worked with 80K float fields without any problem. There are possible issues:

* needs a lot of memory for detoast - it can be a problem with more parallel queries
* there is a risk of possible repeated detoast - some unhappy usage in plpgsql can be slow - it is solvable, but you have to identify this issue
* any update of a large array is slow - so these arrays are good for write-once data

Regards

Pavel

2014-02-14 23:07 GMT+01:00 lup <[hidden email]>:
Would 10K elements of float[3] make any difference in terms of read/write
performance?
Or 240K byte array?
Or are these all functionally the same issue for the server? If so,
intriguing possibilities abound. :)
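One way to put numbers on that comparison is pg_column_size; the values below are synthetic stand-ins, not data from this thread, but they show the footprints are essentially the same:

    -- 10000x3 float8 array vs. flat 30000-element float8 array vs. 240 KB bytea
    SELECT pg_column_size(array_fill(0.0::float8, ARRAY[10000, 3])) AS float8_10000x3,
           pg_column_size(array_fill(0.0::float8, ARRAY[30000]))    AS float8_30000,
           pg_column_size(decode(repeat('00', 240000), 'hex'))      AS bytea_240000;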