Paul A Jungwirth <pj@xxxxxxxxxxxxxxxxxxxxxxxx> writes:
> I'm considering a table structure where I'd be continuously appending
> to long arrays of floats (10 million elements or more). Keeping the
> data in arrays gives me much faster SELECT performance vs keeping it
> in millions of rows.
> But since these arrays keep growing, I'm wondering about the UPDATE
> performance.

It's going to suck big-time :-(.  You'd be constantly replacing all of
a multi-megabyte toasted field.  Even if the UPDATE speed per se seemed
tolerable, this would be pretty nasty in terms of the vacuuming overhead
and/or bloat it would impose.

My very first use of Postgres, twenty years ago, involved time series
data which perhaps is much like what you're doing.  We ended up keeping
the time series data outside the DB; I doubt the conclusion would be
different today.

I seem to recall having heard about a commercial fork of PG that is less
bad for this type of data, but the community code is not the weapon you
want.

			regards, tom lane
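
[Editor's note: a minimal sketch of the two layouts under discussion,
using hypothetical table names (series_arr, series_rows) not taken from
the original post.  It illustrates why appending to a large array column
rewrites the whole toasted value, while a row-per-sample design only
inserts new rows.]

    -- Array layout: one row per series, samples in a float8[] column.
    CREATE TABLE series_arr (
        series_id  integer PRIMARY KEY,
        samples    float8[] NOT NULL
    );

    -- Appending even one value rewrites the entire (possibly
    -- multi-megabyte, toasted) array and leaves a dead row version
    -- behind for VACUUM to clean up:
    UPDATE series_arr
       SET samples = samples || ARRAY[42.0::float8]
     WHERE series_id = 1;

    -- Row-per-sample layout: appends are plain INSERTs; existing
    -- data is never rewritten.
    CREATE TABLE series_rows (
        series_id  integer NOT NULL,
        ts         timestamptz NOT NULL,
        value      float8 NOT NULL,
        PRIMARY KEY (series_id, ts)
    );

    INSERT INTO series_rows VALUES (1, now(), 42.0);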