As I said, the record size is applied at file creation :-) so by copying
your data from one directory to another you made the new record size
apply to the newly created files :-) (equivalent to a backup and restore,
which is the route you'd have to take if there was not enough space for a
copy). Did you try to redo the same test while keeping the record size at
8K? ;-)

I think the problem you observed is simply related to the copy-on-write
nature of ZFS - once you modify the data, the sequential order of pages
breaks down over time, and sequential reads eventually turn into random
access. But once you re-copied your files, the right order was restored.

BTW, 8K is recommended for OLTP workloads, but for DW you may stay with
128K without problems.

Rgds,
-Dimitri

On 5/10/10, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
> On 5/9/10 1:45 AM, Dimitri wrote:
>> Josh,
>>
>> it would be great if you explained how you changed the record size to
>> 128K - as this size is assigned at file creation and cannot be
>> changed later, I suppose you made a backup of your data and then
>> did a full restore.. is that so?
>
> You can change the recordsize of the zpool dynamically, then simply copy
> the data directory (with PostgreSQL shut down) to a new directory on
> that zpool. This assumes that you have enough space on the zpool, of
> course.
>
> We didn't test how it would work to let the files in the Postgres
> instance get gradually replaced by "natural" updating.
>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://www.pgexperts.com
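
P.S. To make the procedure concrete, here is a minimal sketch of the
recordsize change and offline copy Josh describes - assuming a dataset
named tank/pgdata mounted at /tank/pgdata, with the cluster living in
/tank/pgdata/data (both names are illustrative only, adjust to your
layout):

    # stop PostgreSQL first - the copy must happen offline
    pg_ctl -D /tank/pgdata/data stop -m fast

    # recordsize is a per-dataset property; changing it only affects
    # blocks written *after* the change, hence the copy below
    zfs set recordsize=128K tank/pgdata
    zfs get recordsize tank/pgdata    # verify

    # rewrite every file so it picks up the new record size
    # (requires enough free space for a second copy of the cluster)
    cp -Rp /tank/pgdata/data /tank/pgdata/data.new
    mv /tank/pgdata/data /tank/pgdata/data.old
    mv /tank/pgdata/data.new /tank/pgdata/data

    # restart, and remove the old copy once you're satisfied
    pg_ctl -D /tank/pgdata/data start
    rm -rf /tank/pgdata/data.old

Note that 8K matches PostgreSQL's own block size, which is why it is the
usual recommendation for OLTP; 128K favors the large sequential scans
typical of DW.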