On 12/15/2016 08:26 AM, Tom DalPozzo wrote:
https://www.postgresql.org/docs/9.5/static/storage-toast.html
"The TOAST management code is triggered only when a row value to be
stored in a table is wider than TOAST_TUPLE_THRESHOLD bytes
(normally 2 kB). The TOAST code will compress and/or move field
values out-of-line until the row value is shorter than
TOAST_TUPLE_TARGET bytes (also normally 2 kB) or no more gains can
be had. During an UPDATE operation, values of unchanged fields are
normally preserved as-is; so an UPDATE of a row with out-of-line
values incurs no TOAST costs if none of the out-of-line values change."
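As an aside, the quoted doc text reads almost like pseudocode. A simplified sketch of the decision, in Python (the threshold names and 2 kB defaults are from the docs; real TOAST works per-field with its own pglz compressor, so this is only an illustration using zlib):

```python
import zlib

TOAST_TUPLE_THRESHOLD = 2048  # default, in bytes
TOAST_TUPLE_TARGET = 2048

def maybe_compress(row_bytes: bytes) -> bytes:
    """Illustration only: compression is attempted only when the row
    exceeds the threshold, mirroring the trigger condition quoted above.
    Real TOAST compresses and/or moves individual fields, not whole rows."""
    if len(row_bytes) <= TOAST_TUPLE_THRESHOLD:
        # Below the threshold: stored as-is, no compression attempted.
        return row_bytes
    compressed = zlib.compress(row_bytes)
    # Keep the compressed form only if it actually shrinks the data.
    return compressed if len(compressed) < len(row_bytes) else row_bytes
```

So a row that stays under ~2 kB is never handed to the compressor at all, which is the point being made below.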
Pupillo
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
I see. But in my case rows don't reach that threshold (I didn't check
whether it's 2 kB, but I didn't change anything). So I'm wondering if
there is any other way, besides TOAST, to get the rows compressed.
Are you really sure you want that? For small payloads the overhead of
compression tends to outweigh the benefits. A contrived example, biased
to making my point:
aklaver@killi:~> dd if=/dev/urandom of=file.txt bs=10 count=10
10+0 records in
10+0 records out
100 bytes (100 B) copied, 0.253617 s, 0.4 kB/s
aklaver@killi:~> l -h file.txt
-rw-r--r-- 1 aklaver users 100 Dec 15 13:07 file.txt
aklaver@killi:~> gzip file.txt
aklaver@killi:~> l -h file.txt.gz
-rw-r--r-- 1 aklaver users 132 Dec 15 13:07 file.txt.gz
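The same overhead shows up with any DEFLATE-based compressor, not just the gzip file format; a quick sketch in Python with zlib (PostgreSQL's TOAST uses its own pglz, so this is an analogy, not the server's code path):

```python
import os
import zlib

# 100 bytes of random data, like the dd example above
random_data = os.urandom(100)
compressed = zlib.compress(random_data)

# Incompressible input plus zlib header/checksum overhead:
# the "compressed" output comes out larger than the input.
print(len(random_data), len(compressed))

# The same 100 bytes of repetitive data compress to a fraction of that.
constant_data = b"a" * 100
print(len(constant_data), len(zlib.compress(constant_data)))
```

Which is why a compressor worth its salt only keeps the compressed form when it actually shrinks the data.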
I noticed that, when I use constant data, the total IO writes (as
reported by iostat) are roughly half the total IO writes when using
random or other hard-to-compress data.
Define constant data?
I thought the data you are inputting is below the compression threshold?
Is I/O causing a problem? Or, to put it another way, what is the
problem you are trying to solve?
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general