>>I think with packing he was referring to simply having more values in
>>the same disk space by using int2 instead of int4. (half the storage space)
I see, yes. The values I'm dealing with are a bit too large for that, but it's a good technique; were they smaller I would use it.
It looks like if I do CREATE TYPE with variable length, I can make a type that's potentially toastable. I could decrease the TOAST threshold and recompile.
I'm not sure if that's very practical; I know next to nothing about using CREATE TYPE. But if I could essentially make a toastable integer column type that's indexable and doesn't have an insane performance penalty, that would be great.
Looks like my daily data is about 25 MB before insert (e.g. via COPY table TO 'somefile';). After insert, and after doing VACUUM FULL and REINDEX, it's at about 75 MB.
If I gzip that 25 MB file it's only 6.3 MB, so I'd think a toastable type would benefit.
Need to look into it now, I may be completely off my rocker.
Thank you
Shane Ambler <pgsql@xxxxxxxxxx> wrote:
Zoolin Lin wrote:
> Thank you for the reply
>
>>> Primary table is all integers like:
>>>
>>> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8
>>> -------------------------------------------------------------------------------------------------
>>> primary key is on date to num->6 columns
>>>> What types are num1->8?
>> They are all integer
>
>>> Hmm - not sure if you'd get any better packing if you could make some
>>> int2 and put them next to each other. Need to test.
>
> Thanks, I find virtually nothing on the int2 column type beyond a brief mention here:
> http://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INT
>
> Could I prevail on you to expand on packing with int2 a bit more, or point me in the right direction for documentation?
int4 is the internal name for integer (4 bytes)
int2 is the internal name for smallint (2 bytes)
Try
SELECT format_type(oid, NULL) AS friendly, typname AS internal,
       typlen AS length
FROM pg_type
WHERE typlen > 0;
to see them all (a negative typlen means a variable-size type, usually an
array or bytea etc.).
I think with packing he was referring to simply having more values in
the same disk space by using int2 instead of int4. (half the storage space)
>
> If there's some way I can pack multiple columns into one to save space, yet still effectively query on them, even if it's a lot slower, that would be great.
Depending on the size of data you need to store you may be able to get
some benefit from "Packing" multiple values into one column. But I'm not
sure if you need to go that far. What range of numbers do you need to
store? If you don't need the full int4 range of values then try a
smaller data type. If int2 is sufficient then just change the columns
from integer to int2 and cut your storage in half. Easy gain.
The "packing" theory would fall under general programming algorithms not
postgres specific.
Basically, let's say you have 4 values that are in the range of 0-255 (1
byte each); you can do something like
col1 = (val1<<0) | (val2<<8) | (val3<<16) | (val4<<24)
This will OR the four values together into one 4-byte int (note it has to
be OR, not AND -- ANDing the shifted values would zero the result).
So searching would be something like
WHERE (col1 & ((255<<0) | (255<<16))) = ((val1<<0) | (val3<<16))
if you needed to search on more than one value at a time (here val1 and
val3: the 255 masks pick out just the bytes being tested).
Guess you can see what your queries will be looking like.
(Actually I'm not certain I got that 100% correct)
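To make the idea above concrete, here is a minimal runnable sketch of the byte-packing and the masked comparison. The helper names (pack4, matches) are illustrative, not from the thread; note the shifted values are combined with OR, since ANDing them would zero everything out.

```python
def pack4(val1, val2, val3, val4):
    """Pack four values in the range 0-255 into one 32-bit integer."""
    return (val1 << 0) | (val2 << 8) | (val3 << 16) | (val4 << 24)

def matches(col1, val1=None, val3=None):
    """Emulate the WHERE clause: test val1 and/or val3 inside col1.

    Builds a mask covering only the byte positions being tested,
    then compares the masked column against the shifted values.
    """
    mask = 0
    want = 0
    if val1 is not None:
        mask |= 0xFF << 0
        want |= val1 << 0
    if val3 is not None:
        mask |= 0xFF << 16
        want |= val3 << 16
    return (col1 & mask) == want

packed = pack4(10, 20, 30, 40)
print(matches(packed, val1=10, val3=30))  # True
print(matches(packed, val1=11))           # False
```

In SQL you would write the same mask-and-compare directly in the WHERE clause, as shown above.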
That's a simple example that should give you the general idea. In
practice you would only get gains if you have unusual-length values, so
if you had value ranges from 0 to 1023 (10 bits each) then you could
pack 3 values into an int4 instead of using 3 int2 cols (that's 32 bits
for the int4 against 48 bits for the 3 int2 cols) and you would use <<10
and <<20 in the above example.
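The 10-bit variant can be sketched the same way (pack3x10 and unpack3x10 are illustrative names, not from the thread; 0x3FF is the 10-bit mask):

```python
def pack3x10(a, b, c):
    """Pack three 10-bit values (0-1023) into one 32-bit integer."""
    assert all(0 <= v <= 1023 for v in (a, b, c))
    return (a << 0) | (b << 10) | (c << 20)

def unpack3x10(packed):
    """Recover the three 10-bit values from the packed integer."""
    return (packed & 0x3FF, (packed >> 10) & 0x3FF, (packed >> 20) & 0x3FF)

p = pack3x10(5, 600, 1023)
print(unpack3x10(p))  # (5, 600, 1023)
```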
You may find it easier to define a function or two to automate this
instead of repeating it for each query. But with disks and RAM as cheap
as they are these days, this sort of packing is getting rarer (except
maybe in embedded systems with limited resources).
> My current scheme, though as normalized and summarized as I can make it, really chews up a ton of space. It might even be chewing up more than the data files I'm summarizing, I assume due to the indexing.
>
--
Shane Ambler
pgSQL@xxxxxxxxxx
Get Sheeky @ http://Sheeky.Biz