Question: how exactly do Postgres and Linux use memory?
I have several databases with multi-GB indexes on very large
tables. On our current servers, the indexes fit into memory
but the data does not (the servers have 8 - 12 GB). However, my boss
wants to buy new servers for me but does not want to keep the memory
requirements as high as they are now (this would allow us to get more
servers to spread our 200+ databases over).
Question: if I have a 4 GB+ index for a table on a server with 4 GB of RAM,
and I submit a query that does an index scan, does Postgres read the
entire index, or only as much of the index as it needs to find the
matching value (our extra-large indexes are primary keys)?
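My mental model of the lookup (a rough sketch of a generic B-tree, not actual Postgres internals; the keys-per-page figure is a guess) is that a single lookup only reads the pages on one root-to-leaf path, so the pages touched grow with the log of the key count, not with total index size:

```python
import math

def pages_touched(num_keys, keys_per_page=400):
    # keys_per_page ~400 is a rough guess for a bigint key on 8 kB pages.
    # A B-tree lookup reads one page per level, root to leaf, so the
    # page count is roughly the tree height: ceil(log_fanout(num_keys)).
    return max(1, math.ceil(math.log(num_keys, keys_per_page)))

# A multi-GB primary key over ~500 million bigint keys is only a few
# levels deep, so one lookup would read only a handful of pages:
print(pages_touched(500_000_000))  # -> 4
```

If that model is right, only the top levels of the index need to stay cached for point lookups to be cheap, but I don't know how well it matches what Postgres actually does.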
I am looking for real numbers to give my boss showing either that
having a primary key larger than our memory is bad (and how to clearly
justify that), or that it is OK.
If it is OK, what are the trade-offs in performance?
Obviously, I want more memory, but I have to prove the need to my boss, since it raises the cost of the servers a fair amount.
Thanks for any help,
Chris