On Fri, Feb 15, 2013 at 1:00 AM, Nicolas Charles
<nicolas.charles@xxxxxxxxxxxxx> wrote:
> On 14/02/2013 20:27, Jeff Janes wrote:
>>
>> On Thu, Feb 14, 2013 at 7:35 AM, Nicolas Charles
>> <nicolas.charles@xxxxxxxxxxxxx> wrote:
>>>
>>> It contains 11018592 entries, with the following patterns:
>>> 108492 distinct executiontimestamp
>>> 14 distinct nodeid
>>> 59 distinct directiveid
>>> 26 distinct ruleid
>>> 35 distinct serial
>>
>> How many entries fall within a typical query interval of
>> executiontimestamp?
>
> Around 65 000 entries
>
>>> I'm surprised that the executiontimestamp index is not used, since it
>>> seems to be where most of the query time is spent.
>>
>> I do not draw that conclusion from your posted information. Can you
>> highlight the parts of it that lead you to this conclusion?
>
> The index scans are on nodeid_idx and configurationruleid_idx, not on
> executiontimestamp.
> Or am I misreading the output?

You are correct about the use of nodeid_idx and configurationruleid_idx.
But the claim that most of the time goes into filtering out rows that
fail to match executiontimestamp is the part I don't think the explain
output supports. The information in that plan is simply not sufficient
to decide the question one way or the other. More recent improvements
to "explain (analyze, buffers)" make this easier to see, especially
with track_io_timing on, but even then it would be hard to draw a
definitive conclusion. (An illustrative sketch of such a query is
appended after this message.)

The most definitive thing would be to run the experiment: add
executiontimestamp as a *trailing* column at the end of one of the
existing indexes and see what happens (again, see the sketch appended
below).

>> If the indexes fit in RAM, they fit in RAM. If anything, increasing
>> shared_buffers could make it harder to fit them entirely in RAM. If
>> your shared buffers undergo a lot of churn, then the OS cache and the
>> shared buffers tend to uselessly mirror each other, meaning there is
>> less space for non-redundant pages.
>
> Oh!
> So I completely misunderstood the meaning of shared_buffers; I figured
> it was somehow the place where the data (like indexes) would be stored
> by postgres.

That is correct: shared_buffers is the space postgres uses to cache
data. But the rest of RAM, beyond shared_buffers, is also used to cache
data, just by the OS rather than by postgres. On a dedicated server,
the OS will generally try (or at least attempt) to use that space for
the benefit of postgres anyway. If shared_buffers > RAM/2 it won't be
very successful at this, but it will still try. The kernel and postgres
do not have intimate knowledge of each other, so it is hard to arrange
for every page to show up in exactly one of the two caches and not
both.

Cheers,

Jeff
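
To make the "explain (analyze, buffers)" suggestion above concrete,
here is a minimal sketch. The table name (ruddersysevents) and the
literals are assumptions for illustration only; substitute the actual
table and query from the thread:

    -- track_io_timing is a superuser setting (PostgreSQL 9.2+); with it
    -- on, EXPLAIN reports time spent reading and writing data blocks.
    SET track_io_timing = on;

    -- BUFFERS reports shared blocks hit (found in shared_buffers) versus
    -- read (fetched from the OS cache or disk) per plan node, which
    -- helps show where the time actually goes.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM ruddersysevents                      -- hypothetical table name
    WHERE nodeid = 'node42'                   -- hypothetical literal
      AND executiontimestamp >= '2013-02-01'
      AND executiontimestamp <  '2013-02-02';

If most of the time really were spent discarding rows that fail the
executiontimestamp condition, the scan node would show a large "Rows
Removed by Filter" count; that counter is itself one of the newer
additions (9.2).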
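
Likewise, a sketch of the trailing-column experiment, under the
assumptions that nodeid_idx currently covers only nodeid and that the
table is called ruddersysevents (both names hypothetical):

    -- Append executiontimestamp as the LAST column, after the equality
    -- column the planner already uses. CONCURRENTLY builds the index
    -- without blocking writes (it cannot run inside a transaction block).
    CREATE INDEX CONCURRENTLY nodeid_ts_idx
        ON ruddersysevents (nodeid, executiontimestamp);

If the experiment works, a fresh EXPLAIN should show the timestamp
range inside the Index Cond line rather than in a Filter line, i.e. the
non-matching rows are skipped within the index instead of being fetched
and then thrown away.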
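
On the shared_buffers point, one illustrative way to act on Jeff's
"shared_buffers > RAM/2" remark. The specific numbers below are not
from this thread, just a sketch for a hypothetical 16 GB dedicated
server:

    # postgresql.conf -- illustrative values only
    # Keep shared_buffers well below RAM/2 so the OS page cache retains
    # meaningful non-redundant space instead of mirroring the buffer pool.
    shared_buffers = 4GB

    # effective_cache_size allocates nothing; it only tells the planner
    # roughly how much total caching (shared_buffers + OS cache) to
    # assume when costing index scans.
    effective_cache_size = 12GB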