Re: query performance, thought it was timestamps, maybe just table size?


On Mon, Dec 3, 2012 at 5:56 AM, Henry Drexler <alonup8tb@xxxxxxxxx> wrote:
> On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
>>
>> Could you do it for the recursive
>> SQL (the one inside the function) like you had previously done for the
>> regular explain?
>>
>> Cheers,
>>
>> Jeff
>
>
> Here they are:
>
> for the 65 million row table:
>
> "Index Scan using ctn_source on massive  (cost=0.00..189.38 rows=1 width=28) (actual time=85.802..85.806 rows=1 loops=1)"
> "  Index Cond: (ctn = 1302050134::bigint)"
> "  Filter: (dateof <@ '["2012-07-03 14:00:00","2012-07-10 14:00:00"]'::tsrange)"
> "  Buffers: shared read=6"
> "Total runtime: 85.891 ms"

If you execute it repeatedly (so that the data is in buffers the next
time) does it then get faster?
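For instance, you could re-run the statement with EXPLAIN (ANALYZE, BUFFERS) in the same session (the query below is a sketch reconstructed from the Index Cond and Filter lines of your plan, not your actual function body):

```sql
-- Run this twice in a row.  If the pages stay cached, the second run's
-- "Buffers" line should show "shared hit=..." instead of "shared read=...",
-- and the actual time should drop accordingly.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM massive
WHERE ctn = 1302050134
  AND dateof <@ '["2012-07-03 14:00:00","2012-07-10 14:00:00"]'::tsrange;
```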

> for the 30 million row table:
>
> "Index Scan using ctn_dateof on massive  (cost=0.00..80.24 rows=1 width=24) (actual time=0.018..0.020 rows=1 loops=1)"
> "  Index Cond: (ctn = 1302050134::bigint)"
> "  Filter: (dateof <@ '[2012-07-03,2012-07-11)'::daterange)"
> "  Buffers: shared hit=5"
> "Total runtime: 0.046 ms"

The obvious difference is that this one finds all 5 buffers it needs already in shared buffers, while the first one had to read them in from disk.  So this supports the idea that your data has simply grown too large for your RAM.
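One way to quantify that over time (a sketch using the standard statistics view; `massive` is the table name from the plans above) is to compare cached block hits against disk reads:

```sql
-- Fraction of heap block requests for this table served from shared buffers.
-- heap_blks_hit and heap_blks_read come from pg_statio_user_tables; a ratio
-- well below 1.0 suggests the working set no longer fits in cache.
SELECT heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
WHERE relname = 'massive';
```

Note this only covers PostgreSQL's own shared buffers; reads satisfied by the OS page cache still count as "read" here.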

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
