Re: Large number of tables slow insert


 



On this smaller test, the indexes exceed the allowed memory size (I've got
over 400,000 readings per sensor), so they are mostly written to disk. On the
big test, I had small indexes (< page_size) because I only had about 5-10 rows
per table, so 3000 * 8 kB = 24 MB, which is below the allowed memory.
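The 24 MB figure can be checked with a quick back-of-envelope calculation. This is only a sketch: it assumes PostgreSQL's default 8 kB block size and one index page per table (since each index is smaller than a page), as described above.

```python
# Back-of-envelope check of the index footprint in the "big test".
# Assumptions (from the thread): ~3000 tiny tables, each index fits in
# a single page, and the default 8 kB PostgreSQL block size.
PAGE_SIZE_KB = 8          # default PostgreSQL block size
big_test_tables = 3000    # ~3000 tables, 5-10 rows each

# One index page per table, converted to (decimal) megabytes:
big_test_footprint_mb = big_test_tables * PAGE_SIZE_KB / 1000
print(f"big test index footprint: ~{big_test_footprint_mb:.0f} MB")  # ~24 MB
```

At 400,000 readings per sensor the same arithmetic gives many pages per index, which is why the smaller test no longer fits in memory.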

By the way, which configuration parameter controls the caching of previously read index pages?

I cannot run tests this weekend because I do not have access to the machine,
but I will try some tests on Monday.

Thanks for your answers, though.

Quoting Scott Marlowe <scott.marlowe@xxxxxxxxx>:

> On Sat, Aug 23, 2008 at 1:35 PM,  <tls.wydd@xxxxxxx> wrote:
> > Actually, I've got another test system with only a few sensors (thus few
> > tables) and it's working well (<10 ms inserts) with all the indexes.
> > I know the indexes are slowing down my inserts, but I need them to query
> > the big tables (each can reach millions of rows over time) really fast.
>
> It's quite likely that on the smaller system the indexes all fit into
> memory and only require writes, while on the bigger system they are
> too large and have to be read from disk first, then written out.
>
> A useful solution is to remove most of the indexes on the main server,
> and set up a Slony slave with the extra indexes on it to handle the
> reporting queries.
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>



