Re: improve performance in a big table

On Dec 13, 2007 7:15 AM, olivier boissard <olivier.boissard@xxxxxxxxx> wrote:
> A.Burbello wrote:
> > Hi people,
> >
> > I have a case and I am not sure what the best approach would be.
> >
> > One table has more than 150 million rows, and I thought it could be
> > divided by state.
> > Each row has a person ID, state, and other information, but searches
> > would be done only by person ID (a numeric column).
> >
> > I can improve the query by putting an index on that column, but are
> > there any other ways?
>
> I am also studying how to improve performance on big tables.
> Like you, I don't know how to improve it without an index; that's the
> only way I have found.
> I find PostgreSQL is fast on small tables, but I run into real
> performance problems as the number of rows increases.
> Does anyone know if there are specific PostgreSQL tuning parameters in
> the .conf file for big tables?
>
> max_fsm_pages?
> max_fsm_relations?

More often than not, the answer lies not in tuning but in rearranging
how you think of your data and how you create indexes.
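
For the case above, for example, a plain B-tree index on the lookup column
is usually the first thing to try.  A minimal sketch, using hypothetical
table and column names (person_data, person_id):

    -- Index the column the searches actually filter on, so the planner
    -- can avoid a sequential scan over all 150 million rows.
    CREATE INDEX idx_person_data_person_id ON person_data (person_id);

    -- Refresh planner statistics after building the index / bulk loading.
    ANALYZE person_data;

Build it with CREATE INDEX CONCURRENTLY if the table has to stay writable
while the index is being created.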

If you guys post some schemas and queries (with EXPLAIN ANALYZE output)
that aren't running so fast, we'll try to help, although pgsql-performance
is the better place to do that.
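
For reference, the kind of output that makes it easy to help looks like
this (the query and names are just an illustration):

    -- Run the slow query under EXPLAIN ANALYZE and post the complete plan;
    -- it shows whether an index is used and where the time actually goes.
    EXPLAIN ANALYZE
    SELECT *
    FROM person_data
    WHERE person_id = 12345;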
