Re: Read performance on Large Table

On Thu, May 21, 2015 at 9:18 AM, Scott Ribe <scott_ribe@xxxxxxxxxxxxxxxx> wrote:
> On May 21, 2015, at 9:05 AM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
>>
>> I've done a lot of partitioning of big data sets in postgresql and if
>> there's some common field, like date, that makes sense to partition
>> on, it can be a huge win.
>
> Indeed. I recently did it on exactly this kind of thing, a log of activity. And the common queries weren’t slow at all.
>
> But if I wanted to upgrade via dump/restore with minimal downtime, rather than set up Slony or try my luck with pg_upgrade, I could dump the historical partitions, drop those tables, then dump/restore, then restore the historical partitions at my convenience. (In this particular db, history is unusually huge compared to the live data.)

I use an interesting method to set up partitioning. I set up my
triggers, then insert the data in chunks from the master table into
itself:

insert into master_table select * from only master_table limit 10000;

and run that over and over. To the application, the data all stays in
the same "table", but it is slowly migrating into the partitions
without interruption.

Note: ALWAYS use triggers for partitioning. Rules are way too slow.
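For anyone who hasn't done this before, here's a rough sketch of the kind of trigger setup described above, using inheritance-based partitioning (the table, column, and partition names are made up for illustration; the real routing logic depends on whatever column you partition by):

```sql
-- Hypothetical example: a log table range-partitioned by month.
create table master_table (
    id      bigint,
    created timestamptz,
    msg     text
);

create table master_table_2015_05 (
    check (created >= '2015-05-01' and created < '2015-06-01')
) inherits (master_table);

-- Trigger function that redirects inserts on the parent into the
-- matching child; returning NULL keeps the row out of the parent.
create or replace function master_table_insert() returns trigger as $$
begin
    insert into master_table_2015_05 values (new.*);
    return null;
end;
$$ language plpgsql;

create trigger master_table_partition
    before insert on master_table
    for each row execute procedure master_table_insert();
```

One thing to watch with the chunked insert-from-itself approach: the copied rows also have to be deleted from the parent (with DELETE FROM ONLY), or queries against the master will see them twice. On 9.1+ a writable CTE is one way to do both halves of a batch in a single statement:

```sql
-- Move one batch of 10000 rows from the parent into the partitions.
with moved as (
    delete from only master_table
    where id in (select id from only master_table limit 10000)
    returning *
)
insert into master_table select * from moved;
```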


-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
