Re: Performance on Bulk Insert to Partitioned Table

On Thu, Dec 20, 2012 at 9:29 AM, Charles Gomes <charlesrg@xxxxxxxxxxx> wrote:
> Hello guys
>
>
>
> I’m doing 1.2 billion inserts into a table partitioned into
> 15 partitions.
>
>
>
> When I target the MASTER table for all the inserts and let
> the trigger decide which partition to route to, it takes 4 hours.
>
> If I target the partition tables directly during the
> insert, I get 4 times better performance: it takes 1 hour.

How do you target them directly?  By implementing the
"trigger-equivalent-code" in the application code tuple by tuple, or
by pre-segregating the tuples and then bulk loading each segment to
its partition?
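The pre-segregation approach Jeff describes could be sketched roughly as follows (a minimal illustration, not code from the thread; the routing rule, the 15-way hash split, and the row shape are all assumptions standing in for whatever the actual partitioning trigger does):

```python
from collections import defaultdict

NUM_PARTITIONS = 15  # partition count mentioned in the thread


def partition_for(row):
    # Hypothetical routing rule: hash the first column modulo the
    # partition count, mirroring what a routing trigger would compute.
    return hash(row[0]) % NUM_PARTITIONS


def segregate(rows):
    """Group rows by target partition so each group can then be bulk
    loaded (e.g. via COPY) straight into its child table, instead of
    paying the per-tuple trigger cost on the master table."""
    batches = defaultdict(list)
    for row in rows:
        batches[partition_for(row)].append(row)
    return batches
```

The point of the question is that this does the routing decision once per batch load path rather than firing a PL/pgSQL trigger for every one of the 1.2 billion tuples.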

What if you get rid of the partitioning and just load data to the
master, is that closer to 4 hours or to 1 hour?

...
>
>
> What I noticed is that iostat is not showing an I/O
> bottleneck.
>
> iostat -xN 1
>
> Device:         rrqm/s  wrqm/s    r/s      w/s  rsec/s    wsec/s avgrq-sz avgqu-sz  await  svctm  %util
> Pgresql--data     0.00    0.00   0.00  8288.00    0.00  66304.00     8.00    60.92   7.35   0.01   4.30

8288 randomly scattered writes per second sounds like enough to
bottleneck a pretty impressive RAID.  Or am I misreading that?
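As a sanity check on those numbers (my arithmetic, not from the thread): Linux iostat reports sectors of 512 bytes, so an avgrq-sz of 8 sectors means 4 KiB requests, and the wsec/s figure works out to roughly 34 MB/s spread across 8288 distinct writes per second:

```python
# Interpret the iostat sample above: Linux sectors are 512 bytes.
SECTOR_BYTES = 512
wsec_per_s = 66304.0   # write sectors/s from the iostat output
w_per_s = 8288.0       # writes/s from the iostat output

avg_request_sectors = wsec_per_s / w_per_s               # matches avgrq-sz = 8.00
avg_request_bytes = avg_request_sectors * SECTOR_BYTES   # 4096 bytes = 4 KiB
throughput_mb_s = wsec_per_s * SECTOR_BYTES / 1e6        # ~33.9 MB/s

print(avg_request_sectors, avg_request_bytes, throughput_mb_s)
```

So the modest ~34 MB/s of sequential-looking throughput hides 8288 small-write IOPS, which is Jeff's point: if those 4 KiB writes are scattered, that rate alone can saturate a sizable spindle-based RAID even though the bandwidth number looks unremarkable.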

Cheers,

Jeff


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
