Re: Writing 1100 rows per second

I am using Npgsql in my C# program.
I have a table into which we need to dump data from a flat file, roughly 5.5 million rows, once every 24 hours.
Soon that will grow to 10 million rows.
Using Npgsql's binary import, all 5.5 million inserts complete in under 3 minutes.
We then create two indexes on the table, which takes 25-30 seconds.
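
For anyone wanting to try the same approach, here is a minimal sketch of a binary-import-then-index load with Npgsql. The connection string, file path, table name, and columns are all placeholders, not the actual schema from our system:

    using System;
    using System.IO;
    using Npgsql;
    using NpgsqlTypes;

    class BulkLoad
    {
        static void Main()
        {
            // Placeholders: adjust the connection string, file, table, and columns.
            var connString = "Host=localhost;Username=postgres;Database=mydb";
            using var conn = new NpgsqlConnection(connString);
            conn.Open();

            // Binary COPY streams all rows to the server in a single operation,
            // avoiding the per-statement round trip of row-by-row INSERTs.
            using (var writer = conn.BeginBinaryImport(
                "COPY flat_data (id, name, amount) FROM STDIN (FORMAT BINARY)"))
            {
                foreach (var line in File.ReadLines("dump.csv"))
                {
                    var f = line.Split(',');
                    writer.StartRow();
                    writer.Write(long.Parse(f[0]), NpgsqlDbType.Bigint);
                    writer.Write(f[1], NpgsqlDbType.Text);
                    writer.Write(decimal.Parse(f[2]), NpgsqlDbType.Numeric);
                }
                writer.Complete(); // commits the COPY; disposing without this rolls it back
            }

            // Create indexes only after the load; maintaining them during
            // millions of inserts would be far slower than building them once.
            using var cmd = new NpgsqlCommand(
                "CREATE INDEX ON flat_data (id)", conn);
            cmd.ExecuteNonQuery();
        }
    }

The key point is the ordering: load into a bare table first, then build indexes, rather than paying index maintenance on every row.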

With Warm Regards,

Amol P. Tarte
Project Manager,
Rajdeep Info Techno Pvt. Ltd.


On Mon, Feb 10, 2020 at 1:00 AM Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
On Wed, Feb 5, 2020 at 12:25 PM Arya F <arya6000@xxxxxxxxx> wrote:
If I run the database on a server with enough RAM to hold all the indexes and tables in memory, and it then flushed index updates to the HDD every x seconds, would that increase performance dramatically?

Perhaps.  Probably not dramatically, though.  If x seconds (the checkpoint interval) is not long enough for the entire index to have been dirtied, then my finding is that writing half the pages of a file (randomly interspersed), even in block order, still has the horrid performance of a long sequence of random writes rather than the much better performance of a handful of sequential writes.  This probably depends strongly on your RAID controller, OS version, and so on, so you should try it yourself on your own hardware.
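
For reference, the "x seconds" above corresponds to the checkpointer settings in postgresql.conf. A minimal sketch of the relevant knobs (the values are illustrative only, not recommendations):

    # postgresql.conf -- illustrative values, tune for your workload
    checkpoint_timeout = 15min          # maximum time between checkpoints
    max_wal_size = 4GB                  # a checkpoint also triggers when WAL exceeds this
    checkpoint_completion_target = 0.9  # spread checkpoint writes across the interval

Stretching the interval delays the flush of dirty index pages, but as noted above it does not turn scattered dirty pages into sequential writes.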

Cheers,

Jeff
