I finally figured it out as follows:
1. Changed each column's data type to match the corresponding data in the CSV file.
2. Where null values existed, declared the column as varchar, since the nulls were causing problems as well (a sketch of the approach is below).
All 1100 columns work well now.
This problem cost me three days, and I still have a lot of CSV data to COPY.
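In case it helps anyone else, here is a minimal sketch of what I mean, with made-up table and column names and a placeholder file path: declare each column with the type that actually matches the CSV contents, and tell COPY how empty fields are represented so they load as NULL instead of failing a cast.

    -- Hypothetical example: column types chosen to match the CSV data.
    CREATE TABLE import_data (
        id          integer,
        measured_on date,
        reading     double precision,   -- values in the file are always numeric
        remark      varchar             -- column that sometimes arrives empty
    );

    -- Empty fields in the file are loaded as NULL rather than breaking a cast.
    COPY import_data
    FROM '/path/to/data.csv'
    WITH (FORMAT csv, HEADER true, NULL '');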
---- On Wed, 04 Jan 2017 08:39:42 -0800 Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote ----
On 01/04/2017 08:32 AM, Steve Crawford wrote:
> ...
>> Numeric is expensive type - try to use float instead, maybe double.
>
> If I am following the OP correctly the table itself has all the
> columns declared as varchar. The data in the CSV file is a mix of
> text, date and numeric, presumably cast to text on entry into the table.
>
> But a CSV *is* purely text - no casting to text is needed. Conversion is
> only needed when the strings in the CSV are text representations of
> *non*-text data.

Yeah, muddled thinking.

> I'm guessing that the OP is using all text fields to deal with possibly
> flawed input data and then validating and migrating the data in
> subsequent steps. In that case, an ETL solution may be a better
> approach. Many options, both open- closed- and hybrid-source exist.
>
> Cheers,
> Steve

--
Adrian Klaver