
Re: COPY: row is too big

OK, maybe the final solution is to split it in half.

---- On Wednesday, 04 January 2017 06:53:31 -0800 Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote ----

On 01/04/2017 05:00 AM, vod vos wrote:
> Now I am confused: I can create 1100 columns in a table in
> PostgreSQL, but I can't copy 1100 values into the table. And I really

As pointed out previously:

Maximum Columns per Table    250 - 1600 depending on column types

That limit depends on both the number of columns and the actual data in
the columns. Empty columns are not the problem; it is when you start
filling them that you get the error, because a heap row (apart from values
large enough to be TOASTed out of line) has to fit in a single 8 kB page,
which is where the 8160-byte maximum in the error message comes from.
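
To put rough numbers on it (plain arithmetic, using only the figures
already in your error message):

-- average bytes per column in the failing row, vs. the per-column
-- budget needed to fit 1100 populated columns in one page
SELECT 11808 / 1100.0 AS avg_bytes_per_column,   -- about 10.7 bytes each
       8160  / 1100.0 AS bytes_available_each;   -- about 7.4 bytes each

So even fairly short values add up past the page limit once all 1100
columns are filled in.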

> don't want to split the csv file into pieces, to avoid mistakes after this
> action.
>
> I created a table with 1100 columns, all of type varchar, hoping
> the COPY command would automatically load the csv data, which contains some

I am afraid the solution is going to require more than hope. You are
going to need to break the data up. I suspect that just splitting it
in half would do the trick. So:

Table 1
column 1 for a primary key (assuming the first column of your present data)
columns 2-550

Table 2
column 1 for a primary key (assuming the first column of your present data)
columns 551-1100

Using csvcut, the program I mentioned previously, that translates into:

csvcut -d ';' -c 1,2-550 your_big.csv > table_1.csv

csvcut -d ';' -c 1,551-1100 your_big.csv > table_2.csv

(The -d ';' tells csvcut the input is semicolon-delimited, to match the
delimiter in your COPY command.)
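
If it helps, here is a rough sketch of the loading side. The names are
made up (table_1/table_2, columns c1 .. c1100), everything stays varchar
for a first pass, and I am assuming the split files sit next to your
original file:

-- build the wide CREATE TABLE statements instead of typing 1100 columns
DO $$
BEGIN
  EXECUTE (SELECT format('CREATE TABLE table_1 (c1 varchar PRIMARY KEY, %s)',
                         string_agg(format('c%s varchar', i), ', ' ORDER BY i))
           FROM generate_series(2, 550) AS i);
  EXECUTE (SELECT format('CREATE TABLE table_2 (c1 varchar PRIMARY KEY, %s)',
                         string_agg(format('c%s varchar', i), ', ' ORDER BY i))
           FROM generate_series(551, 1100) AS i);
END $$;

-- csvkit writes plain comma-separated output, so load these with FORMAT csv
-- (add HEADER if the first row of the split files is column names)
COPY table_1 FROM '/var/www/test/table_1.csv' WITH (FORMAT csv);
COPY table_2 FROM '/var/www/test/table_2.csv' WITH (FORMAT csv);

-- when you need the full width again, join the halves on the shared key
SELECT * FROM table_1 JOIN table_2 USING (c1);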


> character and date values, though most of them are numeric.

Is this a different data set?
Previously you said:
"The most of the data type are text or varhcar, ..."
>
> I use the command: COPY rius FROM '/var/www/test/test.csv' WITH
> DELIMITER ';';
>
> Then it shows:
>
> ERROR: row is too big: size 11808, maximum size 8160
>
>
> ---- On Tuesday, 03 January 2017 05:24:18 -0800 John McKown wrote ----
>
> On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent <robjsargent@xxxxxxxxx> wrote:
>
> Perhaps this is your opportunity to correct someone else's
> mistake. You need to show the table definition to convince us
> that it cannot be improved. That it may be hard work really
> doesn't mean it's not the right path.
>
>
> This may not be possible. The data might be coming in from an
> external source. I imagine you've run into the old "well, _we_ don't
> have any problems, so it must be on your end!" scenario.
>
> Example: we receive CSV files from an external source. These files
> are _supposed_ to be validated. But we have often received files
> where NOT NULL fields have "nothing" in them. E.g. a customer
> bill which has _everything_ in it _except_ the customer number (or
> an invalid one such as "123{"); or missing some other vital piece of
> information.
>
> In this particular case, the OP might want to do what we did in a
> similar case. We had way too many columns in a table. The
> performance was horrible. We did an analysis and, as usual, the
> majority of the selects were for a subset of the columns, about 15%
> of the total. We "split" the table into the "high use" columns table
> & the "low use" columns table. We then used triggers to make sure
> that if we added a new / deleted an old row from one table, the
> corresponding row in the other was created / deleted.
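
For what it is worth, a minimal sketch of that kind of sync trigger, with
hypothetical names (high_use and low_use, both keyed by id; the mirror
trigger on low_use would be the same with the table names swapped):

-- keep a matching key row in low_use whenever a row is added to or
-- removed from high_use
CREATE FUNCTION sync_low_use() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    INSERT INTO low_use (id) VALUES (NEW.id)
    ON CONFLICT (id) DO NOTHING;   -- relies on low_use.id being unique
    RETURN NEW;
  ELSE  -- DELETE
    DELETE FROM low_use WHERE id = OLD.id;
    RETURN OLD;
  END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER high_use_sync
AFTER INSERT OR DELETE ON high_use
FOR EACH ROW EXECUTE PROCEDURE sync_low_use();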
>
> --
> There’s no obfuscated Perl contest because it’s pointless.
>
> —Jeff Polk
>
> Maranatha! <><
> John McKown
>
>


--
Adrian Klaver

