Hello Richard,

> > We have such a test system now: 4 x AMD Opteron 885 (dual-core),
> > 2.6 GHz, running with 32 GB RAM and fast 10,000 rpm SAS disks.
>
> How many disks? What RAID?

4 disks, RAID1.

> Is your disk I/O saturating? Try vmstat/iostat (or whatever the Solaris
> equivalent is).

Hard to say; due to account restrictions I cannot check more than what
this vmstat output gives me.

This is during loading of the data, 98% idle:

-bash-3.00$ vmstat 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap     free     re   mf  pi  po  fr de sr m1 m1 m1 m2    in   sy   cs us sy id
 0 0 0 31865080 27723492  248  43 138 118  79  0  5  1  1  1  0  6873 4235 1481  2  1 98
 0 0 0 31675384 27518448  603  14   0   0   0  0  0  0  0  0  0 15808 1695 1243  1  1 98
 0 0 0 31675384 27518436  609   0   0   0   0  0  0  0  0  0  0 16027 1703 1266  1  1 98
 0 0 0 31675372 27518408  602   0   0   0   0  0  0  0  0  0  0 15808 1682 1231  1  1 98
 0 0 0 31675356 27513660 1280   0   0   0   0  0  0  0  0  0  0 44234 1908  861  1  1 98
 0 0 0 31675356 27511444  369   0   0   0   0  0  0  0  0  0  0 13969 1270  952  1  1 99
 0 0 0 31675356 27510328  343   0   0   0   0  0  0  4  4  4  0 13053 1192  908  1  1 99
 0 0 0 31675344 27509688  426   1   0   0   0  0  0  0  0  0  0 14353 1371 1028  1  1 98
 0 0 0 31675332 27513672  552   0   0   0   0  0  0  0  0  0  0 17503 1675 1243  1  1 98
 0 0 0 31675332 27518468  570   0   0   0   0  0  0  0  0  0  0 17269 1737 1276  1  1 98
 0 0 0 31675320 27519528  561   0   0   0   0  0  0  0  0  0  0 15942 1709 1276  1  1 98
 0 0 0 31675320 27519440  598 169   0   0   0  0  0  0  0  0  0 15553 1738 1248  1  1 98
 0 0 0 31674428 27527404 3311 222   0 120 120  0  0  0  0  0  0 44244 3384 1259  1  2 97
 0 0 0 31675320 27539800 1665   0   0   0   0  0  0  0  0  0  0 37718 2151 1022  1  1 98
 0 0 0 31674644 27520952  408 148   0  80  80  0  0  0  0  0  0 13916 1459 1066  1  1 99
 0 0 0 31674196 27512960  403 146   0  40  40  0  0  0  0  0  0 13958 1425 1031  1  1 99
 0 0 0 31675320 27510932  437 222   0  80  80  0  0  0  0  0  0 17203 1550 1097  1  1 98
 0 0 0 31675320 27513352  402  75   0   0   0  0  0  0  0  0  0 16961 1425 1039  1  1 99
 kthr      memory            page            disk          faults      cpu
 r b w   swap     free     re   mf  pi  po  fr de sr m1 m1 m1 m2    in   sy   cs us sy id
 0 0 0 31674644 27516248  354 148   0  40  40  0  0  0  0  0  0 15408 1283  953  1  1 99
 0 0 0 31674644 27519036  345 296   0  80  80  0  0  0  0  0  0 14854 1297  915  1  1 99
 0 0 0 31675320 27519880  357   0   0   0   0  0  0  0  0  0  0 14000 1254  957  1  1 99
 0 0 0 31675320 27518796  335   0   0   0   0  0  0  0  0  0  0 28471 1575  886  1  1 98
 0 0 0 31675288 27515356  252   2   0   6   6  0  0  0  0  0  0 15763 1650 1245  1  1 98
 0 0 0 31675288 27517308  265   0   0   0   0  0  0  0  0  0  0 16911 1741 1285  1  1 98
 0 0 0 31675288 27519112  281   0   0   0   0  0  0  0  0  0  0 28401 1858 1167  1  1 98
 0 0 0 31674396 27518676  391 217   0   0   0  0  0  0  0  0  0 14281 1334  950  1  1 99
 0 0 0 31675236 27519480  386   3   0  80  80  0  0  0  0  0  0 14389 1305  971  1  1 99
 0 0 0 31675160 27517528  522   0   0   0   0  0  0  0  0  0  0 14958 1559 1142  1  1 99
 0 0 0 31674968 27515712  522   0   0   0   0  0  0  0  0  0  0 15553 1568 1159  1  1 98
 0 0 0 31674924 27518104  533   0   0   0   0  0  0  0  0  0  0 16687 1595 1173  1  1 99
 0 0 0 31674924 27517816  529   0   0   0   0  0  0  0  0  0  0 15607 1581 1174  1  1 99
 0 0 0 31674520 27516436  570   0   0   0   0  0  0  0  0  0  0 15876 1612 1172  1  1 99
 0 0 0 31674520 27516280  588   0   0   0   0  0  0  0  0  0  0 15998 1612 1192  1  1 98
 0 0 0 31674412 27515228 1312   0   0   0   0  0  0  0  0  0  0 46397 2165 1031  1  1 98
 0 0 0 31673996 27510360  318   0   0   0   0  0  0  0  0  0  0 12950 1082  849  0  1 99
 0 0 0 31673996 27508960  408 203   0  40  40  0  0  0  0  0  0 14094 1318  883  0  1 99
 0 0 0 31673996 27507568  443   0   0   0   0  0  0  0  0  0  0 14330 1373 1052  1  1 98

and so on. During the whole import of the data it stays mostly 99% idle.
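If I can get the admins to run it for me, my understanding is that the
Solaris check for disk saturation would look something like the following
(just a sketch; I could not verify it here because of the account
restrictions):

-bash-3.00$ iostat -xn 5

where a %b (percent busy) column staying near 100 for the RAID1 devices
would mean the disks are saturated even though the CPUs look idle in
vmstat.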
This is during "working" on indexes creation: -bash-3.00$ vmstat 5 kthr memory page disk faults cpu r b w swap free re mf pi po fr de sr m1 m1 m1 m2 in sy cs us sy id 0 0 0 31851400 27708172 252 44 129 111 74 0 5 1 0 0 0 7169 4006 1419 2 1 98 0 0 0 31598656 27425288 50 14 0 0 0 0 0 0 0 0 0 1608 72 156 13 0 87 0 0 0 31598656 27425224 6973 0 0 0 0 0 0 0 0 0 0 147690 3516 161 12 4 84 0 0 0 31598656 27424696 13194 0 0 3 3 0 0 0 0 0 0 279048 6620 147 12 7 81 0 0 0 31598656 27424116 14528 0 0 0 0 0 0 0 0 0 0 307360 7285 153 12 8 80 0 0 0 31598656 27423852 13503 0 0 0 0 0 0 0 0 0 0 285909 6767 149 12 7 81 0 0 0 31598656 27424584 3258 0 0 326 326 0 0 0 0 0 0 69522 1682 197 12 2 85 0 0 0 31598652 27425244 0 0 0 0 0 0 0 0 0 0 0 480 37 184 13 0 87 0 0 0 31598636 27425228 0 0 0 0 0 0 0 0 0 0 0 420 33 156 13 0 87 0 0 0 31598636 27425228 0 0 0 0 0 0 0 0 0 0 0 418 32 152 13 0 87 0 0 0 31598636 27425228 0 0 0 0 0 0 0 0 0 0 0 433 34 166 13 0 87 0 0 0 31598636 27425228 0 0 0 0 0 0 0 0 0 0 0 423 35 156 13 0 87 0 0 0 31598616 27425208 0 0 0 0 0 0 0 0 0 0 0 413 33 150 13 0 87 0 0 0 31598616 27425208 0 0 0 0 0 0 0 0 0 0 0 424 33 157 12 0 87 0 0 0 31598616 27425208 0 0 0 0 0 0 0 0 0 0 0 430 40 161 13 0 87 0 0 0 31598616 27425216 0 0 0 0 0 0 0 0 0 0 0 415 31 146 13 0 87 0 0 0 31598616 27425216 0 0 0 0 0 0 0 0 0 0 0 416 31 153 13 0 87 0 0 0 31598616 27425216 0 0 0 0 0 0 0 0 0 0 0 430 36 162 13 0 87 Looks fine, doesn't it? > I think your work_mem is too large (100MB) and maintenance_work_mem too > small, for a restore with 32GB of RAM anyway. I decreased the import time from >40 minutes to 32 minutes with changing the parameters to: shared_buffers = 450000 # kept same temp_buffers = 100000 # kept same work_mem = 32768 # now 32mb , before: 102400 = 100mb maintenance_work_mem = 2048000 # now 2.048mb, before: 32768 = 32mb Should I encrease these parameters? Thank you for helping, Andre