> I made a few tests to try to point out when high loads happen (which
> are not really scientific :). Note that between each test I made sure
> the load average got back below 0.15.
>
> But first, here are my 2 questions:
> 1- Should I really expect loads that high using that driver?

Remember that uptime-style load also measures processes stuck in
uninterruptible sleep, so heavy disk I/O counts as load even without CPU
usage. High CPU usage shouldn't be a problem for any DMA-based disk.

With the newer ICH controllers, and especially if your disk does NCQ, you
should see better performance with libata, as the libata drivers support
AHCI and NCQ, which allows multiple outstanding commands.

> "Multi-Platform E-IDE driver Revision: 7.00alpha2" and detected both my
> hd & dvdrw as scsi devices BUT finally ended up in a oops:
> pivot_root: No such file or directory
> /sbin/init: 432: Cannot open dev/console: No such file
> Kernel panic - not syncing: Attempted to kill init

That's really one for the Debian lists; it sounds as if Debian still
isn't using disk labels in the initrd.

> TEST 2: 10000 1mb files (medium size files):
> (Note that I stopped the test at the 4134th file)
> -----------------------------------------------
> Max load average (a simple while loop + sleep 5 + cat /proc/loadavg)
> 16.36 12.64 7.86 2/122 3602

If you are doing all the I/O in parallel then that isn't unexpected -
disk performance is very, very seek-dependent, and a lot of parallel I/O
will cause a lot of seeking. If the operations are occurring serially
then this is a bit odd and would warrant more investigation.

I don't, however, think this is likely to be a disk-layer problem, so I
doubt libata vs. old IDE makes much difference here - a bit because of
NCQ, but not a lot.

Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
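For reference, the "simple while loop + sleep 5 + cat /proc/loadavg"
sampler mentioned in the test might look something like the sketch below
(a guess at the original; the sample count and interval are assumptions,
the original presumably ran until the copy test finished):

```shell
#!/bin/sh
# Sample the load average every few seconds while a test runs.
# Each /proc/loadavg line is: 1min 5min 15min running/total last-pid
samples=3   # assumed bound; the original loop likely ran indefinitely
interval=5
i=0
while [ "$i" -lt "$samples" ]; do
    cat /proc/loadavg
    i=$((i + 1))
    sleep "$interval"
done
```

Tracking the first (1-minute) field over the run is what produced the
16.36 peak quoted above.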
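The parallel-versus-serial distinction can be demonstrated with a small
shell sketch (illustrative only, not the poster's original test; file
count and sizes are arbitrary): the serial loop lets the elevator stream
writes, while the parallel one forces the head to seek between many open
files, which shows up as I/O wait and hence load.

```shell
#!/bin/sh
# Compare serial and parallel small-file writes in a temp directory.
dir=$(mktemp -d)

# Serial: one writer at a time, writes can be streamed.
for i in $(seq 1 100); do
    dd if=/dev/zero of="$dir/serial.$i" bs=1k count=16 2>/dev/null
done

# Parallel: all writers at once - far more seeking on rotational disks.
for i in $(seq 1 100); do
    dd if=/dev/zero of="$dir/par.$i" bs=1k count=16 2>/dev/null &
done
wait

ls "$dir" | wc -l   # 200 files written in total
rm -rf "$dir"
```

Timing each phase with `time`, and watching /proc/loadavg during the
parallel phase, makes the seek penalty visible on a rotational disk.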