> I had these problems as well. There's a long thread a few months back
> where it was confirmed that I tried *everything*. Eventually I gave up
> and bought an IDE drive. Hey presto, problems gone. Now the only xruns
> happen when there's scsi disk activity, according to vmstat.

This is craziness, and turns things completely on their head. IDE, even with bus mastering, requires more I/O and IRQ turnarounds than SCSI to complete a given transaction to a drive. I've heard of PCI-bus-greedy graphics cards (in the pre-AGP era), but a bus-greedy SCSI controller takes the cake.

> The only time I didn't have these problems was with an MSI dual-Athlon
> board that had a 64-bit PCI slot where the 29160 was plugged in. I only
> realised later (after the motherboard was on a ship to Taiwan, for a
> different reason) that the 64-bit slot was on a separate bus from the
> 32-bit slot where the audio card (Terratec EWS88MT) was plugged in. That
> board didn't give me xruns either.

> I think that's a plausible
> explanation, but then again it may have been because there were 2 cpus.

Two CPUs don't make PCI bus contention any better. I also highly doubt the Adaptec SCSI driver disables IRQs for so long that having a free CPU around to handle other I/O activity, while the first is tied up in the Adaptec controller's IRQ handler, becomes an issue. I suspect the 64-bit PCI bus simply had enough free transactional capacity that the other devices could get onto the bus and transfer their I/O in a timely manner.

> On the other hand, I can't help wondering if the bus-greediness of the
> Adaptec controllers isn't a driver issue. I had a brief look at the
> driver sources, but I don't know enough about kernels and drivers to
> have made sense of them.

Look for an innocuous message from the kernel during the Adaptec driver's startup phase after booting (use "dmesg"), and keep an eye out for anything like "Setting PCI latency to XXXX".

=MB=
--
A focus on Quality.
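
P.S. A rough sketch of that dmesg check, in case it helps. The exact message wording is an assumption on my part; it varies between versions of the Adaptec driver, so the grep is deliberately loose:

```shell
# Filter boot messages for the driver's latency line. Assumes the
# Adaptec (aic7xxx) driver logs something like
# "Setting PCI latency to 64" at startup; wording varies by version.
check_latency() {
    grep -i "setting pci latency"
}

# Scan the kernel ring buffer; prints the line if the driver set it.
dmesg | check_latency || echo "no PCI latency message found"
```

If it shows up, the driver is programming the latency timer itself, which would support the "driver issue" theory.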