optimal disk performance, ubuntu

I was reading over this thread while researching a fix for my problem.  I thought I would share what is happening on my machine, and hopefully it will give you some insight into your situation.  I don’t have the answer yet, but maybe I can provide a different view of what the problem is.

 

I am currently trying to get 4 PCI audio cards with 8 channels each (32 channels total) to work with a new motherboard (MSI Neo 3FR).  I also have an nvidia GeForce 9600 NGO video card, an Intel Core2 Extreme Quad processor, 4 GB of memory, and 4 SATA II disks installed.  I am currently running Fedora Core 8, with the CCRMA modifications for sound.  The audio cards are old Gadget Labs cards; there is a Linux driver, but it isn’t part of the ALSA tree yet, so I have to compile and install a module myself to make the Gadget Labs cards work.

 

I can easily get 3 cards working (24 channels), but I need to do some ‘black magic’ to get all 4 cards working: I am not getting any sound out of the fourth card.  So far it sounds familiar, doesn’t it? (not enough channels coming out.)  One benefit of the new Linux driver is that some basic diagnostics are already built into it, and you can see reports in /var/log/messages of how long the interrupt service routine and the transfers to and from the card’s memory buffer take.  This was originally added because the interface chip is so old that it has no DMA; a ‘software DMA’ had to be written, and this reporting was the mechanism for testing the card/driver and figuring out its performance.

 

For my particular system, because I can run a test and see the driver performance in /var/log/messages (I’m using Jack/Ardour for the ‘high level stuff’, and aplay, arecord, etc. for the low level stuff), I have seen that the video card driver is sending a bunch of interrupts down the PCI bus (even though the card sits on PCI Express).  The particular driver sending those interrupts is called nv, the open source driver for NVIDIA cards.  I get rid of all graphics by issuing ‘init 3’ instead of running at the normal runlevel 5 (graphics).
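
(A quick, generic way to double-check which devices are firing interrupts while a test runs, independent of the driver’s own diagnostics; nothing here is specific to this card, it is just snapshotting /proc/interrupts before and after the test and diffing the counters:

cat /proc/interrupts > /tmp/irq.before
sleep 10                                   # let the audio test run for a bit
cat /proc/interrupts > /tmp/irq.after
diff /tmp/irq.before /tmp/irq.after        # whichever counters jumped are the busy devices

the per-IRQ counts make it fairly obvious which driver is chattering.)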

 

As an example, here are some statistics for one card, playback only, while in graphics mode (runlevel 5):

[root@localhost tmp]# cat /var/log/messages |grep ISR

...

Dec 13 12:18:23 localhost kernel: api_gl824_stop ISR took between 47 and 137 samples

Dec 13 12:18:23 localhost kernel: api_gl824_stop Buffer transfers TO CARD within ISR took between 11 and 100 samples

Dec 13 12:18:23 localhost kernel: api_gl824_stop Buffer transfers FROM CARD within ISR took between 35 and 125 samples

(There is a 128-sample memory buffer on the card.  It is easier to deal in samples than in time, because the numbers then scale directly with the 44100 Hz or 48000 Hz sample rate.  What you are seeing is that I am about to run out of time on the PCI bus if I only have 128 samples to play with.)
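
(To put that budget into time, a quick back-of-the-envelope conversion, nothing card-specific:

echo "scale=2; 128*1000/48000" | bc    # 2.66 ms per 128-sample buffer at 48 kHz
echo "scale=2; 128*1000/44100" | bc    # 2.90 ms at 44.1 kHz

so an ISR that reports 137 ‘samples’ has already overrun the 128-sample window.)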

 

When I go to a non-graphics mode with init 3, I get the following results:

[root@localhost tmp]# cat /var/log/messages

...

Dec 13 22:03:22 localhost kernel: pcm_gl824_output_trigger 0

Dec 13 22:03:22 localhost kernel: api_gl824_stop Master card's clock position stopped at 1540

Dec 13 22:03:22 localhost kernel: api_gl824_stop ISR took between 2 and 4 samples

Dec 13 22:03:22 localhost kernel: api_gl824_stop Buffer transfers TO CARD within ISR took between 2 and 4 samples

Dec 13 22:03:22 localhost kernel: api_gl824_stop Buffer transfers FROM CARD within ISR took between 0 and 1 samples

Dec 13 22:07:48 localhost kernel: pcm_gl824_output_trigger 0

Dec 13 22:07:48 localhost kernel: api_gl824_stop Master card's clock position stopped at 1932

Dec 13 22:07:48 localhost kernel: api_gl824_stop ISR took between 10 and 12 samples

Dec 13 22:07:48 localhost kernel: api_gl824_stop Buffer transfers TO CARD within ISR took between 10 and 11 samples

Dec 13 22:07:48 localhost kernel: api_gl824_stop Buffer transfers FROM CARD within ISR took between 0 and 1 samples

Dec 13 22:07:48 localhost kernel: pcm_gl824_output_hw_free

Dec 13 22:07:48 localhost kernel: pcm_gl824_output_close

 

What this means is that, in graphics mode, some interrupt source was tying up the PCI bus.

 

Now, further testing in non-graphics mode revealed that there is still an issue with hard disk interrupts taking up a lot of PCI bus time.

I’m running arecord and aplay for about 30 seconds each to see what the performance is.  I’m recording and playing back 8 channels at once in 32-bit noninterleaved mode.  After I record 8 channels for about 30 seconds with arecord (creating one honkin’ big file) and play the file back with aplay, I get the following stats when looking at the output side of the card (the commands themselves are sketched just after the log):

 

Dec 14 09:39:15 localhost kernel: pcm_gl824_output_trigger 0

Dec 14 09:39:15 localhost kernel: api_gl824_stop Master card's clock position stopped at 397

Dec 14 09:39:15 localhost kernel: api_gl824_stop ISR took between 11 and 1498 samples

Dec 14 09:39:15 localhost kernel: api_gl824_stop Buffer transfers TO CARD within ISR took between 0 and 98 samples

Dec 14 09:39:15 localhost kernel: api_gl824_stop Buffer transfers FROM CARD within ISR took between 0 and 39 samples

Dec 14 09:39:15 localhost kernel: pcm_gl824_output_hw_free
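
(For reference, the arecord/aplay invocations look roughly like the following.  The device name hw:1 and the 48000 Hz rate are just placeholders, so substitute whatever ‘arecord -l’ reports for your card; I’m leaving out the noninterleaved-access details since those depend on the driver.)

arecord -l                                          # find the card/device number first
arecord -D hw:1 -f S32_LE -c 8 -r 48000 -d 30 /tmp/test-8ch.wav
aplay -D hw:1 /tmp/test-8ch.wav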

 

This means that there are still interrupts happening on the PCI bus while the card is trying to play.  I’m reading the link below to figure out the best next step, and I’m also looking for a modern equivalent to ‘hdparm’ (or maybe hdparm itself, if I can get it to work with my SATA disks).

 

Here’s a link that gives an approach to solving the problem:

http://www.gentoo.org/doc/en/articles/hardware-stability-p2.xml
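
(hdparm does seem to work against SATA disks on recent libata kernels, at least for read-only queries like these, so this is where I plan to start; all of these only read, so they should be safe to run:

hdparm -I /dev/sda            # identify info: DMA modes, NCQ, write cache state
hdparm -tT /dev/sda           # buffered vs. cached read timings
blockdev --getra /dev/sda     # current readahead setting

substitute your own device for /dev/sda, of course.)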

 

In my opinion, the problem I’m having is that the hard disks are ‘too fast’ and have ‘too much priority’, so they are taking up too much bandwidth on the PCI bus.  You may be seeing the same thing: it may be that instead of your disks being too slow, the real problem is that they are too fast.  I remember having very severe problems with disk interrupts when I first started using this software with SCSI disks on a Sun V20z (which are much faster than IDE or SATA).  Once I switched to a normal PC (with slower IDE disks), the underruns just disappeared.  Fortunately, with that setup you could actually hear the disks move and see the underruns happen at the same time.
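
(If that theory is right, the knobs to try are the standard ones for demoting disk I/O rather than speeding up the audio: pick a different elevator, drop the I/O priority of whatever is hammering the disk, and make sure jackd runs with realtime scheduling.  This is just a sketch of the usual suspects, not a proven fix for this card:

cat /sys/block/sda/queue/scheduler         # lists the elevators, current one in brackets
echo deadline > /sys/block/sda/queue/scheduler
ionice -c3 -p <pid>                        # idle I/O class for a background process (honored by CFQ)
chrt -f 70 jackd -R -d alsa                # SCHED_FIFO priority for jack

again, sda and the priority values are just examples.)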

 

If anyone else has been down this road and knows how to solve the underlying issue (SATA hard disk interrupts taking up too much PCI bandwidth), I am still searching for an answer and I’m VERY interested in hearing from you.

 

Thanks in advance,

 

Mike Mazarick

 

 


 

 

