Hi, I would be happy if someone could help me understand snd_cmipci_pcm_prepare() in cmipci.c. Line 789 begins the calculation of the Channel 0 Frame Register 2 field, the base count of samples at the codec:

	/* buffer and period sizes in frame */
	rec->dma_size = runtime->buffer_size << rec->shift;

The register expects its count in sample granularity, which means rec->dma_size should be a number of samples. runtime->buffer_size is in frames (per the comment above).

Can someone explain this calculation? Why is buffer_size multiplied by two when the format size is greater than 16 bits? What is the connection between the format size and the sample count? Aren't those already correlated by the hardware?

Thanks in advance.

_______________________________________________
Alsa-devel mailing list
Alsa-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/alsa-devel