> -----Original Message-----
> From: Mark Brown [mailto:broonie@xxxxxxxxxxxxxxxxxxxxxxxxxxx]
> Sent: Thursday, November 26, 2009 9:00 PM
> To: Aggarwal, Anuj
> Cc: 'Troy Kisky'; alsa-devel@xxxxxxxxxxxxxxxx; linux-omap@xxxxxxxxxxxxxxx; Arun KS
> Subject: Re: [alsa-devel] [PATCH] ASoC: AM3517: Fix AIC23 suspend/resume hang
>
> On Thu, Nov 26, 2009 at 08:52:08PM +0530, Aggarwal, Anuj wrote:
>
> > [Aggarwal, Anuj] I am still surprised how this could be an NFS writeout
> > issue, as we are seeing a consistent read/write rate of 2Mbps over tftp.
> > When the dd command is used for reads/writes to further check NFS
> > performance, 2Mbps for write and 4Mbps for read is observed. Does that
> > still mean NFS is the culprit? What could be tweaked in the audio/network
> > driver to avoid this problem? Any suggestions?
>
> There can also be issues with the way the data gets pushed into NFS
> interacting poorly - it's not just the raw data rate that's in play
> here, it's also things like how often the writes are done and how big
> they are. Possibly also overhead from interacting with the ethernet
> chip, but that's not normally an issue for anything modern.
>
> The fact that this only happens when NFS is in use seems a fairly clear
> pointer to an interaction there.

[Aggarwal, Anuj] We were able to fine-tune NFS and use arecord to capture
large files. But some more problems cropped up when we tried suspend/resume.
Basic playback works fine across suspend/resume, but capture, whether tried
independently or together with playback, causes a system-wide hang. I fixed
the infinite loop in the resume path, but I believe something else needs
cleaning up too. Any pointers?

> > [Aggarwal, Anuj] Any other utility to try capture which does error
> > recovery too?
>
> Not for the console off the top of my head, and TBH I don't really know
> how good the error handling is in the various apps. You could also try
> playing with the buffer size options in arecord.
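The thread does not say which NFS knobs were actually tuned; the usual client-side candidates are the per-RPC transfer sizes and the write mode. A hypothetical /etc/fstab entry (host, export path, and values are illustrative assumptions, not taken from this thread):

```
# Example only: host, export path and sizes are assumptions.
# rsize/wsize set the NFS transfer size per RPC; "async" lets the
# client batch writeback instead of forcing synchronous writes.
server:/export/rootfs  /media/nfs  nfs  rw,tcp,rsize=32768,wsize=32768,async  0  0
```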
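Mark's closing suggestion about arecord's buffer options can be tried along these lines. This is a sketch that needs real capture hardware; the device name, timings, and output path are assumptions, not values from this thread:

```shell
# A larger ALSA buffer gives the capture ring more slack when NFS
# writeout stalls the writer. Device hw:0,0 and the microsecond
# values below are examples; --buffer-time sizes the whole ring and
# --period-time controls how often the application is woken.
arecord -D hw:0,0 -f S16_LE -r 44100 -c 2 \
        --buffer-time=1000000 --period-time=125000 \
        /mnt/nfs/capture.wav
```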
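For reference, the dd-based throughput check mentioned in the thread looks like the following sketch. In the thread the target would be a file on the NFS mount; a local temp file is used here so the commands run anywhere, and the path and sizes are examples, not values from this discussion:

```shell
# Sequential-write check in the style of the dd test above.
# TARGET would normally live on the NFS mount (e.g. a file under the
# mount point); a local temp file is used here so this runs anywhere.
TARGET=$(mktemp)
# conv=fsync forces the data out before dd exits, so elapsed time
# reflects real write cost; drop the 2>/dev/null redirection to see
# dd's own throughput summary on stderr.
dd if=/dev/zero of="$TARGET" bs=1M count=8 conv=fsync 2>/dev/null
BYTES=$(wc -c < "$TARGET")
echo "$BYTES bytes written"
rm -f "$TARGET"
```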