Putting kernelnewbies back in the cc line.  (Please don't drop Linux
kernel mailing lists from replies.)

On Tue, Oct 7, 2008 at 2:56 PM, pradeepkumar soman
<pradeep2481@xxxxxxxxx> wrote:
> Hi Greg,
> I am working with the UDF file system (4KB block size) in Linux. I
> tried to read a 1GB file with different record sizes. In performance
> tests, the read throughput goes down as the record size increases: I
> get nearly 70MB/sec with a 32KB record size, but for record sizes of
> 128KB and above I get only 31MB/sec. What may be the reason for this
> performance degradation?
>
> Regards,
> Pradeepkumar S

Pradeepkumar,

I have done the majority of my tests with hard drives and tapes and
none at all with the UDF file system, but I'll hazard a guess.

==> Pure guess from here on

The UDF file system is using a 32KB readahead buffer cache.  When you
read the data from userspace with that same block size, you get
optimal performance.  If you use a block size larger than 32KB from
userspace, the readahead cache cannot function as designed, and
instead a bug in the logic causes it to degrade.

i.e., I think you have identified a bug in UDF, but again, I know
nothing about UDF, so I'm just guessing.  Below my sig I've put a
quick harness you could use to test that theory.

Good luck pursuing that.

Greg
--
Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
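P.S. Here is a minimal sketch of the kind of test program I'd use to
chase this.  It's mine, not anything from UDF or from your setup: it
just times a sequential read at a caller-chosen record size, and can
optionally hint the kernel's readahead via posix_fadvise() so you can
see whether readahead is actually the variable.  Build it with
"gcc -O2 -o readtest readtest.c -lrt".

    /*
     * readtest.c - hypothetical throughput harness, a sketch only.
     * Times a sequential read() loop at a given record size; an
     * optional third argument hints the kernel's readahead:
     * "seq" enlarges the readahead window, "rand" disables it.
     */
    #define _XOPEN_SOURCE 600
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <file> <record-bytes> [seq|rand]\n",
                    argv[0]);
            return 1;
        }

        size_t bs = strtoul(argv[2], NULL, 0);
        char *buf = malloc(bs);
        int fd = open(argv[1], O_RDONLY);
        if (!buf || fd < 0) {
            perror("setup");
            return 1;
        }

        /* Optional readahead hint for this file descriptor. */
        if (argc > 3) {
            int advice = strcmp(argv[3], "rand") == 0 ?
                         POSIX_FADV_RANDOM : POSIX_FADV_SEQUENTIAL;
            int err = posix_fadvise(fd, 0, 0, advice);
            if (err)
                fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        long long total = 0;
        ssize_t n;
        while ((n = read(fd, buf, bs)) > 0)   /* one read() per record */
            total += n;
        if (n < 0) {
            perror("read");
            return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) +
                      (t1.tv_nsec - t0.tv_nsec) / 1e9;

        printf("%lld bytes in %.2fs -> %.1f MB/s\n",
               total, secs, total / secs / 1e6);
        close(fd);
        free(buf);
        return 0;
    }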
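To get meaningful numbers, drop the page cache between runs so you are
measuring the disc and the readahead path rather than memory (as root:
sync; echo 3 > /proc/sys/vm/drop_caches), then compare something like
(my example path, substitute your own):

    ./readtest /mnt/udf/bigfile 32768
    ./readtest /mnt/udf/bigfile 131072
    ./readtest /mnt/udf/bigfile 131072 rand

If the 32KB/128KB gap disappears once readahead is disabled ("rand"),
that points at the readahead logic; if the gap persists, the problem
is elsewhere.  You can also inspect or change the device-wide
readahead window via /sys/block/<dev>/queue/read_ahead_kb.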