On Thu, Jan 22, 2009 at 8:50 PM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
> On Thu, Jan 22, 2009 at 3:49 AM, Sandeep K Sinha
> <sandeepksinha@xxxxxxxxx> wrote:
>>
>> Help us with the benchmarking ?
>
> My first question would be: Why are you benchmarking at all?
>

A few lines from one of the previous mails:

"Suspiciously fast. At first glance I don't trust your benchmarking
methodology. Or are you using a ramdisk for both of your tiers?"

This is what made me think of benchmarking in the first place.

> I can see a basic benchmark just to prove you are actually moving data
> in a reasonably efficient way.
>
> Disk drives are notoriously slow, so once you hit 100% of max
> throughput a simple benchmark is rather pointless. Unless you are
> tuning your block allocation code to try and create defrag-ed files.
> I assume that functionality is down the road.
>

Yes, this can be done at some later point in time. We intend to offer it
as an optional feature to the user, handled through a switch.

> SSDs may be faster than an HDD, but to go really fast you will need to
> have both the original tier and the destination tier on SSD. (You can
> just partition one in half, I assume.)
>
> But the only production SSD that is fast to randomly write that I am
> aware of is the Intel line. Is that what you are testing with?
>
> If not, do you believe your destination blocks are defragged? Come to
> think of it, I don't even know what defrag means on an SSD? Given
> there is a mapping layer, how would one even try to do it?
>

Sorry, a small correction: it wasn't an SSD, it was an MTD device.

> Anyway the most important benchmark would be to simply assure you are
> maxing out the theoretical max of the storage devices you are testing
> with.
>

Yes, this is required for sure.

> Have you done that yet?
>
> FYI: A simple userspace dd can effectively do that in most cases. So
> that gives you a quick and dirty reference. If you are not at least
> as fast as userspace, you have broken code. If you are 2x faster than
> user space, I would be very suspicious you also have broken code. Or
> at least a broken benchmark.
>

I will provide you this comparative figure shortly; it's on the way.
(A rough sketch of the kind of dd-style baseline I have in mind is at
the end of this mail.)

> To me once you get that basic benchmark achieved, functional testing
> would be a much higher priority than benchmarking.
>

True.

> Greg
> --
> Greg Freemyer
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> First 99 Days Litigation White Paper -
> http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
>
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
>

--
Regards,
Sandeep.

"To learn is to change. Education is a process that changes the learner."
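
P.S. A minimal sketch of the userspace baseline Greg describes: a
sequential O_DIRECT write pass that reports MiB/s, which should land in
roughly the same ballpark as something like
"dd if=/dev/zero of=<file> bs=1M count=1024 oflag=direct".
The path, block size, and total transfer size below are only placeholders,
not our actual test configuration.

/* seqwrite.c - quick sequential-write throughput baseline (sketch only).
 * Build: gcc -O2 -o seqwrite seqwrite.c
 * Run:   ./seqwrite /path/on/the/tier/under/test
 * Note:  the target file is overwritten, so pick a scratch path.
 */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define BLOCK_SIZE  (1 << 20)   /* 1 MiB per write, like dd bs=1M      */
#define BLOCK_COUNT 1024        /* 1 GiB total,     like dd count=1024 */

int main(int argc, char **argv)
{
	const char *path = (argc > 1) ? argv[1] : "./seqwrite.tmp";
	struct timeval start, end;
	double secs;
	void *buf;
	long i;
	int fd;

	/* O_DIRECT needs an aligned buffer; 4096 covers common sector sizes. */
	if (posix_memalign(&buf, 4096, BLOCK_SIZE)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0xab, BLOCK_SIZE);

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	gettimeofday(&start, NULL);
	for (i = 0; i < BLOCK_COUNT; i++) {
		if (write(fd, buf, BLOCK_SIZE) != BLOCK_SIZE) {
			perror("write");
			return 1;
		}
	}
	fsync(fd);
	gettimeofday(&end, NULL);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_usec - start.tv_usec) / 1e6;
	printf("%d MiB in %.2f s = %.2f MiB/s\n",
	       BLOCK_COUNT, secs, BLOCK_COUNT / secs);

	close(fd);
	free(buf);
	return 0;
}

If the relocation path comes in well below what this (or plain dd) reports
for the same pair of devices, that points at the kernel-side code; if it
comes in well above, the benchmark itself is suspect, as Greg says.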