Re: OHSM benchmarking [Was Re: Copying Data Blocks]

On Thu, Jan 22, 2009 at 11:18 PM, Sandeep K Sinha
<sandeepksinha@xxxxxxxxx> wrote:
> On Thu, Jan 22, 2009 at 8:50 PM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
>> On Thu, Jan 22, 2009 at 3:49 AM, Sandeep K Sinha
>> <sandeepksinha@xxxxxxxxx> wrote:
>>
>>>
>>> Help us with the benchmarking?
>>
>> My first question would be: Why are you benchmarking at all?
>>
>
> A few lines from one of the previous mails.
>
> "Suspiciously fast.  At first glance I don't trust your benchmarking
> methodology.  Or are you using a ramdisk for both of your tiers?"
>
> This made me think of something like that.
>
>> I can see a basic benchmark just to prove you are actually moving data
>> in a reasonably efficient way.
>>
>> Disk drives are notoriously slow, so once you hit 100% of max
>> throughput, a simple benchmark is rather pointless.  Unless you are
>> tuning your block allocation code to try and create defragmented files.
>> I assume that functionality is down the road.
>>
>
> Yes, this can be done at some later point in time. We intend to
> offer it as an optional feature to the user, handled through a switch.
>
>> SSDs may be faster than HDDs, but to go really fast you will need to
>> have both the original tier and the destination tier on SSD.  (You can
>> just partition one in half, I assume.)
>>
>> But the only production SSD I am aware of that is fast at random
>> writes is the Intel line.  Is that what you are testing with?
>>
>> If not, do you believe your destination blocks are defragged?  Come to
>> think of it, I don't even know what defrag means on an SSD.  Given
>> there is a mapping layer, how would one even try to do it?
>>
>
> Sorry, a small correction: it wasn't an SSD, it was an MTD device.
>
>
>
>> Anyway, the most important benchmark would be to simply ensure you are
>> reaching the theoretical maximum throughput of the storage devices you
>> are testing with.
>>
>
> Yes, this is required for sure.
>
>> Have you done that yet?
>>
>> FYI: A simple userspace dd can effectively do that in most cases.  So
>> that gives you a quick and dirty reference.  If you are not at least
>> as fast as userspace, you have broken code.  If you are 2x faster than
>> userspace, I would be very suspicious that you also have broken code,
>> or at least a broken benchmark.
>>
>
> Will provide you this comparative figure in some time. It's on the way.
>
>> To me, once you get that basic benchmark done, functional testing
>> would be a much higher priority than benchmarking.
>>
> True.

So, all you want is to run the most basic benchmark you can that
will still provide reliable info.

I would work with a test file or two that is at least twice as big as
your RAM, so the page cache cannot mask the real disk I/O.
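
If you want extra insurance against caching effects, you can also flush
the page cache between runs.  Just a sketch; it assumes you are running
as root on a kernel recent enough to have drop_caches (2.6.16+):

sync                                   # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries and inodes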

Then first do a dd test to see how fast it works from pure userspace, i.e.:

dd if=/dev/zero of=<testfile-on-tier1> bs=1M count=2000    # build a ~2 GB test file

time (dd if=<testfile-on-tier1> of=<testfile-on-tier2> bs=1M; sync)

Repeat at least a couple of times to verify the times are stable, e.g.
with a small loop like the one below.
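
Something along these lines (just a sketch; substitute your own mount
points and file names for the <...> placeholders):

for i in 1 2 3; do
    sync; echo 3 > /proc/sys/vm/drop_caches     # start each run with a cold cache
    time (dd if=<testfile-on-tier1> of=<testfile-on-tier2> bs=1M; sync)
    rm -f <testfile-on-tier2>                   # let the next run allocate fresh blocks
done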

Then test/time your move code for a file of the same size.
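
I don't know what the user-facing trigger for your OHSM relocation looks
like, so treat "ohsm_move" below as a made-up placeholder for whatever
command or ioctl you actually use; the measurement itself has the same
shape as the dd one:

time (ohsm_move <testfile-on-tier1> <tier2>; sync)    # ohsm_move is a placeholder, not a real command

Divide the file size by the elapsed time to get MB/s, and compare that
number directly against the dd figure above.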

What other issues are you concerned about?

Greg
-- 
Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
