On Thu, Mar 18, 2010 at 11:47 PM, Nicolae Mihalache <mache@xxxxxxxxxxxx> wrote:
> Actually my problem, as written in the subject of the mail, was that the
> sequential read was slow. Somebody suggested using a file instead of
> the raw partition. If the file were detected as sparse (who does that??),
> it would be even faster to read, not slower.
>
> nicolae
>
>
> On 03/18/2010 03:40 AM, Michael Evans wrote:
>> First off, why not use a hard disk benchmark utility (their names
>> escape me aside from Bonnie++) which has these issues worked out?
>>
>> Second, if you absolutely must try to do a benchmark with basic tools
>> (that buffer and use the cache), try this:
>>
>> dd if=/dev/zero bs=1M count=10000 | tr '\0' 't' > testfile
>> dd if=testfile of=/dev/null bs=1M
>>
>> You may note that you'll be writing a file of t's instead of a file
>> of zeros; my method should not be detected as sparse, whereas the
>> case with zeros probably will be detected as sparse and simply not
>> stored.
>>
>> If in doubt, you can check the size of the file on disk with ls -ls.
>> If I'm reading the output correctly, the leftmost column (size on
>> disk) is in kilobyte units, even on a 4 KB block ext4 filesystem.

Some versions of standard system utilities may do that by default. They only have to ensure that the data they produce has the same content; the on-disk structure does not have to be identical. I've been told (by developers on the GNU project that ships it) that dd is supposed to do it, at least in recent versions, and other utilities such as cp probably have it on by default as well.
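
If you want to see what sparse detection looks like in practice, here's a quick check you can run (this assumes GNU coreutils; the file names are just examples I picked): compare a file's apparent size against the blocks actually allocated for it.

# create a file that is one big hole, so no data blocks are allocated
truncate -s 1G sparse.img

# GNU cp defaults to --sparse=auto, so the copy should keep the hole
cp sparse.img copy.img

# leftmost column of ls -ls is the space actually allocated on disk
ls -ls sparse.img copy.img

# du shows allocated size; --apparent-size shows the nominal file size
du -h sparse.img copy.img
du -h --apparent-size sparse.img copy.img

A file stored sparsely shows far fewer allocated blocks than its apparent size, while the testfile written through tr above should show the two roughly equal.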