Hi,

On Tue, Dec 1, 2009 at 10:30 AM, Ryousei Takano <ryousei@xxxxxxxxx> wrote:
> Hi Matthew and Kashyap,
>
> Thanks for your comments!
>
> On Tue, Dec 1, 2009 at 1:11 AM, Matthew Wilcox <matthew@xxxxxx> wrote:
>> On Mon, Nov 30, 2009 at 08:07:58PM +0530, Desai, Kashyap wrote:
>>> > for i in 1 4 16 64 256 512 1024 2048 4096 8192 16384 32768 65536; do
>>> >     bs=$((BS * i))
>>> >     count=$((COUNT / i))
>>> >
>>> >     echo bs=$bs count=$count
>>> >     sudo mount /dev/sdb1 /media/test
>>> >     dd if=/dev/zero of=/media/test/foo bs=$bs count=$count
>>> >     sudo umount /media/test
>>> >     sleep 1
>>> >     sudo mount /dev/sdb1 /media/test
>>> >     dd if=/media/test/foo of=/dev/null bs=$bs count=$count
>>> Replace /media/test/foo with /dev/sdb1 and you will see the raw read performance.
>>> >     rm /media/test/foo
>>> >     sudo umount /media/test
>>> > done
>>> >
>>> This test is not a purely raw read/write test. In your test, file system
>>> performance is also included. During the read operation (sequential read),
>>> file system buffering gives a huge advantage to the data transfer.
>>
>> Both filesystem and block access will use the page cache. You should
>> use iflag=direct (or oflag=direct as appropriate) in order to bypass
>> the page cache.
>>
>> --
>> Matthew Wilcox                Intel Open Source Technology Centre
>> "Bill, look, we understand that you're interested in selling us this
>> operating system, but compare it to ours. We can't possibly take such
>> a retrograde step."
>>
>
> The bottleneck is in the file system.
> I retried dd with the direct I/O option. The performance improves
> with large block sizes.
> The crossover point is about 256 KB.
>
> bs (bytes)   write (MB/s)   read (MB/s)
> 1024                  9.5            9.7
> 4096                 34.4           28.3
> 16384                95.8           47.0
> 65536               186            121
> 262144              382            307
> 524288              417            366
> 1048576             449            380
> 2097152             497            467
> 4194304             511            532
> 8388608             498            560
> 16777216            523            545
> 33554432            555            541
> 67108864            554            543
>
> My page is also updated.
>
> Best regards,
> Ryousei
>

Here is the result on btrfs without direct I/O:

bs (bytes)   write (MB/s)   read (MB/s)
1024                 176            605
4096                 435            614
16384                641            620
65536                664            624
262144               676            618
524288               677            620
1048576              674            625
2097152              666            615
4194304              652            600
8388608              625            599
16777216             633            598
33554432             629            601
67108864             624            603

I got good performance. However, continued usage (reads and writes) causes the
write performance to decrease, independently of the block size.

Anyway, my first question is resolved.

Thanks,
Ryousei
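
For illustration, here is a minimal sketch of the quoted benchmark loop with
Matthew's suggestion applied (oflag=direct for the write pass, iflag=direct for
the read pass). The device, mount point, and block-size list are carried over
from the quoted script; the BS and COUNT values are assumptions, since the
thread does not show how they were set.

  #!/bin/bash
  # Sketch only: direct I/O variant of the quoted dd loop.
  BS=1024                  # assumed base block size; the smallest bs in the results is 1024
  COUNT=$((1024 * 1024))   # assumed base block count, scaled down as bs grows

  for i in 1 4 16 64 256 512 1024 2048 4096 8192 16384 32768 65536; do
      bs=$((BS * i))
      count=$((COUNT / i))

      echo bs=$bs count=$count
      sudo mount /dev/sdb1 /media/test
      # oflag=direct opens the output file with O_DIRECT, bypassing the
      # page cache on the write path.
      dd if=/dev/zero of=/media/test/foo bs=$bs count=$count oflag=direct
      sudo umount /media/test
      sleep 1
      sudo mount /dev/sdb1 /media/test
      # iflag=direct does the same for the input file on the read path.
      dd if=/media/test/foo of=/dev/null bs=$bs count=$count iflag=direct
      rm /media/test/foo
      sudo umount /media/test
  done

Note that O_DIRECT requires the block size to be a multiple of the device's
logical sector size (typically 512 bytes), which all of the sizes above satisfy.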