On Sun, 2005-07-24 at 07:37 +1000, Neil Brown wrote:
> On Saturday July 23, mingz@xxxxxxxxxxx wrote:
> > 1048576 = 1024 * 1024 = 32 * 32768. :)
>       ^^^
> >
> > so it should be 32 stripe writes.
> >
> > ming
> >
> > On Fri, 2005-07-22 at 23:14 -0700, Tyler wrote:
> > > By my calculations, 1048756 is *not* a multiple of 32768 (32
>                             ^^^
> > > Kilobytes). Did I miscalculate?
> > >
> A typo somewhere :-)

yes, my stupid mistake. a typo here. @Tyler, sorry about this. :P

i checked again and what i did is

./write /dev/md0 1048576 1024 s

and i still see plenty of reads.

i built the raid with no resync, but that should be ok, right?

mkraid -c raidtab -R --dangerous-no-resync /dev/md0

my raidtab file is like this:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        chunk-size              32
        parity-algorithm        left-symmetric
        device                  /dev/sda
        raid-disk               0
        device                  /dev/sdb
        raid-disk               1
        device                  /dev/sdc
        raid-disk               2

> > > Regards,
> > > Tyler.
> > >
> > > Ming Zhang wrote:
> > >
> > > > i created a 32KB chunk size 3 disk raid5. then wrote to this disk
> > > > with a small program i wrote. i found that even when i write to it
> > > > in units of 1048756, which is a multiple of the stripe size, it
> > > > still shows a lot of reads in iostat.
> > > >
> > > > sda     605.05     387.88   35143.43        384      34792
> > > > sdb     611.11     323.23   35143.43        320      34792
> > > > sdc     602.02     387.88   35143.43        384      34792
> >
> I wouldn't call this "a lot of read".  The read requests are only 1%
> of the write requests.  So I would call it "some read".
>
> There is quite a lot of complexity between the 'write' system call and
> the data actually getting to the device.  Presumably the Linux VM
> system is flushing dirty data to the device at times other than the
> end of the write request.
> I think the block layer also automatically flushes devices every
> 200 msecs.
> This may be triggering flush requests which aren't stripe-aligned.
>
> You are certainly getting the vast majority of stripes written as
> whole stripes.
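[editor's note: the arithmetic in the thread can be sketched as below. The 64 KiB full-stripe size is an assumption derived from chunk-size 32 and two data disks in a 3-disk RAID-5; the thread itself only works in 32 KiB chunk units.]

```python
# Sketch of the numbers discussed above (not code from the thread).
# For RAID-5 with n disks and chunk size c, a full stripe carries
# (n - 1) data chunks; writes that are not multiples of that size
# force read-modify-write cycles, which show up as reads in iostat.

CHUNK = 32 * 1024                       # chunk-size 32 (KiB) from the raidtab
NR_DISKS = 3                            # nr-raid-disks 3
FULL_STRIPE = CHUNK * (NR_DISKS - 1)    # 64 KiB of data per full stripe

write_unit = 1048576                    # the corrected write size
typo_unit = 1048756                     # the original typo Tyler spotted

print(write_unit % FULL_STRIPE == 0)    # True: full-stripe aligned
print(typo_unit % FULL_STRIPE == 0)     # False: would cause RMW reads
print(write_unit // CHUNK)              # 32 chunk-sized pieces, as Ming said

# Neil's "only 1%" observation, from the iostat Blk_read / Blk_wrtn
# totals for sda (384 blocks read vs 34792 written):
print(round(384 / 34792 * 100, 1))      # roughly 1.1 percent
```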
> NeilBrown