On Tue, Sep 14, 2004 at 04:07:43PM +0200, Lukas Kubin wrote:
> Chunk Size : 128K
(...)
>    Number   Major   Minor   RaidDevice State
>       0       8       16        0      active sync   /dev/sdb
>       1       8       32        1      active sync   /dev/sdc
>       2       8       48        2      active sync   /dev/sdd
>       3       8       64        3      active sync   /dev/sde
>       4       8       80        4      active sync   /dev/sdf
>       5       8       96        5      active sync   /dev/sdg
>       6       8      112        6      active sync   /dev/sdh
>       7       8      128        7      active sync   /dev/sdi
>       8       8      144        8      active sync   /dev/sdj
>       9       8      160        9      active sync   /dev/sdk
>      10       8      176       10      active sync   /dev/sdl
>      11       8      192       11      active sync   /dev/sdm
>      12       8      208       12      active sync   /dev/sdn
>      13       8      224       13      active sync   /dev/sdo
>      14       8      240       14      active sync   /dev/sdp
>      15      65        0       15      active sync   /dev/sdq
>      16      65       16       16      spare         /dev/sdr

I have a question about performance: what is the cost of writing a
'data-unit' in such an array?

- Write data: 1 write
- Calculate new checksum: 14 reads
- Write checksum: 1 write

Right or wrong?

What is the granularity of those checksum updates? 512 bytes (sector
size)? 4k (page size on i386)? Chunk-size?

Does Linux do read-ahead on those 14 disk reads?

Thanks

-- 
Seb, autocuiseur
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
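
[Editorial sketch of the XOR arithmetic behind the question above. For a small write on an N-disk RAID-5 stripe there are two standard ways to get the new parity: "reconstruct-write" (read the untouched data chunks and XOR them with the new data, the scheme the question counts as 14 reads) and "read-modify-write" (read only the old data and old parity and XOR the difference back in: 2 reads + 2 writes). This is illustrative Python, not md's actual kernel code; function names are invented for the example.]

```python
def rmw_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Read-modify-write: 2 reads (old data + old parity), 2 writes.

    Uses the identity  new_parity = old_parity XOR old_data XOR new_data.
    """
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))


def reconstruct_parity(data_chunks):
    """Reconstruct-write: XOR every data chunk in the stripe together.

    With N disks this means reading the N-2 chunks not being written,
    then XORing them with the new data chunk.
    """
    parity = bytearray(len(data_chunks[0]))
    for chunk in data_chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)


# Both strategies must agree: update chunk 0 of a 3-data-chunk stripe.
d = [b'\x01\x02\x03\x04', b'\x10\x20\x30\x40', b'\xaa\xbb\xcc\xdd']
old_parity = reconstruct_parity(d)
new0 = b'\xff\x00\xff\x00'
assert rmw_parity(old_parity, d[0], new0) == reconstruct_parity([new0, d[1], d[2]])
```

Because XOR is associative and commutative, XORing the old data out of the old parity and the new data in gives the same result as recomputing parity from scratch, which is why an implementation can pick whichever strategy needs fewer I/Os for a given write size.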