The IDE part referred to the link in the 3rd question.. sorry..

When I switched to single-partition-per-drive, all my problems went away...
It isn't cabling and load: a drive failed during nothing more than the
initial construction when the members were named /dev/sda /dev/sdb /dev/sdc,
but didn't fail with 20 background bonnie++ instances hammering away and
26k blocks per second transfer rates reported by vmstat when the members
were named /dev/sda1 /dev/sdb1 /dev/sdc1 ... (11 drives + spare).

I didn't believe it myself, but I tried *four* times to build the raid5
array, using two different entirely rebuilt kernels (2.4.11 and 2.4.17),
and failed each time with scsi errors. As *soon* as the array went onto
partitions instead of disks (one partition per disk), all the scsi problems
vanished. *nothing else changed*.. the array is basically the same size and
nice and fast. I burned it in over the last 24 hours.

So there must be something different about using /dev/sda vs using
/dev/sda1 (where partition 1 covers the whole disk)... and if you're saying
that raid5 members can be slightly different sizes without triggering any
bugs, that settles my remaining worry.. thanks

-Justin

On Tue, Feb 19, 2002 at 07:47:06PM +0100, Jakob Østergaard wrote:
> On Mon, Feb 18, 2002 at 11:37:31PM -0500, Justin wrote:
> > Hi,
> > (1) can /etc/raidtab use raw disks instead of partitions?
>
> What you mean is probably /dev/hda rather than /dev/hda1 - correct ?
> That's not a "raw disk" in the raw-disk sense, but the answer is "yes"
> nonetheless :)
>
> > With kernel 2.4.17 it builds most of the array on /dev/sda, /dev/sdb,
>
> Didn't the subject say IDE ?
>
> > then I get bogus scsi hardware sense errors during the
> > build (toward the end) .. on several attempts.. disks
> > drop out, then the array fails. persistent superblock is on.
>
> You have bad cabling.
>
> > Switching to partitions, (/dev/sda1, /dev/sda2..) 11+spare
> > worked fine! The enclosure was a Sun A1000, connected to
> > an HVD Adaptec, in case that makes a difference.
>
> If you use multiple partitions on the disks, the transfer rate
> will drop significantly (because of massive seeking) - and this
> could explain why the cable failure doesn't show up - I guess.
>
> > (2) do all partitions have to be *exactly* the same size
> > for the current raid5 code? 3 out of 12 18gb drives had a slightly
> > smaller number of cylinders in the single partition that
> > made up the array. The array built fine, and tests fine, but
> > I'm concerned about this..
>
> No - for RAID-5 (and -1, and -4), the code will use the smallest
> size as the size for all underlying partitions/disks.
>
> RAID-0 and RAID-linear will utilize differently sized disks fully.
>
> > (3) opinions sought, if any, on this IDE terabyte rack
> > mount array? http://www.raidweb.com/ide.html that is connected
> > to the host via ultra160 scsi.. ? it would seem to save a lot
> > of the hassle of finding the right combination of kernel,
> > motherboard and controllers (to say nothing of wiring
> > nightmares) to build a 1-terabyte array from cheap
> > 120gb+ IDE drives.. (obvious comments on $ cost not reqd!)
>
> How about a big enclosure (so that you can mount all the disks internally
> and still use cabling within spec) ?
>
> With 160G drives, that doesn't have to be a lot of drives after all.
>
> --
> ................................................................
> : jakob@unthought.net     : And I see the elder races,         :
> :.........................: putrid forms of man                :
> : Jakob Østergaard        : See him rise and claim the earth,  :
> : OZ9ABN                  : his downfall is at hand.           :
> :.........................:............{Konkhra}...............:
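
For reference, a minimal /etc/raidtab along the lines of the layout Justin
ended up with (RAID-5, one partition per disk, 11 active members plus one
spare) might look roughly like the sketch below. The /dev/md0 name, the
sda1..sdl1 device names, the chunk size and the parity algorithm are
illustrative assumptions, not taken from his actual configuration:

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           11
        nr-spare-disks          1
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              64

        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        raid-disk               3
        device                  /dev/sde1
        raid-disk               4
        device                  /dev/sdf1
        raid-disk               5
        device                  /dev/sdg1
        raid-disk               6
        device                  /dev/sdh1
        raid-disk               7
        device                  /dev/sdi1
        raid-disk               8
        device                  /dev/sdj1
        raid-disk               9
        device                  /dev/sdk1
        raid-disk               10
        device                  /dev/sdl1
        spare-disk              0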
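
On the size question, a quick way to see how much the members actually
differ is to print each one's size; sfdisk -s reports it in 1K blocks. The
sd[a-l]1 names below are an assumption matching the sketch above:

    for part in /dev/sd[a-l]1; do
        printf '%s\t' "$part"
        sfdisk -s "$part"       # member size in 1K blocks
    done

Since RAID-5 trims every member down to the smallest one, the usable
capacity is roughly (11 - 1) x smallest-member-size here, so a few missing
cylinders on three of the drives costs only a tiny sliver of space.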
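
The burn-in Justin describes could be reproduced with something along these
lines; the /mnt/raid mount point, the 2 GB file size per instance and the
instance count are assumptions, and bonnie++ requires -u when run as root:

    # assumes the new array already has a filesystem mounted on /mnt/raid
    for i in $(seq 1 20); do
        mkdir -p /mnt/raid/burn$i
        bonnie++ -d /mnt/raid/burn$i -s 2048 -u root &   # one instance per directory
    done

    # watch blocks in/out from another terminal while they run:
    #   vmstat 5

    wait    # returns once all 20 instances have finished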