I made some strange observations building a RAID0 on an Intel ICH9R fakeraid controller(?). Things seem to work, but before I really use this partition I'd like to make sure that things are indeed in order.

Hardware: Motherboard Intel DP35DP with 3 disks attached to the onboard ICH9R SATA/RAID controller, which in the BIOS is set to "Raid". In the RAID utility (the thing one enters with Ctrl-I), one disk is configured as a plain disk; the remaining two disks are configured as a RAID0 set.

Software: Clean install of Ubuntu 7.10 ("gutsy gibbon"); the kernel is 2.6.22-based, and dmraid is version 1.0.0.rc13, installed after the initial reboot.

The OS was installed to the first (= normal) disk (/dev/sda), so I had none of the problems addressed in the typical fakeraid howtos. Nevertheless, during my several attempts (including several reinstalls) I made a few observations which make me wonder ...

Observation 1: The kernel always recognizes three disks (/dev/sda, /dev/sdb, /dev/sdc), even when dmraid is not (yet) installed. From what I had read, I had expected that without dmraid only the "normal" disk (/dev/sda) would be recognized. When Ubuntu installs dmraid, it activates it automatically and /dev/mapper/isw_geiabefhb_<somename> appears; issuing dmraid -ay by hand doesn't seem to do any harm either.

Observation 2: All my attempts to use gparted and parted to partition the RAID0 area failed one way or another; this may be entirely my fault, since I am mostly used to fdisk. fdisk works, but requires a reboot before the partition becomes visible (some error 22 ...). Since I want one large scratch space, I just made one primary partition on /dev/mapper/isw_geiabefhb_<somename>.

Observation 3: After the reboot I have /dev/mapper/isw_geiabefhb_<somename>1 as expected, but I suddenly also have a /dev/sdb1 partition, i.e., a partition on the first of the two disks forming the RAID array.
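As a sanity check on observation 3, the size mismatch in the boot warnings quoted further down can be verified with a bit of shell arithmetic. This is just a sketch; the two sector counts are taken verbatim from those kernel messages:

```shell
# Sector counts taken from the boot messages:
limit=145226112            # size of /dev/sdb alone, in 512-byte sectors
want=290438983             # sector the kernel tried to access via sdb1
two_disks=$((2 * limit))   # approximate size of the two-disk RAID0 set

echo "one disk : $limit sectors"
echo "RAID0 set: ~$two_disks sectors"
echo "access at: sector $want"

# The requested sector lies beyond a single disk but inside the striped
# set -- consistent with a partition table describing the RAID0 device
# having been read from the bare component disk.
[ "$want" -gt "$limit" ]     && echo "beyond one disk"
[ "$want" -le "$two_disks" ] && echo "within the RAID0 set"
```

That is, the offending accesses fit the two-disk set but not /dev/sdb by itself, which matches the suspicion below that the RAID0 partition table ended up readable on the bare first disk.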
Before the fdisk operation this partition was not there (I had even zeroed out the disk label to make sure). And during boot I get scary warnings:

[ 47.192000] sd 2:0:0:0: [sdb] 145226112 512-byte hardware sectors (74356 MB)
[ 47.192008] sd 2:0:0:0: [sdb] Write Protect is off
[ 47.192010] sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 47.192022] sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 47.192051] sd 2:0:0:0: [sdb] 145226112 512-byte hardware sectors (74356 MB)
[ 47.192058] sd 2:0:0:0: [sdb] Write Protect is off
[ 47.192060] sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 47.192072] sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 47.192075] sdb: sdb1
[ 47.201271] sdb: p1 exceeds device capacity
[ 47.201617] sd 2:0:0:0: [sdb] Attached SCSI disk
[ 47.263907] sdb: rw=0, want=290438983, limit=145226112
[ 47.263909] Buffer I/O error on device sdb1, logical block 36304864
....
[ 47.264954] sdb: rw=0, want=290439135, limit=145226112
[ 47.264960] sdb: rw=0, want=290439079, limit=145226112
[ 47.264965] sdb: rw=0, want=290439127, limit=145226112
....

So it seems that the partitioning info for the RAID device got written onto the first disk; obviously, when that partition is checked against the physical size of the single device, things don't match. I just did a fairly large compile on the disk and things went smoothly (no kernel warnings and no problems for the compile), but then I was hardly using the disk(s) to capacity.

So, to make this short: Is this what should have happened? Can I disregard these messages as a nuisance, _or_ is there something fishy going on ...

Please let me know if you need further information. At the moment the machine is not in production use, so I can try things / install a different OS etc.

Many thanks in advance,

Stefan

--
Stefan Boresch
Institute for Computational Biological Chemistry
University of Vienna, Waehringerstr.
17, A-1090 Vienna, Austria
Phone: +43-1-427752715
Fax: +43-1-427752790

_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list