I have two RAID arrays, both RAID 0.
For some weeks the array md0 has been reporting errors indicating that one of its disks was going to die, so I just used the array as a temporary partition until it failed.
Today was the day, but something else seems to have happened as well.
The other RAID array is now broken too, and I can't get it working again, so I'm looking for suggestions on how to get it back up and running. (This array should be fully working; its disks are healthy and new.)
Here is the info on the RAID arrays, taken from /etc/raidtab:
raiddev /dev/md0
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hde
    raid-disk               0
    device                  /dev/hdh
    raid-disk               1

raiddev /dev/md1
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hdf
    raid-disk               0
    device                  /dev/hdg
    raid-disk               1
The array md0 is the one with the broken disk.
md1 is the array that is supposed to be working; it has never had any errors, and its disks are relatively new.
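For reference, here is the same layout written as an mdadm.conf fragment, in case that is easier to read than the raidtab (this is just my translation, assuming the device names from raidtab above are correct):

```
# Assumed equivalent of the raidtab above, for use with mdadm
DEVICE /dev/hde /dev/hdf /dev/hdg /dev/hdh
ARRAY /dev/md0 level=raid0 num-devices=2 devices=/dev/hde,/dev/hdh
ARRAY /dev/md1 level=raid0 num-devices=2 devices=/dev/hdf,/dev/hdg
```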
Here is the error output, taken from dmesg:
md: raidstart(pid 4682) used deprecated START_ARRAY ioctl. This will not be supported beyond 2.6
md: could not lock unknown-block(34,64).
md: could not import unknown-block(34,64), trying to run array nevertheless.
md: autorun ...
md: considering hde ...
md:  adding hde ...
md: created md0
md: bind<hde>
md: running: <hde>
md0: setting max_sectors to 8, segment boundary to 2047
blk_queue_segment_boundary: set to minimum fff
raid0: looking at hde
raid0:   comparing hde(78150656) with hde(78150656)
raid0:   END
raid0:   ==> UNIQUE
raid0: 1 zones
raid0: FINAL 1 zones
raid0: too few disks (1 of 2) - aborting!
md: pers->run() failed ...
md :do_md_run() returned -22
md: md0 stopped.
md: unbind<hde>
md: export_rdev(hde)
md: ... autorun DONE.
md: raidstart(pid 4700) used deprecated START_ARRAY ioctl. This will not be supported beyond 2.6
md: could not lock unknown-block(34,0).
md: could not import unknown-block(34,0), trying to run array nevertheless.
md: autorun ...
md: considering hdf ...
md:  adding hdf ...
md: created md1
md: bind<hdf>
md: running: <hdf>
md1: setting max_sectors to 8, segment boundary to 2047
blk_queue_segment_boundary: set to minimum fff
raid0: looking at hdf
raid0:   comparing hdf(156290816) with hdf(156290816)
raid0:   END
raid0:   ==> UNIQUE
raid0: 1 zones
raid0: FINAL 1 zones
raid0: too few disks (1 of 2) - aborting!
md: pers->run() failed ...
md :do_md_run() returned -22
md: md1 stopped.
md: unbind<hdf>
md: export_rdev(hdf)
md: ... autorun DONE.
So I really need a suggestion as to what is wrong with md1, and how I can fix it.
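Since dmesg warns that the START_ARRAY ioctl used by raidstart is deprecated, here is what I was planning to try next. This is only a sketch and assumes mdadm is installed; it just prints the commands so nothing touches the disks until someone confirms this is the right approach:

```shell
#!/bin/sh
# Dry run: print the mdadm commands I would use instead of raidstart.
# (Assumption: mdadm is installed; device names come from my raidtab above.)
plan_assemble() {
    md=$1
    shift
    for dev in "$@"; do
        # Read the persistent superblock off each member disk first.
        echo "mdadm --examine $dev"
    done
    # Assemble the array explicitly instead of via the deprecated ioctl.
    echo "mdadm --assemble $md $*"
}

plan_assemble /dev/md1 /dev/hdf /dev/hdg
```

If the --examine output shows a valid superblock on both /dev/hdf and /dev/hdg, I would then run the printed commands for real.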
Best Regards,
Geir Råness