Re: isw on Dell 8400 with 4 drives

Heinz Mauelshagen wrote:

On Sat, Nov 27, 2004 at 12:33:24AM +0100, Matthijs Melchior wrote:


L.S.,


The RAID config on my Dell 8400, as set by the BIOS and Win-XP, is:
--------------------------------------------------------------

Intel(R) Application Accelerator RAID Option ROM v4.0.0.6211
Copyright(C) 2003-04 Intel Corporation.  All Rights Reserved.

RAID Volumes:
ID Name   Level          Strip  Size     Status    Bootable
0  ARRAY  RAID1(Mirror)  N/A    149.0GB  Normal    Yes
1  ARRAY2 RAID1(Mirror)  N/A    74.5GB   Degraded  Yes
2  ARRAY3 RAID0(Stripe)  128KB  149.0GB  Failed    No


Physical Disks:
Port Drive Model       Serial #         Size     Type/Status(Vol ID)
0    WDC WD1600JD-75H  WD-WMAL91264840  149.0GB  Member Disk(0)
1    WDC WD1600JD-00H  WD-WMAL91985532  149.0GB  Member Disk(1,2)
2    WDC WD1600JD-75H  WD-WMAL91409154  149.0GB  Member Disk(0)
3    WDC WD1600JD-00H  WD-WMAL91887644  149.0GB  Non-RAID Disk


Press <CTRL-I> to enter Configuration Utility.........

------------------------------------------------------------------------------
RAID volume 0 is a RAID1 set of two full disks, on ports 0 and 2.
RAID volume 1 is a RAID1 set of one half each of two disks, on ports 1 and 3.
RAID volume 2 is a RAID0 set of the other half of the same two disks, on ports 1 and 3.


The RAID info on disk 3 has been erased, which causes the Degraded and Failed
states for volumes 1 and 2. Linux has been installed on disk 3.


The Linux system is running kernel 2.6.10-rc1, which has a working ahci module.

'dmraid' has the following view of this machine:

++ dmraid -V
dmraid version:                 1.0.0-rc5f (2004.11.24)
dmraid library version: 1.0.0-rc5f (2004.11.24)
device-mapper version:  4.1.0
++ dmraid -b
/dev/sda:    312500000 total, "WD-WMAL91264840"
/dev/sdb:    312581808 total, "WD-WMAL91905532"
/dev/sdc:    312500000 total, "WD-WMAL91409154"
/dev/sdd:    312581808 total, "WD-WMAL91887644"
++ dmraid -r
/dev/sda: isw, "isw_ebdcfejfgd", GROUP, ok, 312499998 sectors, data@ 0
/dev/sdb: isw, "isw_dbiacddhid", GROUP, ok, 312581805 sectors, data@ 0
/dev/sdc: isw, "isw_ebdcfejfgd", GROUP, ok, 312499998 sectors, data@ 0
++ dmraid -s -g
isw: unsupported map state 0x2 on /dev/sdb for ARRAY2
ERROR: adding /dev/sdb to RAID set "isw_dbiacddhid"
ERROR: removing RAID set "isw_dbiacddhid"
*** Superset
name   : isw_ebdcfejfgd
size   : 624999996
stride : 0
type   : GROUP
status : ok
subsets: 1
devs   : 2
spares : 0
--> Subset
name   : isw_ebdcfejfgd_ARRAY
size   : 312499200
stride : 256
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

------------------------------------------------------------------------------
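(Sanity check on the sizes: the mirror subset above is 312499200 sectors, i.e. 312499200 x 512 bytes ≈ 149.0GB as the Option ROM counts it, matching volume 0; half a disk is then about 74.5GB, matching volume 1, and the stripe over two such halves is again about 149.0GB, matching volume 2.)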


It is unfortunate that dmraid does not yet know about degraded RAID1 sets....

I want to help add support for this, and for resynchronizing the device,
to dmraid.

Please give me some hints on how to proceed.



Matthijs,

see the metadata definitions for raid_dev and raid_set, and how those get
filled in in isw.c and used by activate.c.


OK, I will investigate that....
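
To make the idea concrete, here is a minimal sketch (NOT the actual rc5f
code: the enum values and the struct are my own stand-ins for the real
isw.h/raid_dev definitions) of how the map-state check could accept a
degraded volume instead of dropping the whole set:

#include <stdio.h>

enum isw_map_state {            /* assumed values: 0x2 is apparently  */
        ISW_MAP_NORMAL   = 0,   /* what the Option ROM writes for a   */
        ISW_MAP_DEGRADED = 2,   /* degraded volume, judging from the  */
        ISW_MAP_FAILED   = 3    /* "unsupported map state 0x2" above  */
};

struct isw_vol {                /* hypothetical stand-in for raid_dev */
        const char *name;
        enum isw_map_state map_state;
};

/*
 * Return 1 if the volume may be added to a raid_set, 0 otherwise.
 * dmraid currently accepts only the normal state; also accepting the
 * degraded state would let a one-legged RAID1 set be activated
 * instead of being removed with an error.
 */
static int isw_vol_usable(const struct isw_vol *v)
{
        switch (v->map_state) {
        case ISW_MAP_NORMAL:
                return 1;
        case ISW_MAP_DEGRADED:
                fprintf(stderr, "isw: %s is degraded, activating with "
                        "the surviving disk(s)\n", v->name);
                return 1;       /* the proposed change */
        default:
                fprintf(stderr, "isw: unsupported map state 0x%x on %s\n",
                        (unsigned) v->map_state, v->name);
                return 0;
        }
}

int main(void)
{
        struct isw_vol v = { "ARRAY2", ISW_MAP_DEGRADED };

        return isw_vol_usable(&v) ? 0 : 1;
}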

Support for degraded RAID1 set resynchronization can be worked in; support
for RAID3 cannot, because there is no RAID3 device-mapper target yet.

It would be a start to just activate a degraded RAID1 set with just 1 disk,
and not remove it.
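
For instance, the surviving leg could be mapped by hand (untested; the
sector count and the start offset of the ARRAY2 data on /dev/sdb are
guesses, the real values would have to come from the isw metadata):

# ARRAY2 is 74.5GB, roughly 156249600 sectors; assume its data starts
# at sector 0 of the surviving member /dev/sdb.
echo "0 156249600 linear /dev/sdb 0" | dmsetup create isw_dbiacddhid_ARRAY2

A linear target is enough for read-write access to one leg; a proper fix
in dmraid would still have to mark the set degraded and resynchronize the
replaced disk later.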

I do not understand where this remark about RAID3 is coming from;
currently, I am only interested in RAID0 and RAID1.

Heinz



--
Regards,
----------------------------------------------------------------  -o)
Matthijs Melchior                                       Maarssen  /\\
mmelchior@xxxxxxxxx                                  Netherlands _\_v
---------------------------------------------------------------- ----

