RAID: can't hot-add disk: / = md0, / = hdc1 at the same time?

Hello RAIDers,

I have two large IDE disks, each partitioned identically, with each pair of 
matching partitions combined into an mdX device. I can't get one of the arrays 
fully up; the other seven are fine, but the root array is not. 

The raidtab is:
	raiddev                 /dev/md0
	raid-level              1
	nr-raid-disks           2
	nr-spare-disks          0
	chunk-size              4

	device                  /dev/hdc1
	raid-disk               0

	device                  /dev/hda1
	raid-disk               1

<snip>


/proc/mdstat says:
<snip>
	md1 : active raid1 ide/host0/bus0/target0/lun0/part2[0] ide/host0/bus1/target0/lun0/part2[1]
	      4003712 blocks [2/2] [UU]
      
	md0 : active raid1 ide/host0/bus1/target0/lun0/part1[0]
	      4003648 blocks [2/1] [U_]
      
	unused devices: <none>

The pattern is the same throughout: IDE disks A and C are the two halves of 
each mirror. It seems on md0 that disk A (/dev/hda1, or 
/dev/ide/host0/bus0/target0/lun0/part1) is not active. Yet trying to 
raidhotadd it says:
	# raidhotadd /dev/md0 /dev/hda1
	/dev/md0: can not hot-add disk: invalid argument.

Or using the devfs filename:
	# raidhotadd /dev/md0 /dev/ide/host0/bus0/target0/lun0/part1
	/dev/md0: can not hot-add disk: invalid argument.
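
The userspace error ("invalid argument") isn't very informative, so the kernel
log is the next place to look; a minimal check, just the tail of the ring
buffer, is:

	# dmesg | tail -n 20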


Sure enough, the dmesg log says:
	md: trying to hot-add ide/host0/bus0/target0/lun0/part1 to md0 ... 
	md: can not import ide/host0/bus0/target0/lun0/part1, has active inodes!
	md: error, md_import_device() returned -16
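
Return code -16 is -EBUSY: the kernel thinks the partition is still in use by
something. A quick way to see what is holding a device open, assuming fuser
from psmisc is available, would be something like:

	# fuser -vm /dev/ide/host0/bus0/target0/lun0/part1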

And checking /proc/mounts shows:
	/dev/root.old /initrd cramfs rw 0 0
	/dev/ide/host0/bus0/target0/lun0/part1 / reiserfs rw 0 0
	proc /proc proc rw 0 0
	devpts /dev/pts devpts rw 0 0
	/dev/md1 /usr reiserfs rw 0 0
	/dev/md2 /var reiserfs rw 0 0
	/dev/md3 /home reiserfs rw 0 0
	none /dev devfs rw 0 0
	/dev/md5 /usr/local reiserfs rw 0 0
	/dev/md0 / reiserfs rw 0 0


So it appears that the root partition is mounted *twice*, once as a 
raw disk, and once as md0. df says:
	# df -k
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/md0               4003520     97616   3905904   3% /
/dev/md1               4003584    323320   3680264   9% /usr
/dev/md2               4003584    301300   3702284   8% /var
/dev/md3               4003520   1353040   2650480  34% /home
/dev/md5              62035860  16205828  45830032  27% /usr/local
/dev/md0               4003520     97616   3905904   3% 


So /dev/hda1 does not show up on its own in df, even though /proc/mounts 
lists it as mounted on /.
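
One way to confirm which of the two entries really backs the live root is to
compare the device number reported for / with the block devices themselves.
A sketch, assuming the stand-alone stat(1) utility is installed, and using the
device names from my setup:

	# stat / | grep Device
	# ls -l /dev/md0 /dev/ide/host0/bus0/target0/lun0/part1

Whichever major/minor pair matches the Device: field is the filesystem the
kernel is actually serving / from.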


How can I unmount hda1, other than by rebooting, so that it disappears from 
/proc/mounts? And which entry is really the root FS?

Kernel is 2.4.8; raidtools is version 0.90.20010914-11. It's Debian/testing.  
Doco is http://www.james.rcpt.to/programs/debian/raid1/.

Many thanks,

  James Bromberger
-- 
 James Bromberger <james_AT_rcpt.to> www.james.rcpt.to
 Remainder moved to http://www.james.rcpt.to/james/sig.html
 Au National Linux Conference 2003: http://conf.linux.org.au/
