Re: Help - raid not assembling right on boot (was: Resizing a RAID1)


 



On Thu, 27 Jan 2011 06:20:39 -0600 Hank Barta <hbarta@xxxxxxxxx> wrote:

> Thanks for the suggestion:
> 
> =============================
> hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]
> 
> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect

These start numbers are multiples of 64K.

With 0.90 metadata, the superblock for a partition that starts at a multiple
of 64K and ends at the end of the device looks just like the superblock for
the whole device, so md cannot tell the two apart.

If you use 1.0 (or 1.1 or 1.2) metadata this problem will disappear.
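
For example, something along these lines (illustrative only, untested, using
the device names from your mail) shows the ambiguity and one way to move to
newer metadata:

  # With 0.90 metadata both of these may report a superblock with the same
  # UUID, which is why the whole disks can get picked up at boot:
  mdadm --examine /dev/sdb
  mdadm --examine /dev/sdb2

  # Recreating with 1.x metadata puts the superblock near the start of the
  # partition, so the whole-device check no longer matches.  Note that
  # --create starts a brand-new array: plan to copy the data onto it again.
  mdadm --stop /dev/md2
  mdadm --zero-superblock /dev/sdb2 /dev/sdc2
  mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 \
        /dev/sdb2 /dev/sdc2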

NeilBrown


> 
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
> hbarta@oak:~$
> =============================
> 
> Everything seems OK as far as I can see.
> 
> thanks,
> hank
> 
> 
> 
> On Thu, Jan 27, 2011 at 5:56 AM, Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx> wrote:
> > Hi,
> >
> > Show fdisk -l on both disks: are the partitions type 0xfd (Linux raid
> > autodetect)? If not, you will have exactly that problem.
> >
> > Justin.
> >
> > On Wed, 26 Jan 2011, Hank Barta wrote:
> >
> >> I followed the procedure below. Essentially removing one drive from a
> >> RAID1, zeroing the superblock, repartitioning the drive, starting a
> >> new RAID1 in degraded mode, copying over the data and repeating the
> >> process on the second drive.
> >>
> >> Everything seemed to be going well, with the new RAID mounted and the
> >> second drive syncing right along. However, on a subsequent reboot the
> >> RAID did not come up properly: I was no longer able to mount it, and I
> >> noticed that the resync had restarted. I found I could temporarily
> >> resolve this by stopping the RAID1 and reassembling it with the
> >> partitions specified explicitly (e.g. mdadm --assemble /dev/md2
> >> /dev/sdb2 /dev/sdc2). At that point the resync starts again and I can
> >> mount /dev/md2, but the problem crops up again on the next reboot. The
> >> output of 'mdadm --detail /dev/md2' differs between boot time and after
> >> reassembly: it looks like at boot the entire drives (/dev/sdb,
> >> /dev/sdc) are combined into the RAID1 rather than the intended
> >> partitions.
> >>
> >> I do not know where this is coming from. I tried zeroing the
> >> superblock for both /dev/sdb and /dev/sdc and mdadm reported they did
> >> not look like RAID devices.
> >>
> >> Results from 'mdadm --detail /dev/md2' before and after reassembly are:
> >>
> >> =============================
> >> root@oak:~# mdadm --detail /dev/md2
> >> /dev/md2:
> >>       Version : 00.90
> >>  Creation Time : Tue Jan 25 10:39:52 2011
> >>    Raid Level : raid1
> >>    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Raid Devices : 2
> >>  Total Devices : 2
> >> Preferred Minor : 2
> >>   Persistence : Superblock is persistent
> >>
> >>   Update Time : Wed Jan 26 21:16:04 2011
> >>         State : clean, degraded, recovering
> >> Active Devices : 1
> >> Working Devices : 2
> >> Failed Devices : 0
> >>  Spare Devices : 1
> >>
> >> Rebuild Status : 2% complete
> >>
> >>          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
> >>        Events : 0.13376
> >>
> >>   Number   Major   Minor   RaidDevice State
> >>      0       8       32        0      active sync   /dev/sdc
> >>      2       8       16        1      spare rebuilding   /dev/sdb
> >> root@oak:~#
> >> root@oak:~# mdadm --detail /dev/md2
> >> /dev/md2:
> >>       Version : 00.90
> >>  Creation Time : Tue Jan 25 10:39:52 2011
> >>    Raid Level : raid1
> >>    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Raid Devices : 2
> >>  Total Devices : 2
> >> Preferred Minor : 2
> >>   Persistence : Superblock is persistent
> >>
> >>   Update Time : Wed Jan 26 21:25:40 2011
> >>         State : clean, degraded, recovering
> >> Active Devices : 1
> >> Working Devices : 2
> >> Failed Devices : 0
> >>  Spare Devices : 1
> >>
> >> Rebuild Status : 0% complete
> >>
> >>          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
> >>        Events : 0.13382
> >>
> >>   Number   Major   Minor   RaidDevice State
> >>      0       8       34        0      active sync   /dev/sdc2
> >>      2       8       18        1      spare rebuilding   /dev/sdb2
> >> =============================
> >>
> >> Contents of /etc/mdadm/mdadm.conf are:
> >> =============================
> >> hbarta@oak:~$ cat /etc/mdadm/mdadm.conf
> >> # mdadm.conf
> >> #
> >> # Please refer to mdadm.conf(5) for information about this file.
> >> #
> >>
> >> # by default, scan all partitions (/proc/partitions) for MD superblocks.
> >> # alternatively, specify devices to scan, using wildcards if desired.
> >> DEVICE partitions
> >>
> >> # auto-create devices with Debian standard permissions
> >> CREATE owner=root group=disk mode=0660 auto=yes
> >>
> >> # automatically tag new arrays as belonging to the local system
> >> HOMEHOST <system>
> >>
> >> # instruct the monitoring daemon where to send mail alerts
> >> MAILADDR root
> >>
> >> # definitions of existing MD arrays
> >> #ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
> >>  #spares=2
> >>
> >> # This file was auto-generated on Wed, 26 Jan 2011 09:53:42 -0600
> >> # by mkconf $Id$
> >> hbarta@oak:~$
> >> =============================
> >> (I commented out the two lines following "definitions of existing MD
> >> arrays" because I thought they might be the culprit.)
> >>
> >> They seem to match:
> >> =============================
> >> hbarta@oak:~$ sudo mdadm --examine --scan
> >> ARRAY /dev/md0 level=raid1 num-devices=2 UUID=954a3be2:f23e1239:cd71bfd9:6916a14f
> >> ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
> >>  spares=2
> >> hbarta@oak:~$
> >> =============================
> >> except for a second RAID, which I added after installing mdadm.
> >>
> >> I have no idea how to fix this (*) and would appreciate any help.
> >>
> >>
> >> (*) All I can think of is to zero both entire drives and start from
> >> the beginning.
> >>
> >> On Tue, Jan 25, 2011 at 9:41 AM, Hank Barta <hbarta@xxxxxxxxx> wrote:
> >>>
> >>> My previous experiment with USB flash drives has not gone too far. I
> >>> can install Ubuntu Server 10.04 to a single USB flash drive and boot
> >>> my Eee PC 901 and Thinkpad T500 from it, but I cannot boot the Intel
> >>> D525MW from it. The Intel board will boot install media on USB flash,
> >>> but not a normal install. (This is an aside.) The point of using an
> >>> alternate boot device was to avoid having to fiddle with the two-drive
> >>> RAID1, where each drive has a single partition spanning the whole disk
> >>> and those partitions are combined into the RAID1.
> >>>
> >>> My desire to get this system up and running is overrunning my desire
> >>> to get the USB flash raid to boot. My strategy is to
> >>>  - remove one drive from the raid,
> >>>  - repartition it to allow for a system installation
> >>>  - create a new RAID1 with that drive and format the new data
> >>> partition (both arrays would then be RAID1, each degraded to one drive)
> >>>  - copy data from the existing RAID1 data partition to the new RAID1
> >>> data partition.
> >>>  - stop the old RAID1
> >>>  - repartition the other drive (most recently the old RAID1) to match
> >>> the new RAID1
> >>>  - add the second drive to the new RAID1
> >>>  - watch it rebuild and breathe a big sigh of relief.
> >>>
> >>> When convenient I can install Linux to the space I've opened up via
> >>> the above machinations and move this project down the road.
> >>>
> >>> That looks pretty straightforward to me, but I've never let that sort
> >>> of thing prevent me from cobbling things up in the past. (And at this
> >>> moment, I'm making a copy of the RAID1 to an external drive just in
> >>> case.) For anyone interested, I'll share the details of my plan down to
> >>> the command level, in case any of you can spot a problem I have
> >>> overlooked.
> >>>
> >>> A related question is: what are the constraints for partitioning the
> >>> drives to achieve the best performance? I plan to create a 10G partition on
> >>> each drive for the system. Likewise, suggestions for tuning the RAID
> >>> and filesystem configurations would be appreciated. Usage for the RAID
> >>> is backup for my home LAN as well as storing pictures and, more
> >>> recently, my video library, so there's a mix of large and small files.
> >>> I'm not obsessed with performance as most clients are on WiFi, but I
> >>> might as well grab the low hanging fruit in this regard.
> >>>
> >>> Feel free to comment on any aspects of the details listed below.
> >>>
> >>> many thanks,
> >>> hank
> >>>
> >>> This is what is presently on the drives:
> >>> ========================
> >>> root@oak:~# cat /proc/mdstat
> >>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> >>> [raid4] [raid10]
> >>> md1 : active raid1 sdc1[0] sda1[1]
> >>>      1953511936 blocks [2/2] [UU]
> >>>
> >>> unused devices: <none>
> >>> root@oak:~# fdisk -l /dev/sda /dev/sdc
> >>>
> >>> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
> >>> 255 heads, 63 sectors/track, 243201 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 512 bytes
> >>> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>   Device Boot      Start         End      Blocks   Id  System
> >>> /dev/sda1   *           1      243201  1953512001   fd  Linux raid autodetect
> >>>
> >>> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> >>> 255 heads, 63 sectors/track, 243201 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 512 bytes
> >>> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>   Device Boot      Start         End      Blocks   Id  System
> >>> /dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect
> >>> root@oak:~#
> >>> ========================
> >>>
> >>> One drive is a Seagate ST32000542AS and the other a Samsung HD204UI.
> >>> The Samsung is one of those with 4K sectors. (I think the Seagate may
> >>> be too.)
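> >>> (I suppose I can double-check the physical sector size with something
> >>> like
> >>>>
> >>>> cat /sys/block/sd[ac]/queue/physical_block_size
> >>>>
> >>> though drives that emulate 512-byte sectors don't always report 4096
> >>> there.)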
> >>>
> >>> Selecting /dev/sdc to migrate first (and following more or less the
> >>> guide on
> >>> http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html)
> >>>
> >>> Fail the drive:
> >>>>
> >>>> mdadm --manage /dev/md1 --fail /dev/sdc1
> >>>
> >>> Remove from the array:
> >>>>
> >>>> mdadm --manage /dev/md1 --remove /dev/sdc1
> >>>
> >>> Zero the superblock:
> >>>>
> >>>> mdadm --zero-superblock /dev/sdc1
> >>>
> >>> <Repartition drive with one 10G primary partition at the beginning and
> >>> a second primary partition using the remainder of the drive: /dev/sdc1
> >>> and /dev/sdc2>
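> >>>
> >>> For the repartitioning I'm thinking of something like this (untested;
> >>> fdisk in sector mode so both partitions start on 2048-sector, i.e.
> >>> 1MiB, boundaries to keep them aligned for 4K-sector drives, with both
> >>> set to type fd):
> >>>>
> >>>> fdisk -cu /dev/sdc
> >>>> # n, p, 1, first sector 2048, size +10G    -> new /dev/sdc1
> >>>> # n, p, 2, accept the defaults             -> new /dev/sdc2
> >>>> # t, 1, fd  and  t, 2, fd                  -> Linux raid autodetect
> >>>> # w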
> >>>
> >>> Create new RAID:
> >>>>
> >>>> mdadm --create /dev/md2 -n 2 --level=1 /dev/sdc2 missing
> >>>
> >>> Format:
> >>>>
> >>>> mkfs.ext4 /dev/md2
> >>>
> >>> Mount:
> >>>>
> >>>> mount /dev/md2 /mnt/md2
> >>>
> >>> Copy:
> >>>>
> >>>> rsync -av -H -K --partial --partial-dir=.rsync-partial /mnt/md1/ /mnt/md2/
> >>>
> >>> Stop the old RAID:
> >>>>
> >>>> mdadm --stop /dev/md1
> >>>
> >>> Zero the superblock:
> >>>>
> >>>> mdadm --zero-superblock /dev/sda1
> >>>
> >>> Repartition to match the other drive
> >>>
> >>> Add the second drive to the RAID:
> >>>>
> >>>> mdadm --manage /dev/md2 --add /dev/sda2
> >>>
> >>> Watch the resync complete.
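> >>> I'll probably just keep an eye on /proc/mdstat while it runs, e.g.:
> >>>>
> >>>> watch -n 60 cat /proc/mdstat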
> >>>
> >>> Done! (Except for doing something with the new 10G partition, but
> >>> that's another subject.)
> >>>
> >>> Many thanks for reading this far!
> >>>
> >>> best,
> >>> hank
> >>>
> >>> --
> >>> '03 BMW F650CS - hers
> >>> '98 Dakar K12RS - "BABY K" grew up.
> >>> '93 R100R w/ Velorex 700 (MBD starts...)
> >>> '95 Miata - "OUR LC"
> >>> polish visor: apply squashed bugs, rinse, repeat
> >>> Beautiful Sunny Winfield, Illinois
> >>>
> >>
> >>
> >>
> >> --
> >> '03 BMW F650CS - hers
> >> '98 Dakar K12RS - "BABY K" grew up.
> >> '93 R100R w/ Velorex 700 (MBD starts...)
> >> '95 Miata - "OUR LC"
> >> polish visor: apply squashed bugs, rinse, repeat
> >> Beautiful Sunny Winfield, Illinois
> >
> 
> 
> 


