Problem with mdadm 3.2.5

Hi

FYI, we followed the steps below; at the end you can see the problem with the file system.

We set up a RAID array on 8 hard disks of 1 TB each, with 7 disks as RAID devices and 1 disk as a hot spare; the array was created successfully.
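For reference, the array was created with a command along the following lines. This is a sketch based on the command quoted further down in the thread; the `--meta-version` flag shown there is not one mdadm accepts, so the standard `--metadata` spelling is used here:

#mdadm -C /dev/md0 --metadata=0.90 -l5 -n7 -x1 /dev/sd[a-h]1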

#parted -s /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  60.0GB  60.0GB  xfs          primary
 2      60.0GB  6001GB  5941GB  xfs          primary


We then created two partitions, md0p1 and md0p2, and put an XFS filesystem on each; both show up as expected.
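A sketch of how such a layout is typically created; the exact start/end values here are assumptions read back from the parted output above:

#parted -s /dev/md0 mklabel gpt
#parted -s /dev/md0 mkpart primary xfs 1049kB 60.0GB
#parted -s /dev/md0 mkpart primary xfs 60.0GB 6001GB
#mkfs.xfs /dev/md0p1
#mkfs.xfs /dev/md0p2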

#cat /proc/partitions
major minor  #blocks  name
  31        0       8192 mtdblock0
  31        1     131072 mtdblock1
   8        0  976762584 sda
   8        1  976760832 sda1
   8       16  976762584 sdb
   8       17  976760832 sdb1
   8       32  976762584 sdc
   8       33  976760832 sdc1
   8       48  976762584 sdd
   8       49  976760832 sdd1
   8       64  976762584 sde
   8       65  976760832 sde1
   8       80  976762584 sdf
   8       81  976760832 sdf1
   8       96  976762584 sdg
   8       97  976760832 sdg1
   8      112  976762584 sdh
   8      113  976760832 sdh1
   9        0 5860563456 md0
 259        0   58604544 md0p1
 259        1 5801957376 md0p2

***************************************************************************************************
                                                                         IT'S FINE UP TO HERE
***************************************************************************************************

Next, we failed hard disk 1 (/dev/sda1):

# mdadm -f /dev/md0 /dev/sda1
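If the disk were being physically replaced, the usual follow-up would be to remove it from the array and later add its replacement; a typical sequence, not run here since the hot spare takes over automatically:

# mdadm /dev/md0 --remove /dev/sda1
# mdadm /dev/md0 --add /dev/sda1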

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Mar 27 11:10:24 2013
     Raid Level : raid5
     Array Size : 5860563456 (5589.07 GiB 6001.22 GB)
  Used Dev Size : 976760576 (931.51 GiB 1000.20 GB)
   Raid Devices : 7
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Thu Mar 28 01:03:57 2013
          State : active, degraded, recovering
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 256K
 Rebuild Status : 0% complete
           UUID : debadbe0:49b4fe90:24472787:29621eca (local to host mpc8536ds)
         Events : 0.15
    Number   Major   Minor   RaidDevice State
       7       8      113        0      spare rebuilding   /dev/sdh1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
       6       8       97        6      active sync   /dev/sdg1

The array is now recovering: the spare /dev/sdh1 is rebuilding in place of the failed disk.

#cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
      5860563456 blocks level 5, 256k chunk, algorithm 2 [7/6] [_UUUUUU]
      [>....................]  recovery =  0.1% (1604164/976760576) finish=324.2min speed=50130K/sec
      bitmap: 0/8 pages [0KB], 65536KB chunk


#parted -s /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  60.0GB  60.0GB  xfs          primary
 2      60.0GB  6001GB  5941GB  xfs          primary


While the array was still recovering, we unmounted the partitions to simulate a power-failure/restart scenario.

#umount /dev/md0p[12]


We then tried to mount the partitions again, but both attempts failed:


#mount /dev/md0p1 /mnt/md0p1
UDF-fs: No partition found (1)
Filesystem "md0p1": Disabling barriers, trial barrier write failed

# mount /dev/md0p2 /mnt/md0p2
grow_buffers: requested out-of-range block 18446744072428564479 for device md0p2
grow_buffers: requested out-of-range block 18446744072428564223 for device md0p2
grow_buffers: requested out-of-range block 18446744072428564478 for device md0p2
grow_buffers: requested out-of-range block 18446744072428564222 for device md0p2
grow_buffers: requested out-of-range block 18446744072428564480 for device md0p2
grow_buffers: requested out-of-range block 18446744072428564224 for device md0p2
grow_buffers: requested out-of-range block 18446744072428564477 for device md0p2
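Those block numbers sit just below 2^64, i.e. they look like negative offsets that wrapped around, which would mean the kernel now believes md0p2 is smaller than the XFS filesystem on it claims to be. One way to compare the two sizes (a diagnostic sketch, not part of the original report; xfs_db comes with xfsprogs):

#blockdev --getsize64 /dev/md0p2
#xfs_db -r -c 'sb 0' -c 'p dblocks' -c 'p blocksize' /dev/md0p2

If dblocks * blocksize is larger than the byte size blockdev reports, the partition (or the array underneath it) has shrunk from the filesystem's point of view.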


#parted -s /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  60.0GB  60.0GB  xfs          primary
 2      60.0GB  6001GB  5941GB               primary

The file system is no longer shown for partition 2.
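One quick way to check whether the XFS superblock itself survived is to look for the "XFSB" magic in the first sector of the partition; a diagnostic sketch, not something from the original report:

#dd if=/dev/md0p2 bs=512 count=1 2>/dev/null | od -c | head -2

An intact XFS filesystem begins with the bytes X F S B in its very first sector.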


The disk recovery then completed:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdh1[0] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
      5860563456 blocks level 5, 256k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/8 pages [4KB], 65536KB chunk

#parted -s /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  60.0GB  60.0GB  xfs          primary
 2      60.0GB  6001GB  5941GB               primary

The file system column for partition 2 is still empty.


Please tell me if I did anything wrong.


Thanks
Tarak Anumolu





------- Original Message -------
Sender : Sam Bingner<sam@xxxxxxxxxxx>
Date : Mar 27, 2013 19:51 (GMT+09:00)
Title : Re: Need some information about mdadm 3.2.5

On Mar 26, 2013, at 11:28 PM, Hans-Peter Jansen wrote:

> Hi Tarak,
> 
> On Wednesday, 27 March 2013 05:17:19, Tarak Anumolu wrote:
>> Hi
>> 
>> My name is TARAK.
>> 
>> We got some problem in using mdadm 3.2.5.
>> 
>> We are trying to do RAID operation on 8 harddisks each of size 1TB with 7
>> harddisks as raid devices and 1 hard disk as spare device.
> 
>> Command : mdadm -C /dev/md0 -f --meta-version 0.9 -l5 -n7 -x1 /dev/sd[a-h]1
> 
> Obviously, you already created partitions on your harddisks.
> 
>> After the RAID operation is completed when we check the status,
> 
> Beware, the raid creation is a long process, working in background.
> 
> To check your md, use: "cat /proc/mdstat". This is the most important command 
> in using linux md.
> 
>> We are
>> getting the following errors.
> 
>> # parted -s /dev/md0 print
>> Model: Linux Software RAID Array (md)
>> Disk /dev/md0: 6001GB
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>> Number  Start   End     Size    File system  Name     Flags
>>  1      1049kB  60.0GB  60.0GB  xfs          primary
>>  2      60.0GB  6001GB  5941GB               primary
> 
> Now, you want to access the md partition as a harddisk?!?
> 
> What you're trying to do makes little sense. Think of the md partition as an 
> ordinary one. Partitioning happens *before* md creation (if necessary at all, 
> as you can create your mds directly on the harddisks, as long as you need just 
> one md, and don't want to boot from it). The *next* logical step here is 
> creating a filesystem on the md partition. 
> 
> E.g.: mkfs.xfs /dev/md0
> 
> Then assign a mount point (in /etc/fstab), and use it. Call back (to this ML), 
> when you reached this point, as there are a few more important steps to follow 
> for maximum enjoyment.
> 
> Cheers,
> Pete
> 
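For reference, the /etc/fstab entry Pete describes would look something like the following; the mount point is just an example:

/dev/md0   /mnt/raid   xfs   defaults   0   0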


I would only add that if you do want to split it into smaller sections, you may be interested in LVM on RAID.  I also wonder why you chose metadata 0.9, as that limits you in the future if you ever wish to use large devices (>2TB or 4TB, depending on your kernel).
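For anyone wanting the LVM-on-RAID setup Sam suggests, a minimal sketch; the volume group and logical volume names are made up:

#pvcreate /dev/md0
#vgcreate vg_raid /dev/md0
#lvcreate -L 60G -n lv_small vg_raid
#lvcreate -l 100%FREE -n lv_big vg_raid
#mkfs.xfs /dev/vg_raid/lv_small
#mkfs.xfs /dev/vg_raid/lv_big

This avoids partitioning the md device directly, and the logical volumes can later be resized without touching a GPT label.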




