On 02/12/14 08:14 PM, David McGuffey wrote:
Received the following message in mail to root:
Message 257:
From root@desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root@desk4.localdomain>
X-Original-To: root
Delivered-To: root@desk4.localdomain
From: mdadm monitoring <root@desk4.localdomain>
To: root@desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27 -0400 (EDT)
Status: RO
This is an automatically generated mail message from mdadm
running on desk4
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md0 : active raid1 dm-2[1]
      243682172 blocks super 1.1 [2/1] [_U]
      bitmap: 2/2 pages [8KB], 65536KB chunk

md1 : active raid1 dm-3[0] dm-0[1]
      1953510268 blocks super 1.1 [2/2] [UU]
      bitmap: 3/15 pages [12KB], 65536KB chunk

unused devices: <none>
Ran an mdadm query against both RAID arrays:
[root@desk4 ~]# mdadm --query --detail /dev/md0
/dev/md0:
        Version : 1.1
  Creation Time : Thu Nov 15 19:24:17 2012
     Raid Level : raid1
     Array Size : 243682172 (232.39 GiB 249.53 GB)
  Used Dev Size : 243682172 (232.39 GiB 249.53 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Dec  2 20:02:55 2014
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : desk4.localdomain:0
           UUID : 29f70093:ae78cf9f:0ab7c1cd:e380f50b
         Events : 266241

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1     253        3        1      active sync   /dev/dm-3
[root@desk4 ~]# mdadm --query --detail /dev/md1
/dev/md1:
        Version : 1.1
  Creation Time : Thu Nov 15 19:24:19 2012
     Raid Level : raid1
     Array Size : 1953510268 (1863.01 GiB 2000.39 GB)
  Used Dev Size : 1953510268 (1863.01 GiB 2000.39 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Dec  2 20:06:21 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : desk4.localdomain:1
           UUID : 1bef270d:36301a24:7b93c7a9:a2a95879
         Events : 108306

    Number   Major   Minor   RaidDevice State
       0     253        0        0      active sync   /dev/dm-0
       1     253        1        1      active sync   /dev/dm-1
[root@desk4 ~]#
It appears to me that device 0 (/dev/dm-2) on md0 has been removed because
of problems.
This is my first encounter with a RAID failure. I suspect I should
replace disk 0 and let the array rebuild itself.
Seeking guidance and a good source for the procedures.
Dave M
In short, buy a replacement disk of equal or greater size, create
matching partitions on it, and then use mdadm to add the replacement
partition back into the array.
An example command to add a replacement partition would be:
mdadm --manage /dev/md0 --add /dev/sda1
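
Spelled out a bit more, the whole swap might look like the sketch below.
The device names are assumptions (/dev/sda as the surviving disk,
/dev/sdb as the new one); your array members are device-mapper nodes, so
first work out which physical disk actually failed:

# If the dead member still shows up in the array, fail and remove it first
# (here slot 0 already reads "removed", so these two steps may be unnecessary)
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Copy the partition table from the surviving disk to the new disk
# (sfdisk handles MBR disks; for GPT, sgdisk is the usual tool)
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new partition back in and watch the resync run
mdadm --manage /dev/md0 --add /dev/sdb1
watch cat /proc/mdstat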
I strongly recommend creating a virtual machine with a pair of virtual
disks and simulating the replacement of the drive before trying it out
on your real system. In any case, be sure to have good backups
(immediately).
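
If you want something even lighter than a VM, you can rehearse the same
steps on loop devices; a minimal sketch (the backing files, the md9 name,
and free loop0/loop1 devices are all assumptions):

# Two small files stand in for two disks
dd if=/dev/zero of=/tmp/d0.img bs=1M count=100
dd if=/dev/zero of=/tmp/d1.img bs=1M count=100
losetup /dev/loop0 /tmp/d0.img
losetup /dev/loop1 /tmp/d1.img

# Build a throwaway RAID1 array on them
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1

# Fail one member, then practice the remove/re-add cycle
mdadm --manage /dev/md9 --fail /dev/loop0
mdadm --manage /dev/md9 --remove /dev/loop0
mdadm --manage /dev/md9 --add /dev/loop0
cat /proc/mdstat

# Tear it all down when finished
mdadm --stop /dev/md9
losetup -d /dev/loop0
losetup -d /dev/loop1
rm -f /tmp/d0.img /tmp/d1.img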
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?