RE: mdadm raid6 recovery status

Thanks. Is it safe to run fsck -n (and then fsck -y) on /dev/md2 while sdg has just been added to md2, is still in "spare rebuilding" status, and recovery is only 4% complete? BTW, we have all of the data backed up.
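
For concreteness, this is the sequence we have in mind (a sketch only; per the e2fsck man page, -n opens the filesystem read-only and answers "no" to all prompts, while -y answers "yes" and actually repairs):

# cat /proc/mdstat        <- check md2 recovery progress first
# fsck -n /dev/md2        <- read-only check, should change nothing
# fsck -y /dev/md2        <- this one writes to the filesystem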

Background:
When we assembled md2 we forced the rest of the drives and did not include this drive. After 24 hours, when I tried to assemble all of the drives into md2, I got the "md: kicking non-fresh sdg from array!" message in dmesg, and the drive was removed from md2 (as shown by mdadm --detail). I then ran:
# mdadm /dev/md2 --add /dev/sdg
mdadm: re-added /dev/sdg
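
In the meantime we are watching the rebuild with (illustrative; "Rebuild Status" is the line mdadm --detail prints while recovery is running):

# cat /proc/mdstat
# mdadm --detail /dev/md2 | grep -i 'rebuild status'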

Sundar

________________________________________
From: NeilBrown [neilb@xxxxxxx]
Sent: Thursday, March 29, 2012 3:41 PM
To: Paramasivam, Meenakshisundaram
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: Re: mdadm raid6 recovery status

On Thu, 29 Mar 2012 18:47:14 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@xxxxxxxxx> wrote:

>
> Clarification:
> >>should I do new array creation
> I meant running newfs on the assembled 12 TB array and restoring the data from backup, to resolve the "df" reporting problem.

I would suggest asking on
    linux-ext4@xxxxxxxxxxxxxxx

Be sure to give lots of details: kernel version, etc.
It would be worth running
   fsck -n /dev/md2
first and seeing if it reports anything strange.
Maybe just an fsck will fix it.
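
For example (illustrative only; capturing the output gives you something concrete to post to the ext4 list):

   fsck -n /dev/md2 2>&1 | tee /tmp/fsck-n.log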

NeilBrown


>
> ________________________________________
> From: Paramasivam, Meenakshisundaram
> Sent: Thursday, March 29, 2012 1:33 PM
> To: NeilBrown
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: RE: mdadm raid6 recovery status
>
> Good news: Got ALL of our data back. [Actually it was 4.96 TB, not 7 TB.]
> mdadm is a good one.
>
> Bad news: "df" is reporting the wrong usage, while "du" shows the full size.
> # df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> # du -sk /myarray
> 5326133556      /myarray
> #
>
> I never ran du or looked in depth at the files & folders, and simply got misled by the reported "df" usage; the data was there all along. We definitely want "df" for the array's filesystem (ext3) to report correctly.
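>
> One way to compare what the superblock claims with what du counts (illustrative; tune2fs -l only reads the ext3 superblock and modifies nothing):
>
> # tune2fs -l /dev/md2 | grep -iE 'block count|free blocks'
>
> "df" derives Used/Available from the superblock's summary counters while "du" walks the directory tree, so stale counters would explain a mismatch like this.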
>
> Now that we are backing up all of the data (at 400 Mbps) over the network, I want to know whether the "df" reporting can be fixed easily or whether I should create a new array and restore the data from backup.
>
> We are ordering a new RAID card, just to be on the safe side.
>
> Sundar
>
> ________________________________________
> From: NeilBrown [neilb@xxxxxxx]
> Sent: Wednesday, March 28, 2012 7:27 PM
> To: Paramasivam, Meenakshisundaram
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: mdadm raid6 recovery status
>
> On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
> <mparamas@xxxxxxxxx> wrote:
>
> > [root@in-rady-neuro9 ~]# df -kl /myarray
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/md2             11537161976    162432 10950945196   1% /myarray
> > Should be 7TB of used space.
>
> This is bad.  Something has happened to your filesystem.
> It is almost as though someone ran "mkfs" on the array.
> I don't know much about recovery after such an action, but I doubt you
> will get much back.
>
> >
> > [root@in-rady-neuro9 ~]# cat /proc/partitions
> > major minor  #blocks  name
> >
> >    8        0  438960128 sda
> >    8        1     512000 sda1
> >    8        2   51200000 sda2
> >    8        3  387247104 sda3
> >    8       16 1953514584 sdb
> >    8       32 1953514584 sdc
> >    8       48 1953514584 sdd
> >    8       64 1953514584 sde
> >    8       80 1953514584 sdf
> >    8       96 1953514584 sdg
> >    8      112 1953514584 sdh
> >    8      128 1953514584 sdi
> >  253        0  346226688 dm-0
> >  253        1   40992768 dm-1
>
> No md2 ???
>
> >
> > sd[b-i] are raid devices
> >
> > [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> > /dev/md2:
> >         Version : 0.90
> >   Creation Time : Fri Dec 16 17:56:14 2011
> >      Raid Level : raid6
> >      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
> >   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.
>
> "Used Dev Size" isn't "how much of the array is used by the filesystem" -
> mdadm doesn't know anything about filesystems.
> It is "How much of each individual device is used by the array", which is
> usually a little less than the size of the smallest device.
> So 2TB is correct here.
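>
> As a sanity check with the 8 drives sd[b-i]: RAID6 gives you (8 - 2) data
> drives' worth of space, and 6 x 1953514496 KB = 11721086976 KB, which is
> exactly the "Array Size" reported above.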
>
>
> NeilBrown
>
>


