Re: Raid5 to raid6 grow interrupted, mdadm hangs on assemble command

> There is much data on this array that I don't mind being trashed.

There is about 200 GB I would very much like to have back: email
archive, travel pictures, openHAB configuration, ... It is all in one
big LVM volume group with several logical volumes.
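
In case it helps, this is roughly what I intend to run to get at the
data once the array can be assembled read-only. The volume group and
logical volume names below are placeholders, I will use whatever
pvscan and lvs actually report:

# pvscan                           # find the LVM physical volume on /dev/md0
# vgchange -ay vg0                 # activate the volume group (placeholder name)
# lvs                              # list the logical volumes inside it
# mount -o ro /dev/vg0/home /mnt   # mount strictly read-only (placeholder LV)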

On Mon, Apr 24, 2023 at 3:31 PM Jove <jovetoo@xxxxxxxxx> wrote:
>
> Any data that can be retrieved would be a plus. There is much data on
> this array that I don't mind being trashed.
>
> The older drives are WD Red, and they are pre-SMR. Since then I have
> made sure to use only WD Red Plus and WD Red Pro drives. From what I
> found online, those should be CMR too. Unless they quietly changed
> those as well.
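
(For the record, this is how I listed the models so I could check them
against WD's published CMR/SMR tables. As far as I can tell,
drive-managed SMR disks do not announce themselves to the OS, so
matching the model numbers against the vendor's lists seems to be the
only reliable check:)

# lsblk -d -o NAME,MODEL,SIZE    # model string for every disk
# smartctl -i /dev/sdc           # fuller identity info, repeat per member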
>
> No, the conversion definitely did not stop at 0%. It ran for several
> hours. It stopped during the night, so I can't tell you more.
>
> I am worried that the processes are hung, though. Is that normal?
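
(In case the exact symptoms matter, this is what I used to check. The
md0 sysfs path assumes the device actually appears after the assemble
attempt:)

# cat /proc/mdstat                        # reshape progress, if any
# cat /sys/block/md0/md/sync_action       # what md thinks it is doing
# ps -eo pid,stat,cmd | awk '$2 ~ /^D/'   # processes in uninterruptible sleep
# dmesg | grep -i 'blocked for more'      # kernel hung-task warnings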
>
> Thank you for your time!
>
> On Mon, Apr 24, 2023 at 9:41 AM Wols Lists <antlists@xxxxxxxxxxxxxxx> wrote:
> >
> > On 23/04/2023 20:09, Jove wrote:
> > > # mdadm --version
> > > mdadm - v4.2 - 2021-12-30 - 8
> > >
> > > # mdadm -D /dev/md0
> > > /dev/md0:
> > >             Version : 1.2
> > >       Creation Time : Sat Oct 21 01:57:20 2017
> > >          Raid Level : raid6
> > >          Array Size : 7813771264 (7.28 TiB 8.00 TB)
> > >       Used Dev Size : 3906885632 (3.64 TiB 4.00 TB)
> > >        Raid Devices : 4
> > >       Total Devices : 5
> > >         Persistence : Superblock is persistent
> > >
> > >       Intent Bitmap : Internal
> > >
> > >         Update Time : Sun Apr 23 10:32:01 2023
> > >               State : clean, degraded
> > >      Active Devices : 3
> > >     Working Devices : 5
> > >      Failed Devices : 0
> > >       Spare Devices : 2
> > >
> > >              Layout : left-symmetric-6
> > >          Chunk Size : 512K
> > >
> > > Consistency Policy : bitmap
> > >
> > >          New Layout : left-symmetric
> > >
> > >                Name : atom:0  (local to host atom)
> > >                UUID : 8c56384e:ba1a3cec:aaf34c17:d0cd9318
> > >              Events : 669453
> > >
> > >      Number   Major   Minor   RaidDevice State
> > >         0       8       33        0      active sync   /dev/sdc1
> > >         1       8       97        1      active sync   /dev/sdg1
> > >         3       8       49        2      active sync   /dev/sdd1
> > >         5       8       80        3      spare rebuilding   /dev/sdf
> > >
> > >         4       8       64        -      spare   /dev/sde
> >
> > This bit looks good. You have three active drives, so I'm HOPEFUL your
> > data hasn't actually been damaged.
> >
> > I've cc'd two people more experienced than me who I hope can help.
> >
> > Cheers,
> > Wol
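
Until someone more knowledgeable weighs in, I will not run anything
that writes to the disks. For completeness, my current understanding
(please correct me if this is wrong) is that the member superblocks
can be inspected, and the array assembled without resuming the
reshape, along these lines:

# mdadm --examine /dev/sdc1   # reshape position, repeat for each member
# mdadm --stop /dev/md0       # stop the half-assembled array first
# mdadm --assemble --readonly --freeze-reshape /dev/md0 \
        /dev/sdc1 /dev/sdg1 /dev/sdd1 /dev/sdf /dev/sde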



