Re: Raid array empty after restart

Did you originally have /dev/md0p1 in fstab, and have you edited fstab
since you booted?

If so, the great and amazing systemd will not be amused and will still
have a job for the old device. You will need to run 'systemctl
daemon-reload' for it to re-read the fstab file, as it is not smart
enough to do that itself.
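For example (assuming your fstab now mounts the array at /mnt/raid;
adjust the mount point to match your own entry):

# systemctl daemon-reload              # re-read fstab, regenerate mount units
# mount /mnt/raid                      # or 'mount -a' to mount everything in fstab
# systemctl list-units --type=mount    # confirm the mount unit is active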

On Sun, May 24, 2020 at 5:44 PM Patrick O'Callaghan
<pocallaghan@xxxxxxxxx> wrote:
>
> On Mon, 2020-05-25 at 05:34 +0800, Ed Greshko wrote:
> > On 2020-05-25 05:20, Patrick O'Callaghan wrote:
> > > On Mon, 2020-05-25 at 03:16 +0800, Ed Greshko wrote:
> > > > > > NAME                 MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
> > > > > > sda                    8:0    0   50G  0 disk
> > > > > > └─md0                  9:0    0   50G  0 raid1
> > > > > > sdb                    8:16   0   50G  0 disk
> > > > > > └─md0                  9:0    0   50G  0 raid1
> > > > > > sr0                   11:0    1 1024M  0 rom
> > > > > > vda                  252:0    0   30G  0 disk
> > > > > > ├─vda1               252:1    0    1G  0 part  /boot
> > > > > > └─vda2               252:2    0   29G  0 part
> > > > > >    ├─fedora_f31k-root 253:0    0   27G  0 lvm   /
> > > > > >    └─fedora_f31k-swap 253:1    0  2.1G  0 lvm   [SWAP]
> > > > > > and it seems a bit more "sane" than your configuration.
> > > > > Yours is using LVM, which I wanted to avoid. That may be the root of
> > > > > the issue (though I've no idea why).
> > > >
> > > > ?????
> > > >
> > > > The RAID Array isn't using LVM.
> > > >
> > > > This is just an added pair of disks, with RAID.
> > > Oops, I was looking at the vda[12] rather than sd[ab]
> >
> > OK.  All things being equal, if I were in your shoes I'd go back and redo the RAID creation.
> >
> > When the "mdadm --create" is performed and mirroring of the drives begins, the array can still
> > be used.  You can proceed with mkfs on it simultaneously.
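> >
> > Roughly like this, assuming you want ext4 on the whole array
> > (substitute your preferred filesystem):
> >
> > # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdd /dev/sde
> > # mkfs.ext4 /dev/md0    # fine to run while the initial mirror sync continues
> > # cat /proc/mdstat      # watch the resync progress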
>
> OK, I did this:
>
> # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[de]
> mdadm: /dev/sdd appears to be part of a raid array:
>        level=raid1 devices=2 ctime=Wed May 20 16:34:58 2020
> mdadm: partition table exists on /dev/sdd but will be lost or
>        meaningless after creating array
> mdadm: Note: this array has metadata at the start and
>     may not be suitable as a boot device.  If you plan to
>     store '/boot' on this device please ensure that
>     your boot-loader understands md/v1.x metadata, or use
>     --metadata=0.90
> mdadm: /dev/sde appears to be part of a raid array:
>        level=raid1 devices=2 ctime=Wed May 20 16:34:58 2020
> mdadm: partition table exists on /dev/sde but will be lost or
>        meaningless after creating array
> Continue creating array? y
> mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
> mdadm: Defaulting to version 1.2 metadata
> mdadm: array /dev/md0 started.
>
> And now I find:
> # lsblk
> NAME                            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> [...]
> sdd                               8:48   0 931.5G  0 disk
> └─md0                             9:0    0 931.4G  0 raid1
>   └─md0p1                       259:0    0 931.4G  0 part  /run/media/poc/6cb66da2-147a-4f3c-a513-36f6164ab581
> sde                               8:64   0 931.5G  0 disk
> └─md0                             9:0    0 931.4G  0 raid1
>   └─md0p1                       259:0    0 931.4G  0 part  /run/media/poc/6cb66da2-147a-4f3c-a513-36f6164ab581
>
> So although the above message says the existing partition table will be
> lost, for some reason I'm still getting a partition, while you
> apparently didn't. I copied the --create command directly from the man
> page. Is this not the "standard" way you mentioned in an earlier reply?
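>
> Perhaps the old signatures survived and I should have cleared them
> first, something like (just a guess on my part, not tested):
>
> # wipefs -a /dev/sdd /dev/sde   # erase stale RAID/partition signatures
>
> before re-running --create.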
>
> Finally, the /run/media/... etc. mounts now show my existing data. All
> the same, the disk lights are busy and I expect them to be going all
> night.
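>
> (At least I can watch the rebuild with:
>
> # cat /proc/mdstat
>
> which shows the recovery percentage and an estimated finish time.)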
>
> poc