Re: ssm vs. lvm: moving physical drives and volume group to another system




When I change /etc/fstab from /dev/mapper/lvol001 to
/dev/lvm_pool/lvol001, kernel 3.10.0-514 will boot.

Kernel 3.10.0-862 hangs and will not boot.
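(An aside on why /dev/mapper/lvol001 was never going to resolve: the /dev/mapper name that LVM generates includes the volume group, so the node here is /dev/mapper/lvm_pool-lvol001, and /dev/lvm_pool/lvol001 is a symlink to it. A small sketch of the naming rule as I understand it, where any hyphen inside either name is doubled:)

```shell
# Sketch: derive the /dev/mapper node LVM creates for a given VG and LV.
# Assumed rule: /dev/mapper/<vg>-<lv>, with "-" in either name escaped as "--".
mapper_name() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_name lvm_pool lvol001   # prints /dev/mapper/lvm_pool-lvol001
mapper_name my-vg root         # prints /dev/mapper/my--vg-root
```

(So either /dev/lvm_pool/lvol001 or /dev/mapper/lvm_pool-lvol001 should work in fstab; a bare /dev/mapper/lvol001 names a device that does not exist.)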
On Sat, Jul 14, 2018 at 1:20 PM Mike <1100100@xxxxxxxxx> wrote:
>
> Maybe not a good assumption after all --
>
> I can no longer boot using kernel 3.10.0-514 or 3.10.0-862.
>
> boot.log shows:
>
> Dependency failed for /mnt/data
> Dependency failed for Local File Systems
> Dependency failed for Mark the need to relabel after reboot.
> Dependency failed for Migrate local SELinux policy changes from the
> old store structure to the new structure.
> Dependency failed for Relabel all filesystems, if necessary.
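> (One way to keep a broken data mount from blocking boot while debugging this: mark the entry "nofail" in /etc/fstab, so local-fs.target no longer hard-depends on /mnt/data. A sketch of the line, using the device path and filesystem from the earlier messages:)

```
# /etc/fstab sketch: "nofail" lets boot continue even if /mnt/data cannot mount
/dev/lvm_pool/lvol001  /mnt/data  xfs  defaults,nofail  0 0
```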
>
>
> On Sat, Jul 14, 2018 at 12:55 PM Mike <1100100@xxxxxxxxx> wrote:
> >
> > I did the following test:
> >
> > ###############################################
> > 1.
> >
> > Computer with Centos 7.5 installed on hard drive /dev/sda.
> >
> > Added two hard drives to the computer: /dev/sdb and /dev/sdc.
> >
> > Created a new RAID-1 logical volume using Red Hat's System Storage Manager (ssm):
> >
> > ssm create --fstype xfs -r 1 /dev/sdb /dev/sdc /mnt/data
> >
> > Everything works.
> > /dev/lvm_pool/lvol001 is mounted to /mnt/data.
> > Files and folders can be copied/moved, read/written on /mnt/data.
> >
> > ###############################################
> >
> > 2.
> >
> > I erased CentOS 7.5 from /dev/sda.
> > Wrote zeros to /dev/sda using dd.
> > Reinstalled CentOS 7 on /dev/sda.
> > Completed yum update - reboot - yum install system-storage-manager.
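> > (The zeroing step, sketched here against a scratch file rather than a disk; on the real device it was the usual dd from /dev/zero to /dev/sda:)

```shell
# Demonstration of the zeroing step on a temp file instead of /dev/sda.
# On the actual disk this would be: dd if=/dev/zero of=/dev/sda bs=1M status=progress
tmp=$(mktemp)
printf 'not zeros' > "$tmp"
dd if=/dev/zero of="$tmp" bs=512 count=1 2>/dev/null
od -An -tx1 -N4 "$tmp"    # first bytes are now 00 00 00 00
rm -f "$tmp"
```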
> >
> > Red Hat System Storage Manager listed all existing volumes on the computer:
> >
> > [root@localhost]# ssm list
> >
> > ---------------------------------------------------------------------------------------------
> > Volume                 Pool      Volume size  FS   FS size    Free       Type    Mount point
> > ---------------------------------------------------------------------------------------------
> > /dev/cl/root           cl           65.00 GB  xfs   64.97 GB   63.67 GB  linear  /
> > /dev/cl/swap           cl            8.00 GB                             linear
> > /dev/lvm_pool/lvol001  lvm_pool    200.00 GB  xfs  199.90 GB  184.53 GB  raid1   /mnt/data
> > /dev/cl/home           cl          200.00 GB  xfs  199.90 GB  199.87 GB  linear  /home
> > /dev/sda1                            4.00 GB  xfs    3.99 GB    3.86 GB  part    /boot
> > ---------------------------------------------------------------------------------------------
> >
> > So far, so good.  The new CentOS7 install can see the logical volume.
> >
> > Mounted the volume:  ssm mount -t xfs /dev/lvm_pool/lvol001 /mnt/data
> > It works.
> > After cd to /mnt/data I can see the files left on the volume from the
> > previous tests.
> > Moving, copying, reading, and writing all work.
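> > (To make that mount persistent across reboots, the corresponding fstab entry would be roughly the following; a sketch, using the LV path rather than a bare /dev/mapper name:)

```
# /etc/fstab sketch for the ssm-created RAID-1 volume
/dev/lvm_pool/lvol001  /mnt/data  xfs  defaults  0 0
```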
> >
> > ###################################################
> >
> > 3. Is it safe to assume that, when using Red Hat System Storage
> > Manager, it's not necessary to use the LVM commands vgexport and
> > vgimport to move two physical drives containing a RAID-1 logical
> > volume from one computer to another?
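> > (For comparison, the classic LVM move that the question is about would be roughly the sequence below. A sketch only, not something I ran in this test; it needs root and the actual drives:)

```shell
# On the old machine: unmount, deactivate, and export the VG before pulling drives
umount /mnt/data
vgchange -an lvm_pool
vgexport lvm_pool

# On the new machine, after installing the drives:
pvscan                                 # detect the exported physical volumes
vgimport lvm_pool
vgchange -ay lvm_pool
mount /dev/lvm_pool/lvol001 /mnt/data
```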
> >
> > Thanks for your help and guidance.
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


