Problem with mounting QNAP disks in Linux

Hello,

I'm trying to mount three disks from a QNAP TS-469L in Linux (Fedora 25).
The system correctly recognised the disks and the RAID5 array, but I have a problem activating the LVM volumes that hold my data.

In the QNAP QTS system it looked like this:
Storage Pool 1          5,44 TB
├─DataVol1 (System)     5,38 TB         (Free Size: ~300,00 GB)
└─DataVol2              509,46 GB       (Free Size: ~400,00 GB)

In Fedora it looks like the disks and the RAID arrays are recognised OK.

$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb               8:16   0   2,7T  0 disk
├─sdb4            8:20   0 517,7M  0 part
│ └─md123         9:123  0 448,1M  0 raid1
├─sdb2            8:18   0 517,7M  0 part
│ └─md127         9:127  0 517,7M  0 raid1
├─sdb5            8:21   0     8G  0 part
│ └─md125         9:125  0   6,9G  0 raid1
├─sdb3            8:19   0   2,7T  0 part
│ └─md124         9:124  0   5,5T  0 raid5
│   └─vg1-lv544 253:3    0    20G  0 lvm
└─sdb1            8:17   0 517,7M  0 part
  └─md126         9:126  0 517,7M  0 raid1
sdc               8:32   0   2,7T  0 disk
├─sdc2            8:34   0 517,7M  0 part
│ └─md127         9:127  0 517,7M  0 raid1
├─sdc5            8:37   0     8G  0 part
│ └─md125         9:125  0   6,9G  0 raid1
├─sdc3            8:35   0   2,7T  0 part
│ └─md124         9:124  0   5,5T  0 raid5
│   └─vg1-lv544 253:3    0    20G  0 lvm
├─sdc1            8:33   0 517,7M  0 part
│ └─md126         9:126  0 517,7M  0 raid1
└─sdc4            8:36   0 517,7M  0 part
  └─md123         9:123  0 448,1M  0 raid1
sda               8:0    0   2,7T  0 disk
├─sda4            8:4    0 517,7M  0 part
│ └─md123         9:123  0 448,1M  0 raid1
├─sda2            8:2    0 517,7M  0 part
│ └─md127         9:127  0 517,7M  0 raid1
├─sda5            8:5    0     8G  0 part
│ └─md125         9:125  0   6,9G  0 raid1
├─sda3            8:3    0   2,7T  0 part
│ └─md124         9:124  0   5,5T  0 raid5
│   └─vg1-lv544 253:3    0    20G  0 lvm
└─sda1            8:1    0 517,7M  0 part
  └─md126         9:126  0 517,7M  0 raid1
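
If more details about the RAID5 array or its members would help, I assume this is the right way to get them:
$ sudo mdadm --detail /dev/md124
$ sudo mdadm --examine /dev/sda3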

$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md123 : active (auto-read-only) raid1 sda4[2] sdc4[1] sdb4[0]
      458880 blocks super 1.0 [24/3] [UUU_____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md124 : active (auto-read-only) raid5 sdb3[0] sdc3[1] sda3[2]
      5840623232 blocks super 1.0 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md125 : active (auto-read-only) raid1 sda5[2] sdc5[1] sdb5[0]
      7168000 blocks super 1.0 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sda1[2] sdc1[1] sdb1[0]
      530112 blocks super 1.0 [24/3] [UUU_____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid1 sda2[2](S) sdb2[0] sdc2[1]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
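
Most of the arrays are active but auto-read-only; as far as I know they switch to read-write on the first write, and if necessary I suppose I could force the data array read-write with:
$ sudo mdadm --readwrite /dev/md124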

$ sudo pvs
  PV             VG     Fmt  Attr PSize   PFree
  /dev/md124     vg1    lvm2 a--    5,44t    0

$ sudo vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  vg1      1   4   0 wz--n-   5,44t    0

$ sudo lvs
  LV    VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1   vg1 Vwi---tz--   5,42t tp1
  lv2   vg1 Vwi---tz-- 512,00g tp1
  lv544 vg1 -wi-a-----  20,00g
  tp1   vg1 twi---tz--   5,40t

$ sudo lvscan
  ACTIVE            '/dev/vg1/lv544' [20,00 GiB] inherit
  inactive          '/dev/vg1/tp1' [5,40 TiB] inherit
  inactive          '/dev/vg1/lv1' [5,42 TiB] inherit
  inactive          '/dev/vg1/lv2' [512,00 GiB] inherit
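
If it helps, I can also list the hidden thin-pool components (tp1_tdata and tp1_tmeta); I believe lvs -a shows them:
$ sudo lvs -a vg1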

If I understand correctly:
/dev/vg1/tp1 is the QNAP "Storage Pool 1" (real size: 5,40 TiB)
/dev/vg1/lv1 is the QNAP "DataVol1" (virtual size: 5,42 TiB)
/dev/vg1/lv2 is the QNAP "DataVol2" (virtual size: 512 GiB)

So my data is on /dev/vg1/lv1 and /dev/vg1/lv2, but those volumes are "inactive", so they cannot be mounted:
$ sudo mount /dev/vg1/lv2 /mnt/lv2
mount: special device /dev/vg1/lv2 does not exist
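
As far as I understand, the device node is missing because the volume is inactive, so it has to be activated first, e.g. for the whole group:
$ sudo vgchange -ay vg1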

I tried to activate them with lvchange, but I get a message that a manual repair of vg1/tp1 is required:
$ sudo lvchange -ay vg1/lv2
  Check of pool vg1/tp1 failed (status:1). Manual repair required!
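
I am not sure if it matters, but I also do not know whether the thin provisioning tools are installed at all; as far as I understand, lvm runs thin_check on the pool metadata before activation. I suppose I can verify with:
$ which thin_check
$ rpm -q device-mapper-persistent-data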

I tried the command lvconvert --repair vg1/tp1 with no success:
$ sudo lvconvert --repair vg1/tp1
  Using default stripesize 64,00 KiB.
  Volume group "vg1" has insufficient free space (0 extents): 4096 required.
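
If I read this right, the repair needs free extents in vg1 for a temporary metadata LV (4096 extents, which I think is about 16 GiB if the extent size is the default 4 MiB), and the VG is 100% allocated. I guess I could extend the VG with a spare device first and retry, e.g.:
$ sudo pvcreate /dev/sdX1    # /dev/sdX1 = placeholder for a spare partition or USB stick
$ sudo vgextend vg1 /dev/sdX1
$ sudo lvconvert --repair vg1/tp1
Would that be the right approach?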

Please help me figure out how to repair/mount this LVM setup...

Best Regards,
Daniel

