Chris,
I very much appreciate the help. It's been a learning experience for
me, to say the least.
<<Ok so it's only seeing partitions 1,2,5,6 on each drive, and those are
mdadm members. Open questions are why we only see two md devices, and
why we don't see partitions 3 and 4. It's a rabbit hole.>>
Yeah, that is strange... but the latest data seems to confirm it. For
whatever reason, it looks like Synology does some weird partitioning.
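In case it's useful on the test PC side, I think something like this
would cross-check which partitions actually carry md superblocks (just
my guess at the right invocation; device letters are whatever the live
media assigns):
# mdadm --examine --scan --verbose
# lsblk -o NAME,SIZE,FSTYPE,PARTLABEL
The first should list every array any partition claims membership in;
the second should flag member partitions as linux_raid_member.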
<<And also the gpt signature on md3 is weird, that suggests a backup gpt
at the end of md3. What do you get for:
# parted /dev/md3 u s p >>
Model: Linux Software RAID Array (md)
Disk /dev/md3: 17581481472s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
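For what it's worth, if you want to see exactly which signatures are
sitting on md3 without touching anything, I believe wipefs in its
listing mode (no -a) would show them and their offsets:
# wipefs /dev/md3
I'm assuming that's where the stray/backup GPT signature you mentioned
would turn up.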
<<And also the same for /dev/sda (without any numbers).>>
Model: WDC WD4002FFWX-68TZ4 (scsi)
Disk /dev/sda: 7814037168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start        End          Size         File system     Name  Flags
 1      2048s        4982527s     4980480s     ext4                  raid
 2      4982528s     9176831s     4194304s     linux-swap(v1)        raid
 5      9453280s     1953318239s  1943864960s                        raid
 6      1953334336s  7813830239s  5860495904s                        raid
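So partitions 3 and 4 simply aren't defined in the GPT at all, rather
than being hidden somewhere. If it's worth double-checking, I assume I
could confirm with something like (guessing sgdisk is on the live
media):
# sgdisk -p /dev/sda
# cat /proc/partitions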
<< /etc/lvm area. >>
This data was collected on the NAS with the drives installed.
I have the contents of /etc/lvm zipped up, but I didn't attach it here
because of two concerns:
1) The list will probably strip it. (Smart.)
2) The data may contain something sensitive. Unlikely, but I wanted to
make sure before I broadcast it to everyone on the 'net.
<<a.) figure out how this thing assembles itself at boot time in order
to reveal the root to get at /etc/lvm; or b.) put the three drives in
the NAS and boot it. a) is tedious without a cheat sheet from
Synology. >>
Weirdly, there is something black-science-y going on with the way
Synology sets up these systems. Upon putting the drives back in the
NAS, I got a /dev/md0, which becomes the root. I wonder why it didn't
show up on the test PC. I speculatively executed some commands and
dumped the data for you at the end of this email.
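If it would help figure out why md0/md1 never appeared on the test PC,
I'm guessing I could try a scan-assemble there next time and watch what
it reports; something along the lines of (not run yet, just noting it):
# mdadm --assemble --scan --verbose
# cat /proc/mdstat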
<<Put the three NAS drives that are in the PC back into the NAS and boot
(degraded), and collect the information we really want:
# blkid>>
This returned no data. I suspect blkid is blocked or not "completely"
implemented on the Synology "os".
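If it's worth another shot, I could retry it pointed at the devices
explicitly and with the cache bypassed; I'm assuming the Synology build
takes the standard util-linux options:
# blkid -c /dev/null /dev/md0 /dev/md2 /dev/md3 /dev/vg1000/lv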
<<# mount>>
I trimmed out the extraneous info:
/dev/md0 on / type ext4 (rw,relatime,journal_checksum,barrier,data=ordered)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
cgroup on /sys/fs/cgroup/devices type cgroup
(rw,relatime,devices,release_agent=/run/cgmanager/agents/cgm-release-agent.devices)
cgroup on /sys/fs/cgroup/freezer type cgroup
(rw,relatime,freezer,release_agent=/run/cgmanager/agents/cgm-release-agent.freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup
(rw,relatime,blkio,release_agent=/run/cgmanager/agents/cgm-release-agent.blkio)
/dev/sdq1 on /volumeUSB1/usbshare type vfat
(rw,relatime,uid=1024,gid=100,fmask=0000,dmask=0000,allow_utime=0022,codepage=fault,iocharset=default,shortname=mixed,quiet,utf8,flush,errors=remount-ro)
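If it matters whether /volume1 itself came up, I can re-check on the
next boot with something like:
# mount | grep -E 'volume1|vg1000'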
<<# grep -r md3 /etc/lvm>>
/etc/lvm/archive/vg1000_00000-2024839799.vg:description = "Created
*before* executing '/sbin/vgcreate --physicalextentsize 4m /dev/vg1000
/dev/md2 /dev/md3'"
/etc/lvm/archive/vg1000_00000-2024839799.vg: device = "/dev/md3" # Hint only
/etc/lvm/archive/vg1000_00003-229433250.vg: device = "/dev/md3" # Hint only
/etc/lvm/archive/vg1000_00004-577325499.vg: device = "/dev/md3" # Hint only
/etc/lvm/archive/vg1000_00002-1423835597.vg:description = "Created
*before* executing '/sbin/pvresize /dev/md3'"
/etc/lvm/archive/vg1000_00002-1423835597.vg: device = "/dev/md3" # Hint only
/etc/lvm/archive/vg1000_00001-537833588.vg: device = "/dev/md3" # Hint only
/etc/lvm/backup/vg1000: device = "/dev/md3" # Hint only
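If the archives are what you're after, I can also pull the live LVM
view from the NAS next time it's booted; my guess at the useful
commands:
# pvs -v
# vgs -v
# lvs -a
# vgcfgrestore --list vg1000
That last one should list the archived metadata versions for vg1000, if
I'm reading the man page right.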
<<# cat /etc/fstab>>
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 btrfs 0 0
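Since fstab says /volume1 is btrfs on top of that LV, I could also grab
the btrfs view if that helps (assuming the NAS ships these subcommands,
and the second only if /volume1 is actually mounted):
# btrfs filesystem show
# btrfs filesystem df /volume1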
<<Part 2:
Put the "missing" md member number 2/bay 3 drive into the PC, booting
from Live media as you have been.
# mdadm -E /dev/sdX6 >>
/dev/sda6:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 340a678e:167ca3d9:c185d6c8:a1d66183
Name : Zittware-NAS916:3
Creation Time : Thu May 25 01:26:52 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB)
Array Size : 8790740736 (8383.50 GiB 9001.72 GB)
Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=32 sectors
State : clean
Device UUID : 62201ad0:0158f31a:ac35b379:7f13a583
Update Time : Sat Mar 2 01:09:20 2019
Checksum : 348b1754 - correct
Events : 16134
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
<< # dd if=/dev/sdX6 skip=2048 bs=1M count=1 of=/safepathtofile-sdX6. >>
The file is really too big and would probably be stripped by the list
manager, so it's available here:
https://drive.google.com/open?id=1A4e2UnzCiN0JUcJZdHe3QwXZa55-kMpd
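If you want to verify the download made it through Google Drive intact,
I can post a checksum of the image, e.g.:
# sha256sum /safepathtofile-sdX6
(that being whatever path I actually wrote the dd output to).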
Other data I collected...
cat /proc/mdstat showed some really interesting data on the NAS:
basically four /dev/mdX arrays. I logged the detail output for each one
with mdadm -D /dev/md[0123], IIRC.
/dev/md0:
Version : 0.90
Creation Time : Wed May 24 20:12:04 2017
Raid Level : raid1
Array Size : 2490176 (2.37 GiB 2.55 GB)
Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 7 20:23:39 2019
State : active, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
UUID : 8cd11542:15f14c2c:3017a5a8:c86610be
Events : 0.7510966
Number  Major  Minor  RaidDevice  State
   0      8      1        0       active sync   /dev/sda1
   1      8     17        1       active sync   /dev/sdb1
   -      0      0        2       removed
   3      8     49        3       active sync   /dev/sdd1
/dev/md1:
Version : 0.90
Creation Time : Thu Feb 28 23:20:49 2019
Raid Level : raid1
Array Size : 2097088 (2047.94 MiB 2147.42 MB)
Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Mar 7 20:02:20 2019
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
UUID : 73451bf6:121b75f1:f08fe43e:8582a597 (local to host Zittware-NAS)
Events : 0.24
Number  Major  Minor  RaidDevice  State
   0      8      2        0       active sync   /dev/sda2
   1      8     18        1       active sync   /dev/sdb2
   -      0      0        2       removed
   3      8     50        3       active sync   /dev/sdd2
/dev/md2:
Version : 1.2
Creation Time : Wed May 24 20:26:51 2017
Raid Level : raid5
Array Size : 2915794368 (2780.72 GiB 2985.77 GB)
Used Dev Size : 971931456 (926.91 GiB 995.26 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Mar 7 20:02:37 2019
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : Zittware-NAS916:2
UUID : 542cb926:b17ba538:95653afc:c0d35a3c
Events : 2917
Number  Major  Minor  RaidDevice  State
   0      8      5        0       active sync   /dev/sda5
   1      8     21        1       active sync   /dev/sdb5
   -      0      0        2       removed
   4      8     53        3       active sync   /dev/sdd5
/dev/md3:
Version : 1.2
Creation Time : Wed May 24 20:26:52 2017
Raid Level : raid5
Array Size : 8790740736 (8383.50 GiB 9001.72 GB)
Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Mar 7 20:02:37 2019
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : Zittware-NAS916:3
UUID : 340a678e:167ca3d9:c185d6c8:a1d66183
Events : 16137
Number  Major  Minor  RaidDevice  State
   0      8      6        0       active sync   /dev/sda6
   1      8     22        1       active sync   /dev/sdb6
   -      0      0        2       removed
   3      8     54        3       active sync   /dev/sdd6
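One thing I notice from all of the above: in every array it's
RaidDevice 2 that shows "removed", which matches the bay 3 drive I
pulled, and that drive's member superblock (the mdadm -E above, where
it appears as /dev/sda6 on the PC) is only slightly behind md3 on
events (16134 vs. 16137). I'm guessing that means a re-add is plausible
once you've looked the data over, maybe something like:
# mdadm /dev/md3 --re-add /dev/sdX6
(with sdX being whatever letter the bay 3 drive gets back in the NAS).
But I'm not going to touch anything until you confirm the right
approach.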