OK, with some trouble I managed to work something out. After installing mdadm, I ran:

sudo fdisk -l

That printed a lot of output, but I think this is the part I needed:

Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: 560070A2-CB61-4416-BC11-E4F7C2E28388

Device          Start        End    Sectors  Size Type
/dev/sdc1        2048    4196351    4194304    2G Linux swap
/dev/sdc2     6293504 7811938303 7805644800  3.7T Microsoft basic data
/dev/sdc3  7811938304 7814037134    2098831    1G Microsoft basic data
/dev/sdc4     4196352    6293503    2097152    1G Microsoft basic data

After this I ran --examine:

ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdc
/dev/sdc:
   MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)

ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2da381ee:0105a2ee:40130e04:e0c90235
  Creation Time : Fri Oct 20 16:57:14 2017
     Raid Level : raid1
  Used Dev Size : 2097088 (2048.28 MiB 2147.42 MB)
     Array Size : 2097088 (2048.28 MiB 2147.42 MB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Fri Oct 20 16:57:32 2017
          State : clean
Internal Bitmap : present
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : aca4dbd2 - correct
         Events : 6

      Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

   0     0       8       17        0      active sync
   1     1       8       33        1      active sync   /dev/sdc1
   2     2       8        1        2      active sync   /dev/sda1
   3     3       0        0        3      faulty removed

ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdc2
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 4bc3cfef:db0ae9ef:2bceeae9:ac2bbee2
           Name : 1
  Creation Time : Mon Nov  7 13:04:44 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7805644528 (3722.02 GiB 3996.49 GB)
     Array Size : 3902822264 (3722.02 GiB 3996.49 GB)
   Super Offset : 7805644784 sectors
   Unused Space : before=0 sectors, after=256 sectors
          State : clean
    Device UUID : d23ca413:64d7dec5:d938d9d6:b11d2d26

Internal Bitmap : 2 sectors from superblock
    Update Time : Fri Oct 20 08:43:02 2017
       Checksum : 5124c394 - correct
         Events : 3

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdc3
mdadm: No md superblock detected on /dev/sdc3.
ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdc4
mdadm: No md superblock detected on /dev/sdc4.
ubuntu@ubuntu:~$
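The --examine output shows two arrays on this disk: a four-device v0.90 RAID1 on /dev/sdc1 (the 2G slice) and a two-device v1.0 RAID1 on /dev/sdc2 (the 3.7T data slice). With only one member of the data mirror present, a read-only assemble is the usual way to look at it without writing anything to the disk; the md device name below is arbitrary, and this is only a sketch of the general approach, not a step actually taken in the thread:

  sudo mdadm --assemble --run --readonly /dev/md1 /dev/sdc2   # --run starts it degraded (1 of 2 members)
  cat /proc/mdstat                                            # confirm md1 came up, read-only
  sudo mdadm --detail /dev/md1                                # full status of the assembled array

If the array comes up, mounting it read-only (e.g. mount -o ro /dev/md1 /mnt) would let the data be copied off without risking further damage.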
2017-10-21 12:22 GMT+00:00 Francesco Tomadin <inamaru94@xxxxxxxxx>:
> Thanks for the reply!
> First of all, I'll try to express myself as well as I can :) (I'm
> actually impressed that you worked out my main language.)
>
> As you said, the NAS won't boot; it has been stuck since yesterday.
> I'll probably have to pull the power, since it doesn't respond at all.
> At the moment, when I try to log in to the control panel, it says it
> is performing a file system consistency check. I'm not even sure
> whether unplugging it right now will make things worse.
>
> As for the commands you asked me to run: to be honest, I wouldn't
> know where to start. In the last two days, all I managed to do was
> look at the disk partitions, nothing more.
>
>
> 2017-10-21 13:25 GMT+02:00 Anthony Youngman <antmbox@xxxxxxxxxxxxxxx>:
>> On 21/10/17 11:46, Francesco Tomadin wrote:
>>>
>>> As the title says, due to a bug in the WD NAS, it started creating
>>> infinite share folders.
>>> At the moment I can't access my NAS any more, so I decided to try
>>> Ubuntu to recover my data.
>>> I tried to follow some guides and check things out, but it seems
>>> that guides for this (especially for my NAS) don't exist.
>>> I don't even want to try anything on my own since, as you can
>>> clearly see, I'm not a Linux user and I don't understand what's
>>> going on.
>>>
>>> PS: My NAS has 4 slots; one of them was in JBOD format, and with
>>> Ubuntu I could easily access that data without problems.
>>>
>>> I tried some mdadm commands, but they gave me some "strange" errors
>>> which I would never try to "fix" on my own.
>>> I gave the NAS one last chance yesterday, but it's stuck at boot.
>>> Please, could someone walk me through this task by task? I really
>>> don't know what to do any more.
>>
>>
>> Have you read the raid wiki? Is there any chance you could run the
>> --detail and --examine commands? (Yes, I get that you may have trouble
>> finding the disks to be able to run the commands on them! :-)
>>
>> https://raid.wiki.kernel.org/index.php/Linux_Raid
>>
>> You might have to take the disks out of the NAS, or force the whole
>> thing into JBOD mode, but again, if it won't boot, that might not be
>> possible ...
>>>
>>>
>>> Sorry for my poor English; it's not my first language.
>>
>>
>> Better than my foreign languages! If you can't express yourself
>> clearly in English, then post bilingual. I'm guessing your first
>> language is Spanish or Italian, which I don't understand, but there
>> are probably people who do. But you need to use as much English as
>> possible to get the best help.
>>
>> Cheers,
>> Wol
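For reference, the --detail and --examine commands Wol mentions are typically run per device; the device names below are illustrative only:

  sudo mdadm --examine /dev/sdXn   # read the md superblock on one member partition
  sudo mdadm --detail /dev/mdN     # show the state of an already-assembled array
  cat /proc/mdstat                 # list the arrays the kernel currently knows about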