Greetings, I'm trying to mount a RAID5 array created on a NASAS-2040, which runs an embedded Linux OS that has failed. I'm having trouble determining the filesystem type used. I've physically connected 3 of the 4 drives so that I can start the array in degraded mode and pull the data off. Here are sections of my logs:

Jun 2 13:58:16 localhost kernel: hda: IBM-DJNA-351520, ATA DISK drive
Jun 2 13:58:16 localhost kernel: hdb: Maxtor 5A300J0, ATA DISK drive
Jun 2 13:58:16 localhost kernel: blk: queue c03fcf00, I/O limit 4095Mb (mask 0xffffffff)
Jun 2 13:58:16 localhost kernel: blk: queue c03fd040, I/O limit 4095Mb (mask 0xffffffff)
Jun 2 13:58:16 localhost kernel: hdc: Maxtor 5A300J0, ATA DISK drive
Jun 2 13:58:16 localhost kernel: hdd: Maxtor 5A300J0, ATA DISK drive
Jun 2 13:58:16 localhost kernel: blk: queue c03fd35c, I/O limit 4095Mb (mask 0xffffffff)
Jun 2 13:58:16 localhost kernel: blk: queue c03fd49c, I/O limit 4095Mb (mask 0xffffffff)
Jun 2 13:58:16 localhost kernel: ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Jun 2 13:58:16 localhost kernel: ide1 at 0x170-0x177,0x376 on irq 15
Jun 2 13:58:16 localhost kernel: hda: attached ide-disk driver.
Jun 2 13:58:16 localhost kernel: hda: host protected area => 1
Jun 2 13:58:16 localhost kernel: hda: 30033360 sectors (15377 MB) w/430KiB Cache, CHS=1869/255/63
Jun 2 13:58:16 localhost kernel: hdb: attached ide-disk driver.
Jun 2 13:58:16 localhost kernel: hdb: host protected area => 1
Jun 2 13:58:16 localhost kernel: hdb: 585940320 sectors (300001 MB) w/2048KiB Cache, CHS=36473/255/63, UDMA(133)
Jun 2 13:58:16 localhost kernel: hdc: attached ide-disk driver.
Jun 2 13:58:16 localhost kernel: hdc: host protected area => 1
Jun 2 13:58:16 localhost kernel: hdc: 585940320 sectors (300001 MB) w/2048KiB Cache, CHS=36473/255/63, UDMA(133)
Jun 2 13:58:16 localhost kernel: hdd: attached ide-disk driver.
Jun 2 13:58:16 localhost kernel: hdd: host protected area => 1
Jun 2 13:58:16 localhost kernel: hdd: 585940320 sectors (300001 MB) w/2048KiB Cache, CHS=36473/255/63, UDMA(133)
Jun 2 13:58:16 localhost kernel: Partition check:
Jun 2 13:58:16 localhost kernel: hda: hda1 hda2 hda3
Jun 2 13:58:16 localhost kernel: hdb: hdb1 hdb2 hdb3
Jun 2 13:58:16 localhost kernel: hdc: hdc1 hdc2 hdc3
Jun 2 13:58:16 localhost kernel: hdd: hdd1 hdd2 hdd3
Jun 2 13:58:16 localhost kernel: ide: late registration of driver.
Jun 2 13:58:16 localhost kernel: md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
Jun 2 13:58:16 localhost kernel: md: Autodetecting RAID arrays.
Jun 2 13:58:16 localhost kernel: [events: 00000008]
Jun 2 13:58:17 localhost last message repeated 2 times
Jun 2 13:58:17 localhost kernel: md: autorun ...
Jun 2 13:58:17 localhost kernel: md: considering hdd1 ...
Jun 2 13:58:17 localhost kernel: md: adding hdd1 ...
Jun 2 13:58:17 localhost kernel: md: adding hdc1 ...
Jun 2 13:58:17 localhost kernel: md: adding hdb1 ...
Jun 2 13:58:17 localhost kernel: md: created md0
Jun 2 13:58:17 localhost kernel: md: bind<hdb1,1>
Jun 2 13:58:17 localhost kernel: md: bind<hdc1,2>
Jun 2 13:58:17 localhost kernel: md: bind<hdd1,3>
Jun 2 13:58:17 localhost kernel: md: running: <hdd1><hdc1><hdb1>
Jun 2 13:58:17 localhost kernel: md: hdd1's event counter: 00000008
Jun 2 13:58:17 localhost kernel: md: hdc1's event counter: 00000008
Jun 2 13:58:17 localhost kernel: md: hdb1's event counter: 00000008
Jun 2 13:58:17 localhost kernel: kmod: failed to exec /sbin/modprobe -s -k md-personality-4, errno = 2
Jun 2 13:58:17 localhost kernel: md: personality 4 is not loaded!
Jun 2 13:58:17 localhost kernel: md :do_md_run() returned -22
Jun 2 13:58:17 localhost kernel: md: md0 stopped.
Jun 2 13:58:17 localhost kernel: md: unbind<hdd1,2>
Jun 2 13:58:17 localhost kernel: md: export_rdev(hdd1)
Jun 2 13:58:17 localhost kernel: md: unbind<hdc1,1>
Jun 2 13:58:17 localhost kernel: md: export_rdev(hdc1)
Jun 2 13:58:17 localhost kernel: md: unbind<hdb1,0>
Jun 2 13:58:17 localhost kernel: md: export_rdev(hdb1)
Jun 2 13:58:17 localhost kernel: md: ... autorun DONE.
...
Jun 2 14:01:59 localhost kernel: [events: 00000008]
Jun 2 14:01:59 localhost kernel: md: bind<hdc1,1>
Jun 2 14:01:59 localhost kernel: [events: 00000008]
Jun 2 14:01:59 localhost kernel: md: bind<hdd1,2>
Jun 2 14:01:59 localhost kernel: [events: 00000008]
Jun 2 14:01:59 localhost kernel: md: bind<hdb1,3>
Jun 2 14:01:59 localhost kernel: md: hdb1's event counter: 00000008
Jun 2 14:01:59 localhost kernel: md: hdd1's event counter: 00000008
Jun 2 14:01:59 localhost kernel: md: hdc1's event counter: 00000008
Jun 2 14:01:59 localhost kernel: raid5: measuring checksumming speed
Jun 2 14:01:59 localhost kernel: 8regs : 2060.800 MB/sec
Jun 2 14:01:59 localhost kernel: 32regs : 1369.200 MB/sec
Jun 2 14:01:59 localhost kernel: pIII_sse : 3178.800 MB/sec
Jun 2 14:01:59 localhost kernel: pII_mmx : 3168.800 MB/sec
Jun 2 14:01:59 localhost kernel: p5_mmx : 4057.600 MB/sec
Jun 2 14:01:59 localhost kernel: raid5: using function: pIII_sse (3178.800 MB/sec)
Jun 2 14:01:59 localhost kernel: md: raid5 personality registered as nr 4
Jun 2 14:01:59 localhost kernel: md0: max total readahead window set to 744k
Jun 2 14:01:59 localhost kernel: md0: 3 data-disks, max readahead per data-disk: 248k
Jun 2 14:01:59 localhost kernel: raid5: device hdb1 operational as raid disk 1
Jun 2 14:01:59 localhost kernel: raid5: device hdd1 operational as raid disk 3
Jun 2 14:01:59 localhost kernel: raid5: device hdc1 operational as raid disk 2
Jun 2 14:01:59 localhost kernel: raid5: md0, not all disks are operational -- trying to recover array
Jun 2 14:01:59 localhost kernel: raid5: allocated 4334kB for md0
Jun 2 14:01:59 localhost kernel: raid5: raid level 5 set md0 active with 3 out of 4 devices, algorithm 2
Jun 2 14:01:59 localhost kernel: RAID5 conf printout:
Jun 2 14:01:59 localhost kernel: --- rd:4 wd:3 fd:1
Jun 2 14:01:59 localhost kernel: disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
Jun 2 14:01:59 localhost kernel: disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdb1
Jun 2 14:01:59 localhost kernel: disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdc1
Jun 2 14:01:59 localhost kernel: disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdd1
Jun 2 14:01:59 localhost kernel: RAID5 conf printout:
Jun 2 14:01:59 localhost kernel: --- rd:4 wd:3 fd:1
Jun 2 14:01:59 localhost kernel: disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
Jun 2 14:01:59 localhost kernel: disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdb1
Jun 2 14:01:59 localhost kernel: disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdc1
Jun 2 14:01:59 localhost kernel: disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdd1
Jun 2 14:01:59 localhost kernel: md: updating md0 RAID superblock on device
Jun 2 14:01:59 localhost kernel: md: hdb1 [events: 00000009]<6>(write) hdb1's sb offset: 290912960
Jun 2 14:01:59 localhost kernel: md: recovery thread got woken up ...
Jun 2 14:01:59 localhost kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode
Jun 2 14:01:59 localhost kernel: md: hdd1 [events: 00000009]<6>(write) hdd1's sb offset: 290912960
Jun 2 14:01:59 localhost kernel: md: hdc1 [events: 00000009]<6>(write) hdc1's sb offset: 290912960
Jun 2 14:02:20 localhost kernel: raid5: switching cache buffer size, 4096 --> 1024
Jun 2 14:02:20 localhost kernel: raid5: switching cache buffer size, 1024 --> 512
Jun 2 14:02:20 localhost kernel: FAT: bogus logical sector size 0
Jun 2 14:02:20 localhost kernel: VFS: Can't find a valid FAT filesystem on dev 09:00.
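For the record, this is roughly the sequence I used to get past the earlier "personality 4 is not loaded" failure and confirm the degraded assembly (module and device names are what they happen to be on my box; `raidstart` is the old raidtools command, mdadm users would use `mdadm --assemble` instead):

```shell
# Load the raid5 personality ("personality 4" in the autorun failure);
# on this 2.4 kernel the module is named raid5:
modprobe raid5
# Re-run array startup now that the personality is registered
# (raidtools syntax; assumes /etc/raidtab describes /dev/md0):
raidstart /dev/md0
# A 4-disk array running degraded should report [4/3] and a _ for
# the missing member, e.g. [_UUU]:
grep -A1 '^md0' /proc/mdstat
```

The `[4/3]` in `/proc/mdstat` matches the "active with 3 out of 4 devices" line in the log above.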
Jun 2 14:02:26 localhost kernel: raid5: switching cache buffer size, 512 --> 4096
Jun 2 14:02:26 localhost kernel: Mount JFS Failure: 22
Jun 2 14:02:26 localhost kernel: jfs_mount failed w/return code = 22
Jun 2 14:02:30 localhost kernel: SGI XFS 1.3.1 with ACLs, no debug enabled
Jun 2 14:02:30 localhost kernel: SGI XFS Quota Management subsystem
Jun 2 14:02:30 localhost kernel: XFS: bad magic number
Jun 2 14:02:30 localhost kernel: XFS: SB validate failed
Jun 2 14:02:34 localhost kernel: raid5: switching cache buffer size, 4096 --> 1024
Jun 2 14:02:34 localhost kernel: VFS: Can't find ext3 filesystem on dev md(9,0).
Jun 2 14:02:37 localhost kernel: VFS: Can't find ext2 filesystem on dev md(9,0).
Jun 2 14:02:41 localhost kernel: sh-2021: reiserfs_read_super: can not find reiserfs on md(9,0)

It looks like md assembles the array in degraded mode, but my attempts to mount it as FAT, JFS, XFS, ext3, ext2, and reiserfs all fail. I don't know which filesystem was used, but it is supposed to be a journaling filesystem. Is there a way I can examine the superblock and determine what filesystem it could be? Are there other commonly used filesystem types that perhaps I don't have support for? Am I missing something here?

Thanks for any help,
Tim
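On the superblock question, one low-tech way to check is to dump the start of the assembled device and look for the magic numbers that common filesystems keep at fixed on-disk offsets (`/dev/md0` and the temp file path below are assumptions; adjust to taste):

```shell
# Grab the first 128 KiB of the array for inspection:
dd if=/dev/md0 of=/tmp/md0-head.img bs=1024 count=128
# ext2/ext3: magic bytes 53 ef at absolute offset 1080
# (superblock starts at 1024, magic field at +56):
hexdump -C -s 1080 -n 2 /tmp/md0-head.img
# XFS: ASCII "XFSB" at offset 0:
hexdump -C -n 4 /tmp/md0-head.img
# JFS: ASCII "JFS1" at 32 KiB:
hexdump -C -s 32768 -n 4 /tmp/md0-head.img
# reiserfs: "ReIsEr..." at 64 KiB (8 KiB for the old 3.5 format):
hexdump -C -s 65536 -n 10 /tmp/md0-head.img
# Or let file(1) try to identify the filesystem directly:
file -s /dev/md0
```

If none of those offsets shows a recognizable magic, the data may simply not start where the mount code expects it, which would also explain why every filesystem driver rejects the device.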