I now have a similar problem with a different image. In our system we have a rootfs UBI and a recovery UBI. I created a new recovery UBI image and flashed it with "fastboot flash". When I try to boot into recovery mode, I get an error similar to the one I got from the rootfs UBI:

[    5.890160] ubi0: attaching mtd22
[    6.046149] ubi0: scanning is finished
[    6.057618] ubi0: volume 1 ("recoveryfs") re-sized from 47 to 123 LEBs
[    6.058478] ubi0: attached mtd22 (name "recoveryfs", size 43 MiB)
[    6.063089] ubi0: PEB size: 262144 bytes (256 KiB), LEB size: 253952 bytes
[    6.069193] ubi0: min./max. I/O unit sizes: 4096/4096, sub-page size 4096
[    6.075978] ubi0: VID header offset: 4096 (aligned 4096), data offset: 8192
[    6.082842] ubi0: good PEBs: 172, bad PEBs: 0, corrupted PEBs: 0
[    6.089591] ubi0: user volume: 2, internal volumes: 1, max. volumes count: 128
[    6.095862] ubi0: max/mean erase counter: 126/67, WL threshold: 4096, image sequence number: 1118681957
[    6.102890] ubi0: available PEBs: 0, total reserved PEBs: 172, PEBs reserved for bad PEB handling: 40
[    6.112542] ubi0: background thread "ubi_bgt0d" started, PID 207
[    6.121653] qcom,qpnp-rtc c440000.qcom,spmi:qcom,pmxpoorwills@0:qcom,pmxpoorwills_rtc: setting system clock to 1970-01-09 15:40:59 UTC (747659)
[    6.121718] cpuidle: enable-method property 'psci' found operations
[    6.122358] lpm_levels_of: Residency < 0 for LPM
[    6.122363] lpm_levels_of: idx 1 420
[    6.122365] lpm_levels_of: Residency < 0 for LPM
[    6.122368] lpm_levels_of: idx 2 500
[    6.122371] lpm_levels_of: idx 2 3040
[    6.122595] lpm_levels: register_cluster_lpm_stats()
[    6.124767] rmnet_ipa3 started initialization
[    6.125782] RNDIS_IPA module is loaded.
[    6.125783] audio_pdr_late_init get_service_location failed ret -19
[    6.126374] msm_bus_late_init: Remove handoff bw requests
[    6.140613] emac_phy: disabling
[    6.140620] rgmii_io_pads: disabling
[    6.140628] vreg_wlan: disabling
[    6.140632] ALSA devi
[    6.205886] Freeing unused kernel memory: 1024K
/etc/mdev/iio.sh: .: line 19: can't open '/sys/bus/i2c/devices/*-006*/iio:device?*/uevent'
/etc/mdev/iio.sh: .: line 19: can't open '/sys/bus/i2c/devices/*-006*/iio:device?*/uevent'
mkdir: can't create directory '/mnt/sdcard/': No such file or directory
mount: mounting /dev/mmcblk0p1 on /mnt/sdcard/ failed: No such file or directory
MTD : Detected block device : 22 for recoveryfs
[    7.062327] ubi: mtd22 is already attached to ubi0
ubiattach: error!: cannot attach mtd22 error 17 (File exists)
[    7.103812] Waiting for ubinfo for recoveryfs
[    7.103991] Done ubinfo for recoveryfs, volume ID: 1
[    7.107239] Waiting for /dev/ubi0_1
[    7.112617] Done waiting for /dev/ubi0_1
[    7.121111] block ubiblock0_1: created from ubi0:1(recoveryfs)
[    7.121735] Waiting for /dev/ubiblock0_1
[    7.189342] Done waiting for /dev/ubiblock0_1
[    7.220706] Waiting for ubinfo for md-recoveryfs
[    7.220894] Done ubinfo for md-recoveryfs, volume ID: 0
[    7.224486] Waiting for /dev/ubi0_0
[    7.229447] Done waiting for /dev/ubi0_0
[    7.238403] block ubiblock0_0: created from ubi0:0(md-recoveryfs)
[    7.239025] Waiting for /dev/ubiblock0_0
[    7.305156] Done waiting for /dev/ubiblock0_0
[    7.436322] 1911 device_is_secure: ######################### device_is_secure=0
[    7.453434] mount (277) used greatest stack depth: 6004 bytes left
mount: mounting /dev on /system/dev failed: Invalid argument
mount: mounting /dev/pts on /system/dev/pts failed: No such file or directory
[    7.627410] SQUASHFS error: zlib decompression failed, data probably corrupt
[    7.627449] SQUASHFS error: squashfs_read_data failed to read block 0x1e97f5
[    7.645982] SQUASHFS error: zlib decompression failed, data probably corrupt
[    7.646018] SQUASHFS error: squashfs_read_data failed to read block 0x1e97f5
[    7.670215] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000007
[    7.670215]
[    7.670249] CPU: 0 PID: 1 Comm: init Not tainted 4.9.155 #1
[    7.678403] Hardware name: Qualcomm Technologies, Inc. SDX POORWILLS (Flattened Device Tree)
[    7.683728] [<c010e784>] (unwind_backtrace) from [<c010bab8>] (show_stack+0x10/0x14)
[    7.692390] [<c010bab8>] (show_stack) from [<c01bf188>] (panic+0x13c/0x340)
[    7.700115] [<c01bf188>] (panic) from [<c011eb78>] (do_exit+0x4d8/0xa10)
[    7.706796] [<c011eb78>] (do_exit) from [<c01202e8>] (do_group_exit+0xb8/0xbc)
[    7.713741] [<c01202e8>] (do_group_exit) from [<c012a000>] (get_signal+0x538/0x578)
[    7.720774] [<c012a000>] (get_signal) from [<c010ae4c>] (do_signal+0x74/0x3f4)
[    7.728325] [<c010ae4c>] (do_signal) from [<c010b358>] (do_work_pending+0x78/0xbc)
[    7.735616] [<c010b358>] (do_work_pending) from [<c01078f4>] (slow_work_pending+0xc/0x20)
[    7.943190] ipa ipa3_active_clients_panic_notifier:259

I then booted the normal rootfs, and from there I can ubiattach the recovery partition and mount the recoveryfs volume:

~ # ubiattach -p /dev/mtd22
[   50.903364] ubi2: attaching mtd22
[   51.060835] ubi2: scanning is finished
[   51.066292] CHRDEV "ubi2" major number 226 goes below the dynamic allocation range
[   51.067865] ubi2: attached mtd22 (name "recoveryfs", size 43 MiB)
[   51.100394] ubi2: PEB size: 262144 bytes (256 KiB), LEB size: 253952 bytes
[   51.100427] ubi2: min./max. I/O unit sizes: 4096/4096, sub-page size 4096
[   51.106160] ubi2: VID header offset: 4096 (aligned 4096), data offset: 8192
[   51.120237] ubi2: good PEBs: 172, bad PEBs: 0, corrupted PEBs: 0
[   51.120269] ubi2: user volume: 2, internal volumes: 1, max. volumes count: 128
[   51.150477] ubi2: max/mean erase counter: 127/67, WL threshold: 4096, image sequence number: 1118681957
[   51.150514] ubi2: available PEBs: 0, total reserved PEBs: 172, PEBs reserved for bad PEB handling: 40
[   51.158691] ubi2: background thread "ubi_bgt2d" started, PID 1607
UBI device number 2, total 172 LEBs (43679744 bytes, 41.6 MiB), available 0 LEBs (0 bytes), LEB size 253952 bytes (248.0 KiB)
~ # ubiblock --create /dev/ubi2_1
[  151.715001] block ubiblock2_1: created from ubi2:1(recoveryfs)
~ # mount -t squashfs /dev/ubiblock2_ /tmp/recovery
ubiblock2_0   ubiblock2_1
~ # mount -t squashfs /dev/ubiblock2_1 /tmp/recovery
~ # cd /tmp/recovery/
/var/volatile/tmp/recovery # ls -l
total 0
drwxr-xr-x    2 root  root     3 Sep 11  2019 app
drwxr-xr-x    2 root  root  3175 Sep 12  2019 bin
drwxr-xr-x    2 root  root     3 Sep 11  2019 boot
drwxr-xr-x    2 root  root     3 Sep 12  2019 cache
drwxr-xr-x    2 root  root    36 Sep 12  2019 data
drwxr-xr-x    2 root  root   826 Sep 12  2019 dev
drwxr-xr-x   22 root  root   971 Sep 12  2019 etc
drwxr-xr-x    2 root  root     3 Sep 11  2019 firmware
drwxr-xr-x   21 root  root   289 Sep 12  2019 home
drwxr-xr-x    5 root  root  1343 Sep 12  2019 lib
drwxr-xr-x   12 diag  diag   134 Sep 11  2019 media
drwxr-xr-x    2 root  root     3 Sep 12  2019 misc
drwxr-xr-x    4 root  root    41 Sep 12  2019 mnt
drwxr-xr-x    2 root  root     3 Sep 11  2019 proc
drwxr-xr-x    2 root  root    45 Sep 12  2019 res
drwxr-xr-x    2 root  root     3 Sep 11  2019 rom
drwxr-xr-x    2 root  root     3 Sep 12  2019 run
drwxr-xr-x    2 root  root  2235 Sep 12  2019 sbin
lrwxrwxrwx    1 root  root    11 Sep 11  2019 sdcard -> /mnt/sdcard
drwxr-xr-x    3 root  root    29 Sep 11  2019 share
drwxr-xr-x    2 root  root     3 Sep 11  2019 sys
drwxr-xr-x    2 root  root     3 Sep 12  2019 system
drwxr-xr-x    2 root  root     3 Sep 11  2019 systemrw
lrwxrwxrwx    1 root  root     8 Sep 11  2019 tmp -> /var/tmp
drwxr-xr-x   10 root  root   140 Sep 12  2019 usr
drwxr-xr-x    8 root  root   141 Sep 11  2019 var

I can see the files and read them inside /tmp/recovery.
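Since individual file reads succeed but specific blocks fail to decompress, one way to isolate the corruption is to dump the image back off the UBI volume and compare it against the original mksquashfs output. A minimal sketch of what I have in mind (the dump path and original image file name are made up; it relies on the SquashFS superblock storing the image size as a 64-bit little-endian value at byte offset 40, and assumes full coreutils od/truncate rather than the busybox applets, whose option support may differ):

```shell
# Dump the raw image from the UBI volume; trailing garbage is harmless
# because the SquashFS superblock records the exact image size:
cat /dev/ubi2_1 > /tmp/recoveryfs.dump

# bytes_used is a 64-bit little-endian field at offset 40 of the
# SquashFS superblock; trim the dump to that size before comparing:
SIZE=$(od -A n -t u8 -j 40 -N 8 /tmp/recoveryfs.dump | tr -d ' ')
truncate -s "$SIZE" /tmp/recoveryfs.dump

# Compare against the original image produced by mksquashfs
# (file name is a placeholder):
md5sum /tmp/recoveryfs.dump recoveryfs.squashfs
```

If the checksums differ, the corruption happened in flashing or on flash; if they match, the image itself was bad to begin with.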
But if I try to chroot from the system root into the recovery root, I get:

/var/volatile/tmp/recovery/etc # cd /
/ # chroot /tmp/recovery
[ 1556.782747] SQUASHFS error: zlib decompression failed, data probably corrupt
[ 1556.782786] SQUASHFS error: squashfs_read_data failed to read block 0x1e97f5
[ 1556.801516] SQUASHFS error: zlib decompression failed, data probably corrupt
[ 1556.801554] SQUASHFS error: squashfs_read_data failed to read block 0x1e97f5
Bus error

This is the same failure as when booting "recovery" — even the failing block (0x1e97f5) is identical.

Our squashfs images are created with:

mksquashfs ${IMAGE_ROOTFS} ${OUTPUT_FILE_SYSTEM_SQ} -noappend -b 128K -no-fragments -xattrs -noI

We run kernel:

[    0.000000] Linux version 4.9.155 (oe-user@oe-host) (gcc version 6.4.0 (GCC) ) #1 PREEMPT Wed Sep 11 11:52:26 UTC 2019

On Wed, Sep 11, 2019 at 4:24 PM David Oberhollenzer <david.oberhollenzer@xxxxxxxxxxxxx> wrote:
>
> On 9/11/19 2:41 PM, Boris Stein wrote:
> > Which tool should I use for dumping squashfs volume?
> >
>
> I would try to either use dd on the ubiblock which you are trying to mount,
> or cat on the underlying ubi volume and pipe it into a file.
>
> It shouldn't matter if there's extra garbage at the end. The SquashFS super
> block specifies the exact size of the image.
>
> Regards,
>
> David

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/