Help needed | Deploy UBIFS on QSPI NOR flash (S25FS512S)

Hi maintainers/developers,

I have been trying to deploy UBIFS on QSPI NOR flash and need some help with it.
The LS1088ARDB platform has two Spansion S25FS512S flashes, 64 MiB each, with a 256-byte page size and a 256 KiB erase-sector size.

I erased the flash partition (256 KiB sectors) successfully but got stuck at the mount step.
Logs are provided below [1].
After the mount attempt I read the flash contents back and found that only the first 64 bytes had been written; everything from offset 0x40 onwards is still 0xff. Moreover, the controller's TxFIFO size is 128 bytes. A write/read-back check I plan to run is sketched below.
I am using the dts [2] for this platform; the patch is under review.
Could someone please give me a pointer on how to debug or resolve this issue?
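
For reference, this is the minimal write/read-back check I plan to run to confirm where the truncation happens (a sketch, assuming mtd-utils is installed; pattern.bin and readback.bin are just scratch files):

# write a known 4 KiB pattern into the freshly erased partition and read it back
dd if=/dev/urandom of=pattern.bin bs=4096 count=1
mtd_debug erase /dev/mtd3 0 0x40000
mtd_debug write /dev/mtd3 0 4096 pattern.bin
mtd_debug read /dev/mtd3 0 4096 readback.bin
cmp pattern.bin readback.bin

If cmp reports the first difference at byte 65, the write is being cut off at 64 bytes somewhere below the MTD layer.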

Thanks
Kuldeep

[1]
cat /proc/mtd
dev:    size   erasesize  name
mtd0: 20000000 00020000 "530000000.flash"
mtd1: 04000000 00040000 "20c0000.spi-0"
mtd2: 01000000 00040000 "read_only"
mtd3: 03000000 00040000 "file_system"
root@ls1012ardb:~# flash_erase /dev/mtd3 0 0
Erasing 256 Kibyte @ 2fc0000 -- 100 % complete
root@ls1012ardb:~# mtdinfo /dev/mtd3
mtd3
Name:                           file_system
Type:                           nor
Eraseblock size:                262144 bytes, 256.0 KiB
Amount of eraseblocks:          192 (50331648 bytes, 48.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:6
Bad blocks are allowed:         false
Device is writable:             true

root@ls1012ardb:~#
root@ls1012ardb:~# ubiattach /dev/ubi_ctrl -m 3
[  415.072280] ubi0: attaching mtd3
[  415.081749] ubi0: scanning is finished
[  415.085508] ubi0: empty MTD device detected
[  417.065014] ubi0: attached mtd3 (name "file_system", size 48 MiB)
[  417.071126] ubi0: PEB size: 262144 bytes (256 KiB), LEB size: 262016 bytes
[  417.078006] ubi0: min./max. I/O unit sizes: 1/256, sub-page size 1
[  417.084187] ubi0: VID header offset: 64 (aligned 64), data offset: 128
[  417.090719] ubi0: good PEBs: 192, bad PEBs: 0, corrupted PEBs: 0
[  417.096728] ubi0: user volume: 0, internal volumes: 1, max. volumes count: 128
[  417.103955] ubi0: max/mean erase counter: 0/0, WL threshold: 4096, image sequence number: 2360392164
[  417.113088] ubi0: available PEBs: 188, total reserved PEBs: 4, PEBs reserved for bad PEB handling: 0
[  417.122273] ubi0: background thread "ubi_bgt0d" started, PID 1196
UBI device number 0, total 192 LEBs (50307072 bytes, 48.0 MiB), available 188 LEBs (49259008 bytes, 47.0 MiB), LEB size 262016 bytes (255.9 KiB)
root@ls1012ardb:~# cat /sys/class/ubi/ubi0/max_vol_count
128
root@ls1012ardb:~# ubimkvol /dev/ubi0 -N ubi_rootfs -S 128
Volume ID 0, size 128 LEBs (33538048 bytes, 32.0 MiB), LEB size 262016 bytes (255.9 KiB), dynamic, name "ubi_rootfs", alignment 1
root@ls1012ardb:~# mount -t ubifs ubi0_0 /tmp
[  443.249649] UBIFS (ubi0:0): default file-system created
[  444.122800] UBIFS error (ubi0:0 pid 1199): ubifs_check_node: bad CRC: calculated 0x51676941, read 0xd999b981
[  444.132656] UBIFS error (ubi0:0 pid 1199): ubifs_check_node: bad node at LEB 0:0
[  444.140070]  magic          0x6101831
[  444.143728]  crc            0xd999b981
[  444.147497]  node_type      6 (superblock node)
[  444.152027]  group_type     0 (no node group)
[  444.156382]  sqnum          2
[  444.159341]  len            4096
[  444.162569]  key_hash       0 (R5)
[  444.165971]  key_fmt        0 (simple)
[  444.169719]  flags          0x8
[  444.172857]  big_lpt        0
[  444.175815]  space_fixup    0
[  444.178781]  min_io_size    8
[  444.181746]  leb_size       262016
[  444.185146]  leb_cnt        128
[  444.188284]  max_leb_cnt    128
[  444.191416]  max_bud_bytes  786048
[  444.194816]  log_lebs       3
[  444.197784]  lpt_lebs       2
[  444.200750]  orph_lebs      2
[  444.203711]  jhead_cnt      1
[  444.206677]  fanout         8
[  444.209641]  lsave_cnt      256
[  444.212781]  default_compr  1
[  444.215740]  rp_size        1545894
[  444.219227]  rp_uid         0
[  444.222191]  rp_gid         0
[  444.225157]  fmt_version    5
[  444.228121]  time_gran      1000000000
[  444.231862]  UUID           975E6BE0-3472-4BAC-8176-8C491D3AD3CC
[  444.237874] CPU: 1 PID: 1199 Comm: mount Not tainted 5.4.0-03609-g51ebe9040582-dirty #14
[  444.245959] Hardware name: LS1088A RDB Board (DT)
[  444.250655] Call trace:
[  444.253095]  dump_backtrace+0x0/0x150
[  444.256750]  show_stack+0x14/0x20
[  444.260059]  dump_stack+0xbc/0x100
[  444.263453]  ubifs_check_node+0xc0/0x210
[  444.267367]  ubifs_read_node+0x1f0/0x248
[  444.271283]  ubifs_read_superblock+0x538/0xd00
[  444.275719]  ubifs_mount+0xa4c/0x13b8
[  444.279376]  legacy_get_tree+0x2c/0x58
[  444.283118]  vfs_get_tree+0x28/0x108
[  444.286687]  do_mount+0x64c/0x970
[  444.289994]  ksys_mount+0x90/0x100
[  444.293389]  __arm64_sys_mount+0x1c/0x28
[  444.297305]  el0_svc_common.constprop.2+0x64/0x160
[  444.302090]  el0_svc_handler+0x20/0x80
[  444.305832]  el0_svc+0x8/0xc
[  444.308721] UBIFS error (ubi0:0 pid 1199): ubifs_read_node: expected node type 6
mount: mount ubi0_0 on /var/volatile/tmp failed: Structure needs cleaning
root@ls1012ardb:~# mtd_debug read /dev/mtd3 0x0 0x1000 rw
Copied 4096 bytes from address 0x00000000 in flash to rw
root@ls1012ardb:~# hexdump -n 100 rw
0000000 4255 2349 0001 0000 0000 0000 0000 0200
0000010 0000 4000 0000 8000 b08c e4b9 0000 0000
0000020 0000 0000 0000 0000 0000 0000 0000 0000
0000030 0000 0000 0000 0000 0000 0000 8b5f 0b9e
0000040 ffff ffff ffff ffff ffff ffff ffff ffff
*
0000060 ffff ffff
0000064
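
(Note: hexdump's default 16-bit word output swaps byte pairs on this little-endian machine; the first four bytes are really 55 42 49 23, i.e. the "UBI#" EC-header magic. So the 64-byte EC header was written, while the VID-header area at offset 64 ("VID header offset: 64" in the attach log) is still erased 0xff. A byte-exact view:

hexdump -C -n 128 rw

should show the EC header followed by nothing but ff from offset 0x40.)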

Steps to deploy UBIFS:
flash_erase /dev/mtd3 0 0
ubiattach /dev/ubi_ctrl -m 3
cat /sys/class/ubi/ubi0/max_vol_count
ubimkvol /dev/ubi0 -N ubi_rootfs -S 128
mount -t ubifs ubi0_0 /tmp
mkdir /root_mnt
mount -o loop /ramdisk_rootfs_arm64.ext4 /root_mnt
cp -r /root_mnt/* /tmp/
umount /tmp/; umount /root_mnt; rm -rf /root_mnt
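
For comparison, an image-based variant of the same deployment (a sketch, untested on this board; the geometry flags below are taken from the mtdinfo/ubiattach output above, and the ubinize.cfg contents and rootfs path are placeholders):

# on the host: build the UBIFS image (min I/O 1 byte, LEB 262016 bytes, 128 LEBs)
mkfs.ubifs -m 1 -e 262016 -c 128 -r rootfs/ ubifs.img
# wrap it into a UBI image (PEB 262144 bytes, sub-page 1 byte)
ubinize -o ubi.img -m 1 -p 262144 -s 1 ubinize.cfg
# on the target: write it in one step instead of ubiattach/ubimkvol
ubiformat /dev/mtd3 -f ubi.img

where ubinize.cfg would contain something like:

[ubifs]
mode=ubi
image=ubifs.img
vol_id=0
vol_type=dynamic
vol_name=ubi_rootfs
vol_flags=autoresize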

[2] https://patchwork.kernel.org/patch/11272751/
