Re: MTD: How to get actual image size from MTD partition

Hi All,

On Mon, 30 Aug 2021 at 21:28, Pintu Agarwal <pintu.ping@xxxxxxxxx> wrote:
>
> On Sun, 22 Aug 2021 at 19:51, Ezequiel Garcia
> <ezequiel@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> > In other words, IMO it's best to expose the NAND through UBI
> > for both read-only and read-write access, using a single UBI device,
> > and then creating UBI volumes as needed. This will allow UBI
> > to spread wear leveling across the whole device, which is expected
> > to increase the flash lifetime.
> >
> > For instance, just as some silly example, you could have something like this:
> >
> >                                | RootFS SquashFS  |
> >                                | UBI block        | UBIFS User R-W area
> > ------------------------------------------------------------------------
> > Kernel A | Kernel B | RootFS A | RootFS B         | User
> > ------------------------------------------------------------------------
> >                                  UBIX
> > ------------------------------------------------------------------------
> >                                  /dev/mtdX
> >
> > This setup allows safe kernel and rootfs upgrading. The RootFS is read-only
> > via SquashFS and there's a read-write user area. UBI supports all
> > the volumes, handling bad blocks and wear leveling.
> >
> Dear Ezequiel,
> Thank you so much for your reply.
>
> This is exactly what we are also doing :)
> In our system we have a mix of raw and UBI partitions.
> The UBI partitioning is done in almost exactly the same way.
> Only for the rootfs (squashfs) I see we were using /dev/mtdblock<id>
> to mount the rootfs.
> Now I understand we should change it to use /dev/ubiblock<id>.
> This has several benefits, but the most important could be that
> ubiblock handles bad blocks and wear leveling automatically,
> whereas mtdblock accesses the flash directly?
> I found some references for this.
> So, this seems good for my proposal.
>
> Another thing that is still open for us is:
> How do we calculate the exact image size from a raw MTD partition?
> For example, suppose for one of the raw NAND partitions the size is
> defined as 15MB, but the actual image we flash is only 2.5MB.
> So, at runtime, how do we determine the image size as ~2.5MB (at
> least roughly)?
> Is that possible?
>

I am happy to report that using "ubiblock" for squashfs mounting has
proven very helpful for us.
We have seen almost a 2x performance boost when using ubiblock for the
rootfs as well as for mounting other read-only volumes.
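
For reference, this is roughly our new mount path (the mtd number and
volume id below are placeholders for our actual layout):

$ ubiattach -m 4                     # attach mtd4; creates /dev/ubi0
$ ubiblock --create /dev/ubi0_0      # exposes read-only /dev/ubiblock0_0
$ mount -t squashfs -o ro /dev/ubiblock0_0 /mnt/rootfs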

However, we have found a few issues when defining the read-only volume
as STATIC.
With a static volume we see that the OTA update fails during "fsync".
That is, ota_fsync fails here:
https://gerrit.pixelexperience.org/plugins/gitiles/bootable_recovery/+/ff6df890a2a01bf3bf56d3f430b17a5ef69055cf%5E%21/otafault/ota_io.cpp
int ota_fsync(int fd) {
    int status = fsync(fd);
    if (status == -1 && errno == EIO) {
        have_eio_error = true;
    }
    return status;
}

Is this a known issue with static volumes?
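
(For context, a sketch of the update path we could use instead, with a
placeholder volume node and image name: ubiupdatevol lets UBI record
the data size and CRC itself, rather than a plain write + fsync.)

$ ubiupdatevol /dev/ubi0_0 rootfs.squashfs
$ cat /sys/class/ubi/ubi0_0/data_bytes   # now reports the exact image size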

For now we are using a dynamic volume, but the problem is that with a
dynamic volume we cannot get the exact image size from:
$ cat /sys/class/ubi/ubi0_0/data_bytes
==> For a dynamic volume this returns the total volume size.
==> Thus our md5 integrity check does not match the flashed image
size.

Is there an alternative way to handle this issue?
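
One workaround we are considering (just a sketch; IMAGE_SIZE recorded
at build time is our own convention, not something UBI provides):

$ IMAGE_SIZE=$(stat -c %s rootfs.squashfs)         # saved at build time
$ head -c "$IMAGE_SIZE" /dev/ubiblock0_0 | md5sum  # hash only the flashed bytes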


Thanks,
Pintu


