On Wed, Jul 10, 2019 at 12:34 PM Richard Weinberger <richard.weinberger@xxxxxxxxx> wrote:
>
> Ben,
>
> On Wed, Jul 10, 2019 at 5:32 AM Ben Schroeder <klowd92@xxxxxxxxx> wrote:
> > Why do I see a loss of space when rewriting the same file?
>
> Please see my answer below.
>
> > Can I use an upgrade scheme with file binary diff as mentioned above -
> > one that would run correctly with low available space?
>
> If the filesystem is full and all nodes are already packed, it can be
> a challenge.
>
> > Can I use an upgrade scheme with UBI volume binary diff?
>
> Yes, you can alter a dynamic volume as you wish. But keep NAND oddities in mind:
> you need to replace whole LEBs.
>
> > Sorry for the long mail, I have not found much information about fragmentation
> > and space loss in UBIFS. Let me know if I forgot any relevant details.
>
> I think the root cause of the problem you see is how NAND works.
> On NAND we always write full pages. So if you ask UBIFS to change one byte
> of a file or to change metadata, it has to waste a full page.
>
> Luckily, Linux is a modern operating system with a write cache, and upon
> write-back UBIFS can pack nodes (UBIFS data nodes, inode nodes, etc.)
> together and waste less space.
> But it still wastes a significant amount of space if userspace forces it
> to persist data, i.e. by using fsync()/fdatasync().
> If UBIFS runs out of space, the garbage collector will rewrite nodes and
> pack them tightly together.
>
> So, if you have a pre-created UBIFS, nodes are already packed and your
> update mechanism might force UBIFS to write faster than the garbage
> collector can pack nodes.
>
> With that information in mind, do your other questions resolve?

Thanks for the reply, Richard.
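As a side note, Richard's point about fsync()/fdatasync() defeating write-back packing can be illustrated with a small sketch. This is only a demonstration of the userspace API involved (it runs on any filesystem); on UBIFS, each fsync() forces the nodes written so far to be persisted before the write-back cache has a chance to pack them, which is what wastes page space:

```python
import os
import tempfile

def write_with_syncs(path, chunks, sync_each=True):
    """Write chunks to path, optionally calling fsync() after each one.

    On UBIFS, every fsync() forces the data written so far to be
    persisted immediately, so nodes cannot be packed together by
    write-back; with sync_each=False, write-back gets a chance to
    pack nodes and waste less space."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for chunk in chunks:
            os.write(fd, chunk)
            if sync_each:
                os.fsync(fd)  # force persistence of a (possibly padded) node now
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "demo")
    write_with_syncs(p, [b"a" * 10] * 5, sync_each=True)
    print(os.path.getsize(p))  # -> 50
```

The function name and structure here are purely illustrative; the relevant calls are just write() followed by fsync().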
I just wanted to reiterate that I am using SPI NOR flash, partitioned in an A/B scheme as follows:

mtd7
Name:                           rootfs
Type:                           nor
Eraseblock size:                65536 bytes, 64.0 KiB
Amount of eraseblocks:          880 (57671680 bytes, 55.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:14
Bad blocks are allowed:         false
Device is writable:             true

mtd8
Name:                           rootfs1
Type:                           ubi
Eraseblock size:                65408 bytes, 63.9 KiB
Amount of eraseblocks:          353 (23089024 bytes, 22.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:16
Bad blocks are allowed:         false
Device is writable:             true

mtd9
Name:                           rootfs2
Type:                           ubi
Eraseblock size:                65408 bytes, 63.9 KiB
Amount of eraseblocks:          353 (23089024 bytes, 22.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:18
Bad blocks are allowed:         false
Device is writable:             true

I am not sure the garbage collector will improve the available-space issue. The issue persists regardless of whether the volume is mounted with the sync option enabled or disabled, and even if I allow time for the background thread to run.

The issue seems especially problematic considering that I am downgrading the filesystem, patching files to a slightly smaller size than before, and I am still running out of disk space no matter how long I wait for garbage collection. On this point, I will stick with your answer that it can be a serious challenge if all nodes are packed and there is little free space available.

Could you please clarify your answer regarding binary patching of UBI volumes:

> Yes, you can alter a dynamic volume as you wish. But keep NAND oddities in mind:
> you need to replace whole LEBs.

It was my understanding that because UBI keeps track of bad blocks and erase counters, overwriting an existing and running UBI partition with a binary diff against a newer UBI partition might cause loss of that metadata, or even corruption.
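To make my question concrete, here is a sketch (names are mine, purely hypothetical) of how I imagined a volume-level diff at LEB granularity, using the 65408-byte LEB size from the ubinfo output above. My understanding is that the resulting changed LEBs would have to be applied through UBI itself (e.g. an atomic LEB change on the volume's character device) so that erase counters and bad-block accounting stay intact, rather than written to the raw MTD device:

```python
LEB_SIZE = 65408  # logical eraseblock size from the ubinfo output above

def changed_lebs(old_image, new_image, leb_size=LEB_SIZE):
    """Return (leb_index, new_data) pairs for every LEB that differs
    between two volume images. Since NAND/UBI updates happen in whole
    eraseblocks, a LEB is the smallest practical patch unit for a
    dynamic volume."""
    length = max(len(old_image), len(new_image))
    changed = []
    for offset in range(0, length, leb_size):
        old = old_image[offset:offset + leb_size]
        new = new_image[offset:offset + leb_size]
        if old != new:
            changed.append((offset // leb_size, new))
    return changed

# Example: two 3-LEB images that differ only in the middle LEB.
a = bytes(3 * LEB_SIZE)
b = bytearray(a)
b[LEB_SIZE + 100] = 0xFF
print([idx for idx, _ in changed_lebs(a, bytes(b))])  # -> [1]
```

Is this the kind of scheme you meant by "replace whole LEBs", and is applying the diff through UBI sufficient to avoid the metadata-loss concern?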
> --
> Thanks,
> //richard

Thanks,
Ben

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/