Re: [PATCH] ubi: Select fastmap anchor PEBs considering wear level rules

On 5/17/20 11:33 PM, Richard Weinberger wrote:
On Thu, Apr 30, 2020 at 4:35 PM Richard Weinberger
<richard.weinberger@xxxxxxxxx> wrote:
>
> On Thu, Apr 30, 2020 at 4:29 PM Richard Weinberger
> <richard.weinberger@xxxxxxxxx> wrote:
> >
> > On Thu, Apr 30, 2020 at 10:34 AM Arne Edholm <arne.edholm@xxxxxxxx> wrote:
> > > Are you satisfied with my answers or are there additional
> > > information/changes needed?
> >
> > Yes. In the meantime I did more testing, and with your changes the
> > anchor PEB selection
> > is *much* better. Testing took some time, and then I got pulled away to
> > other stuff...
> > Critical workloads are those where a fastmap is written not because of
> > heavy write load,
> > but because of other events like volume change/creation.
> >
> > A good reproducer seems to be something stupid like this:
> > for i in `seq 1000` ; do ubimkvol -N test -m /dev/ubi0 >/dev/null &&
> > ubirmvol /dev/ubi0 -n 0 ; done
> > Wearleveling threshold is 50, btw.
> >
> > Without your patch, the erase counter of the first 64 PEBs:
> > 4    4    4    4    4    4    4    4
> > 4    4    4    4    4    4    4    4
> > 4    4    4    4    4    4    4    4
> > 4    4    4    4    4    4    4    4
> > 4    4    4    4    4    4    4    4
> > 4    4    4    4    4    4    22   4
> > 4    19   4    4    4    4    4    4
> > 4    4    4    4    8    908  906  1
> >
> > And with your patch:
> > 95   95   95   95   95   95   95   95
> > 95   95   95   95   95   95   95   95
> > 95   95   95   95   95   95   95   95
> > 95   95   95   95   95   95   95   95
> > 95   95   95   95   95   95   95   95
> > 95   95   95   95   95   95   95   95
> > 95   95   95   94   94   94   94   94
> > 94   94   94   94   94   94   94   95
>
> While reading my own mail on the mailing list I noticed something
> I had missed
> last time on my terminal.
> If we sum all the numbers in the grid, it should be roughly 2000,
> because the test triggered 2000 fastmap writes.
> But 95 times 64 is much more than 2000.
>
> Your patch produces an almost perfect distribution, but the overall erase
> count is about three times what is expected.
> Hmmm.
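A quick arithmetic check (not part of the original thread) makes the discrepancy concrete. It assumes each fastmap write costs one anchor-PEB erase; the counter values are taken from the grids above.

```shell
#!/bin/sh
# Sanity check of the erase counters quoted above.
# Assumption (hypothetical): one anchor-PEB erase per fastmap write.
total_observed=$((95 * 64))        # roughly uniform counter of 95 over 64 PEBs
expected_total=2000                # the test triggered 2000 fastmap writes
expected_per_peb=$((expected_total / 64))

echo "observed total erases:  $total_observed"   # 6080, about 3x the expected 2000
echo "expected per PEB:       $expected_per_peb" # about 31
```

So even a perfectly even distribution of ~95 erases per PEB implies roughly three times the erase work that 2000 fastmap writes should account for, which is the anomaly discussed above.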

I did more tests and can no longer reproduce the problem of excessive
wear-leveling.
Maybe my initial tests were wonky. So, the patch looks good, and so do the
results. Let's merge it for 5.8. :-)

Thank you, Richard. I have also been trying to reproduce this issue, without success.

/Arne



______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/



