Re: squashfs performance regression and readahead


 



On Fri, May 13, 2022 at 1:33 PM Phillip Lougher <phillip@xxxxxxxxxxxxxxx> wrote:
>
> On 12/05/2022 07:23, Hsin-Yi Wang wrote:
>
> >
> > Hi Matthew,
> > Thanks for reviewing the patch previously. Does this version look good
> > to you? If so, I can send it to the list.
> >
> >
> > Thanks for all of your help.
>
> Hi Hsin-Yi Wang,
>
Hi Phillip,

> Thanks for updating the patch to minimise the pages used when
> creating the page actor.
>
> Looking at the new patch, I have a couple of questions which are worth
> clarifying to have a fuller understanding of the readahead behaviour.
> In other words, I'm deducing the behaviour of the readahead calls
> from context, and I want to make sure they're doing what I think
> they're doing.
>
> +       nr_pages = min(readahead_count(ractl), max_pages);
>
> As I understand it, this will always produce nr_pages which will
> cover the entirety of the block to be decompressed?  That is if
> a block is block_size, it will return the number of pages necessary
> to decompress the entire block?  It will never return less than the
> necessary pages, i.e. if the block_size was 128K, it will never
> return less than 32 pages?
>
In most cases nr_pages is max_pages (readahead_count(ractl) == max_pages).

> Similarly, at the end of the file, where the last block may not
> be a full block (i.e. less than block_size), it will only return
> the pages covered by the tail-end block, not a full block.  For
> example, if the last block is 16 Kbytes (and block_size is
> 128 Kbytes), it will return 4 pages and not 32 pages ...
>

But it's possible that readahead_count(ractl) < max_pages at the end of the file.
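
To make the arithmetic concrete, a small illustrative sketch (assuming
4 KiB pages, so PAGE_SHIFT == 12, and a 128 KiB block, so block_log == 17;
the numbers are examples, not taken from the patch):

	size_t shift     = msblk->block_log - PAGE_SHIFT;	/* 17 - 12 = 5 */
	size_t max_pages = 1UL << shift;			/* 32 pages per 128 KiB block */
	size_t nr_pages  = min(readahead_count(ractl), max_pages);

	/*
	 * Mid-file the readahead window normally spans at least a full
	 * block, so readahead_count(ractl) >= 32 and nr_pages == 32.
	 * For a 16 KiB tail-end block, readahead_count(ractl) can be 4,
	 * so nr_pages == 4.
	 */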

> Obviously this behaviour is important for decompression, because
> you must always have enough pages to decompress the entire block
> into.
>
> You also shouldn't pass in more pages than expected (i.e. if the
> block is only expected to decompress to 4 pages, that's what you
> pass, rather than the full block size).  This helps to trap
> corrupted blocks, which are prevented from decompressing to larger
> than expected.
>
> + nr_pages = __readahead_batch(ractl, pages, max_pages)
>
> My understanding is that this call will fully populate the
> pages array with page references without any holes.  That
> is none of the pages array entries will be NULL, meaning
> there isn't a page for that entry.  In other words, if the
> pages array has 32 pages, each of the 32 entries will
> reference a page.
>
I noticed that if nr_pages < max_pages, calling read_blocklist() results
in SQUASHFS errors:

SQUASHFS error: Failed to read block 0x125ef7d: -5
SQUASHFS error: zlib decompression failed, data probably corrupt

so I added a check before squashfs_read_data(): if nr_pages < max_pages,
skip the remaining pages and let them be handled by readpage.
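
Roughly along these lines, inside the readahead loop (an illustrative
sketch only, not the exact patch; the cleanup details are simplified):

	nr_pages = __readahead_batch(ractl, pages, max_pages);
	if (nr_pages < max_pages) {
		/*
		 * Not enough pages for a full block: unlock and release
		 * the pages we batched so the remaining reads fall back
		 * to ->readpage() later.
		 */
		for (i = 0; i < nr_pages; i++) {
			unlock_page(pages[i]);
			put_page(pages[i]);
		}
		continue;	/* skip squashfs_read_data() for this block */
	}

	res = squashfs_read_data(inode->i_sb, block, bsize, NULL, actor);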

> This is important for the decompression code, because it
> expects each pages array entry to reference a page, which
> can be kmapped to an address.  If an entry in the pages
> array is NULL, this will break.
>
> If the pages array can have holes (NULL pointers), I have
> written an update patch which allows the decompression code
> to handle these NULL pointers.
>
> If the pages array can have NULL pointers, I can send you
> the patch which will deal with this.

Sure, if there are better ways to deal with this.
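
(For context, here is a rough idea of what tolerating holes in the pages
array could look like; the struct, field and function names are made up
for illustration, and this is not Phillip's actual patch:)

	/*
	 * Illustrative only: a "next page" helper that tolerates NULL
	 * entries by returning a preallocated scratch buffer instead of
	 * kmapping a missing page.
	 */
	static void *actor_next_page(struct my_page_actor *actor)
	{
		struct page *page;

		if (actor->next_index >= actor->nr_pages)
			return NULL;

		page = actor->page[actor->next_index++];
		if (!page)
			return actor->scratch;	/* hole in the pages array */

		return kmap(page);
	}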

Thanks.

>
> Thanks
>
> Phillip
>
>
>
> >
> >>>
> >>> It was also noticed that when the crash happened, the nr_pages obtained
> >>> by readahead_count() was 512.
> >>> nr_pages = readahead_count(ractl); // this line
> >>>
> >>> 2) Normal cases that won't crash:
> >>> [   22.651750] Block @ 0xb3bbca6, compressed size 42172, src size 262144
> >>> [   22.653580] Block @ 0xb3c6162, compressed size 29815, src size 262144
> >>> [   22.656692] Block @ 0xb4a293f, compressed size 17484, src size 131072
> >>> [   22.666099] Block @ 0xb593881, compressed size 39742, src size 262144
> >>> [   22.668699] Block @ 0xb59d3bf, compressed size 37841, src size 262144
> >>> [   22.695739] Block @ 0x13698673, compressed size 65907, src size 131072
> >>> [   22.698619] Block @ 0x136a87e6, compressed size 3155, src size 131072
> >>> [   22.703400] Block @ 0xb1babe8, compressed size 99391, src size 131072
> >>> [   22.706288] Block @ 0x1514abc6, compressed size 4627, src size 131072
> >>>
> >>> nr_pages was observed to be 32, 64, 256... These won't cause a crash.
> >>> Other values (max_pages, bsize, block...) look normal.
> >>>
> >>> I'm not sure why the crash happened, but I tried modifying the mask
> >>> a bit. After changing the mask value as below, the crash is gone
> >>> (nr_pages is <= 256).
> >>> Based on my testing on a 300K pack file, there's no performance change.
> >>>
> >>> diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
> >>> index 20ec48cf97c5..f6d9b6f88ed9 100644
> >>> --- a/fs/squashfs/file.c
> >>> +++ b/fs/squashfs/file.c
> >>> @@ -499,8 +499,8 @@ static void squashfs_readahead(struct
> >>> readahead_control *ractl)
> >>>    {
> >>>           struct inode *inode = ractl->mapping->host;
> >>>           struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
> >>> -       size_t mask = (1UL << msblk->block_log) - 1;
> >>>           size_t shift = msblk->block_log - PAGE_SHIFT;
> >>> +       size_t mask = (1UL << shift) - 1;
> >>>
> >>>
> >>> Any pointers are appreciated. Thanks!
> >>
>
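
(For reference, the numeric effect of that earlier mask change, using
illustrative values of block_log == 17, i.e. 128 KiB blocks, and
PAGE_SHIFT == 12:)

	size_t old_mask = (1UL << msblk->block_log) - 1;	/* 0x1ffff: byte offset within a block     */
	size_t shift    = msblk->block_log - PAGE_SHIFT;	/* 17 - 12 = 5                             */
	size_t new_mask = (1UL << shift) - 1;			/* 0x1f: page index within a block (0..31) */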



