Re: [PATCH 1/3] xfs: pass alloc flags through to xfs_extent_busy_flush()

> On Jun 15, 2023, at 5:17 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> 
> On Thu, Jun 15, 2023 at 11:51:09PM +0000, Wengang Wang wrote:
>> 
>> 
>>> On Jun 15, 2023, at 4:33 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>>> 
>>> On Thu, Jun 15, 2023 at 11:09:41PM +0000, Wengang Wang wrote:
>>>> When mounting the problematic metadump with the patches applied, I see the following reported.
>>>> 
>>>> =================================================
>>>> [   67.212496] loop: module loaded
>>>> [   67.214732] loop0: detected capacity change from 0 to 629137408
>>>> [   67.247542] XFS (loop0): Deprecated V4 format (crc=0) will not be supported after September 2030.
>>>> [   67.249257] XFS (loop0): Mounting V4 Filesystem af755a98-5f62-421d-aa81-2db7bffd2c40
>>>> [   72.241546] XFS (loop0): Starting recovery (logdev: internal)
>>>> [   92.218256] XFS (loop0): Internal error ltbno + ltlen > bno at line 1957 of file fs/xfs/libxfs/xfs_alloc.c.  Caller xfs_free_ag_extent+0x3f6/0x870 [xfs]
>>>> [   92.249802] CPU: 1 PID: 4201 Comm: mount Not tainted 6.4.0-rc6 #8
>>> 
>>> What is the test you are running? Please describe how you reproduced
>>> this failure - a reproducer script would be the best thing here.
>> 
>> I was mounting a (copy of) V4 metadump from a customer.
> 
> Is the metadump obfuscated? Can I get a copy of it via a private,
> secure channel?

I am OK with giving you a copy once I get approval for that.

> 
>>> Does the test fail on a v5 filesystem?
>> 
>> N/A.
>> 
>>> 
>>>> I think that’s because the same EFI record was going to be freed again
>>>> by xfs_extent_free_finish_item() after it already got freed by xfs_efi_item_recover().
> 
> How is this happening? Where (and why) are we deferring an extent we
> have successfully freed into a new xefi that we create a new intent
> for and then defer?
> 
> Can you post the debug output and analysis that led you to this
> observation? I certainly can't see how this can happen from looking
> at the code.
> 
>>>> I was trying to fix the above issue in my previous patch by checking the intent
>>>> log item’s lsn and avoiding running iop_recover() in xlog_recover_process_intents().
>>>> 
>>>> Now I am wondering whether we can set a flag, say XFS_EFI_PROCESSED, on the
>>>> in-memory xfs_efi_log_item once xfs_efi_item_recover() has processed that
>>>> record. xfs_extent_free_finish_item() would then skip that xfs_efi_log_item
>>>> when it sees XFS_EFI_PROCESSED and return OK. That way we can avoid the
>>>> double free.
>>> 
>>> I'm not really interested in speculation of the cause or the fix at
>>> this point. I want to know how the problem is triggered so I can
>>> work out exactly what caused it, along with why we don't have
>>> coverage of this specific failure case in fstests already.
>>> 
>> 
>> I identified the cause by adding additional debug logging on top of
>> my previous patch.
> 
> Can you please post that debug output and analysis, rather than just a
> stack trace that is completely lacking in context? Nothing can be
> inferred from a stack trace, and what you are saying is occurring
> does not match what the code should actually be doing. So I need to
> actually look at what is happening in detail to work out where this
> mismatch is coming from....

The debug patch was based on my previous patch; I will rework it on top of
yours. I will share the debug patch, its output, and my analysis with you later.
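
To make the XFS_EFI_PROCESSED idea quoted above a little more concrete, here is
a rough standalone sketch of the shape I have in mind. It is illustrative only,
not real XFS code: struct efi_item, EFI_PROCESSED, efi_item_recover() and
extent_free_finish_item() are hypothetical stand-ins for the actual
xfs_efi_log_item / xfs_efi_item_recover() / xfs_extent_free_finish_item() paths.

/*
 * Illustrative sketch only -- not actual XFS code.  All names below are
 * hypothetical stand-ins for the real log item and deferred-op code.
 */
#include <stdio.h>

#define EFI_PROCESSED	(1U << 0)	/* set once recovery has freed the extent */

struct efi_item {
	unsigned int	flags;
	unsigned long	startblock;
	unsigned long	blockcount;
};

/* stand-in for the actual extent free */
static int free_extent(struct efi_item *efi)
{
	printf("freeing extent [%lu, %lu)\n",
	       efi->startblock, efi->startblock + efi->blockcount);
	return 0;
}

/* log recovery path: free the extent, then remember that we did so */
static int efi_item_recover(struct efi_item *efi)
{
	int error = free_extent(efi);

	if (!error)
		efi->flags |= EFI_PROCESSED;
	return error;
}

/* deferred-op finish path: skip extents already freed during recovery */
static int extent_free_finish_item(struct efi_item *efi)
{
	if (efi->flags & EFI_PROCESSED)
		return 0;		/* avoid the double free */
	return free_extent(efi);
}

int main(void)
{
	struct efi_item efi = { .flags = 0, .startblock = 100, .blockcount = 8 };

	efi_item_recover(&efi);		/* frees once, marks processed */
	extent_free_finish_item(&efi);	/* sees the flag, does nothing */
	return 0;
}

Running the sketch frees the extent exactly once, which is the behaviour I would
like recovery to end up with; where such state should really live is something
to settle once the debug output pins down where the second free comes from.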

thanks,
wengang



