Re: [PATCH 1/3] xfs: pass alloc flags through to xfs_extent_busy_flush()

> On Jun 15, 2023, at 4:33 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> 
> On Thu, Jun 15, 2023 at 11:09:41PM +0000, Wengang Wang wrote:
>> When mounting the problematic metadump with the patches applied, I see the following reported:
>> 
>> For more information about troubleshooting your instance using a console connection, see the documentation: https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/serialconsole.htm#four
>> =================================================
>> [   67.212496] loop: module loaded
>> [   67.214732] loop0: detected capacity change from 0 to 629137408
>> [   67.247542] XFS (loop0): Deprecated V4 format (crc=0) will not be supported after September 2030.
>> [   67.249257] XFS (loop0): Mounting V4 Filesystem af755a98-5f62-421d-aa81-2db7bffd2c40
>> [   72.241546] XFS (loop0): Starting recovery (logdev: internal)
>> [   92.218256] XFS (loop0): Internal error ltbno + ltlen > bno at line 1957 of file fs/xfs/libxfs/xfs_alloc.c.  Caller xfs_free_ag_extent+0x3f6/0x870 [xfs]
>> [   92.249802] CPU: 1 PID: 4201 Comm: mount Not tainted 6.4.0-rc6 #8
> 
> What is the test you are running? Please describe how you reproduced
> this failure - a reproducer script would be the best thing here.

I was mounting a (copy of a) V4 metadump from a customer.

> 
> Does the test fail on a v5 filesystem?

N/A.

> 
>> I think that's because the same EFI record was going to be freed again
>> by xfs_extent_free_finish_item() after it had already been freed by
>> xfs_efi_item_recover(). I was trying to fix the above issue in my previous
>> patch by checking the intent log item's lsn and avoiding running
>> iop_recover() in xlog_recover_process_intents().
>> 
>> Now I am wondering whether we could somehow set a flag, say XFS_EFI_PROCESSED,
>> on the in-memory xfs_efi_log_item once xfs_efi_item_recover() has processed
>> that record. Then xfs_extent_free_finish_item() would skip processing that
>> xfs_efi_log_item when it sees XFS_EFI_PROCESSED and return OK. That way we
>> could avoid the double free.
> 
> I'm not really interested in speculation of the cause or the fix at
> this point. I want to know how the problem is triggered so I can
> work out exactly what caused it, along with why we don't have
> coverage of this specific failure case in fstests already.
> 

I identified the cause by adding additional debug logging on top of my previous patch.
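
To make the XFS_EFI_PROCESSED idea quoted above a little more concrete, here is
a rough sketch of the shape I have in mind. The structure layout, field and
helper names below are made up for illustration only and do not match the real
fs/xfs code:

/*
 * Illustration only -- hypothetical names, not the actual fs/xfs
 * structures or function signatures.
 */
#define XFS_EFI_PROCESSED	(1u << 0)	/* hypothetical flag */

struct efi_item_sketch {
	unsigned int	flags;		/* hypothetical flags word */
	/* ... the rest of the in-memory EFI log item ... */
};

/* Log recovery replays the EFI, then marks it as processed. */
static void efi_item_recover_sketch(struct efi_item_sketch *efip)
{
	/* ... free the extents recorded in this EFI ... */
	efip->flags |= XFS_EFI_PROCESSED;
}

/*
 * When the deferred-ops code later tries to finish the same item,
 * skip the work so the extent is not freed a second time.
 */
static int extent_free_finish_item_sketch(struct efi_item_sketch *efip)
{
	if (efip->flags & XFS_EFI_PROCESSED)
		return 0;	/* already freed during recovery */
	/* ... normal path: free the extent and drop the EFI ... */
	return 0;
}

The only point of the sketch is that the in-memory EFI item would remember that
recovery already freed its extents, so the later finish path could return early
instead of freeing the same extent again.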

> Indeed, if you have a script that is reproducing this, please turn
> it into a fstests test so it becomes a regression test that is
> always run...
> 

So far I don't have such a script. I can try to write one, but I am not sure I can finish it quickly.
What should we do if we don't end up with a stable reproducer soon?

thanks,
wengang
