Re: [PATCH v2] mm: alloc_pages_bulk: remove assumption of populating only NULL elements

On 2025/3/4 16:18, Dave Chinner wrote:

...

> 
>>
>> 1. https://lore.kernel.org/all/bd8c2f5c-464d-44ab-b607-390a87ea4cd5@xxxxxxxxxx/
>> 2. https://lore.kernel.org/all/20250212092552.1779679-1-linyunsheng@xxxxxxxxxx/
>> CC: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
>> CC: Luiz Capitulino <luizcap@xxxxxxxxxx>
>> CC: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
>> CC: Dave Chinner <david@xxxxxxxxxxxxx>
>> CC: Chuck Lever <chuck.lever@xxxxxxxxxx>
>> Signed-off-by: Yunsheng Lin <linyunsheng@xxxxxxxxxx>
>> Acked-by: Jeff Layton <jlayton@xxxxxxxxxx>
>> ---
>> V2:
>> 1. Drop RFC tag and rebased on latest linux-next.
>> 2. Fix a compile error for xfs.
> 
> And you still haven't tested the code changes to XFS, because
> this patch is also broken.

I tested XFS using the commands and test case below, and the testing
seems to work fine. Or am I missing something obvious here, as I am
not really familiar with the fs subsystem yet:

Steps to set up the XFS test filesystem:
dd if=/dev/zero of=xfs_image bs=1M count=1024
losetup -f xfs_image
losetup -a
./mkfs.xfs /dev/loop0
mkdir xfs_test
mount /dev/loop0 xfs_test/

Test shell script:
#!/bin/bash

# Configuration parameters
DIR="/home/xfs_test"              # Directory to perform file operations
FILE_COUNT=100              # Maximum number of files to create in each loop
MAX_FILE_SIZE=1024          # Maximum file size in KB
MIN_FILE_SIZE=10            # Minimum file size in KB
OPERATIONS=10               # Number of create/delete operations per loop
TOTAL_RUNS=10000               # Total number of loops to run

# Check if the directory exists
if [ ! -d "$DIR" ]; then
    echo "Directory $DIR does not exist. Please create the directory first!"
    exit 1
fi

echo "Starting file system test on: $DIR"

for ((run=1; run<=TOTAL_RUNS; run++)); do
    echo "Run $run of $TOTAL_RUNS"

    # Randomly create files
    for ((i=1; i<=OPERATIONS; i++)); do
        # Generate a random file size between MIN_FILE_SIZE and MAX_FILE_SIZE (in KB)
        FILE_SIZE=$((RANDOM % (MAX_FILE_SIZE - MIN_FILE_SIZE + 1) + MIN_FILE_SIZE))
        # Generate a unique file name using timestamp and random number
        FILE_NAME="$DIR/file_$(date +%s)_$RANDOM"
        # Create a file with random content
        dd if=/dev/urandom of="$FILE_NAME" bs=1K count=$FILE_SIZE &>/dev/null
        echo "Created file: $FILE_NAME, Size: $FILE_SIZE KB"
    done

    # Randomly delete files
    for ((i=1; i<=OPERATIONS; i++)); do
        # List all files in the directory
        FILE_LIST=($(ls $DIR))
        # Check if there are any files to delete
        if [ ${#FILE_LIST[@]} -gt 0 ]; then
            # Randomly select a file to delete
            RANDOM_FILE=${FILE_LIST[$RANDOM % ${#FILE_LIST[@]}]}
            rm -f "$DIR/$RANDOM_FILE"
            echo "Deleted file: $DIR/$RANDOM_FILE"
        fi
    done

    echo "Completed run $run"
done

echo "Test completed!"


> 
>> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
>> index 5d560e9073f4..b4e95b2dd0f0 100644
>> --- a/fs/xfs/xfs_buf.c
>> +++ b/fs/xfs/xfs_buf.c
>> @@ -319,16 +319,17 @@ xfs_buf_alloc_pages(
>>  	 * least one extra page.
>>  	 */
>>  	for (;;) {
>> -		long	last = filled;
>> +		long	alloc;
>>  
>> -		filled = alloc_pages_bulk(gfp_mask, bp->b_page_count,
>> -					  bp->b_pages);
>> +		alloc = alloc_pages_bulk(gfp_mask, bp->b_page_count - filled,
>> +					 bp->b_pages + filled);
>> +		filled += alloc;
>>  		if (filled == bp->b_page_count) {
>>  			XFS_STATS_INC(bp->b_mount, xb_page_found);
>>  			break;
>>  		}
>>  
>> -		if (filled != last)
>> +		if (alloc)
>>  			continue;
> 
> alloc_pages_bulk() now returns the number of pages allocated in the
> array. So if we ask for 4 pages, then get 2, filled is now 2. Then
> we loop, ask for another 2 pages, get those two pages and it returns
> 4. Now filled is 6, and we continue.

If I understand it correctly, it will return 2 instead of 4 for the
second loop, as 'bp->b_pages + filled' and 'bp->b_page_count - filled'
are now passed to the alloc_pages_bulk() API.
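
To spell out the arithmetic I am describing, below is a rough userspace
sketch of the retry loop as it looks after this patch, assuming
alloc_pages_bulk() now returns only the number of pages allocated in
this call (fake_alloc_pages_bulk() is just a stand-in, not the real
kernel code):

#include <stdio.h>

/* Stand-in allocator that only manages to hand out 2 pages per call. */
static long fake_alloc_pages_bulk(long nr_pages)
{
	return nr_pages < 2 ? nr_pages : 2;
}

int main(void)
{
	long page_count = 4;	/* bp->b_page_count */
	long filled = 0;	/* pages already present in bp->b_pages */

	for (;;) {
		/* Only ask for the remaining pages, at offset 'filled'. */
		long alloc = fake_alloc_pages_bulk(page_count - filled);

		filled += alloc;
		printf("alloc=%ld filled=%ld\n", alloc, filled);

		if (filled == page_count)
			break;		/* all pages allocated */
		if (alloc)
			continue;	/* made progress, retry immediately */
		/* no progress: the real code blocks/retries here */
	}

	return 0;
}

This prints "alloc=2 filled=2" and then "alloc=2 filled=4": the second
call returns 2 rather than 4, and under that assumption filled can
never go past bp->b_page_count.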

> 
> Now we ask alloc_pages_bulk() for -2 pages, which returns 4 pages...
> 
> Worse behaviour: second time around, no page allocation succeeds
> so it returns 2 pages. Filled is now 4, which is the number of pages
> we need, so we break out of the loop with only 2 pages allocated.
> There are about to be kernel crashes occurring.....
> 
> Once is a mistake, twice is completely unacceptable.  When XFS stops
> using alloc_pages_bulk (probably 6.15) I won't care anymore. But
> until then, please stop trying to change this code.
> 
> NACK.
> 
> -Dave.



