On Wed, Apr 26, 2023 at 2:06 PM Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> On 2023/04/26 13:56, Changhui Zhong wrote:
> > On Tue, Apr 25, 2023 at 11:27 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >>
> >> On Tue, Apr 25, 2023 at 10:37:05AM +0800, Changhui Zhong wrote:
> >>> Hello,
> >>>
> >>> The below issue was triggered in my test; it caused a system panic.
> >>> Please help check it.
> >>> branch: for-6.4/block
> >>> https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
> >>>
> >>> mdadm -CR /dev/md0 -l 1 -n 2 /dev/sda /dev/sdb -e 1.0
> >>> sgdisk -n 0:0:+100MiB /dev/md0
> >>> cat /proc/partitions
> >>> mdadm -S /dev/md0
> >>> mdadm -A /dev/md0 /dev/sda /dev/sdb
> >>> cat /proc/partitions
> >>>
> >>>
> >>> [   34.219123] BUG: kernel NULL pointer dereference, address: 00000000000000fc
> >>> [   34.219507] #PF: supervisor read access in kernel mode
> >>> [   34.219784] #PF: error_code(0x0000) - not-present page
> >>> [   34.220039] PGD 0 P4D 0
> >>> [   34.220176] Oops: 0000 [#1] PREEMPT SMP PTI
> >>> [   34.220374] CPU: 8 PID: 1956 Comm: systemd-udevd Kdump: loaded Not tainted 6.3.0-rc2+ #1
> >>> [   34.220787] Hardware name: HP ProLiant DL360 Gen9/ProLiant DL360 Gen9, BIOS P89 05/21/2018
> >>> [   34.221188] RIP: 0010:blk_mq_sched_bio_merge+0x6d/0xf0
> >>
> >> Hi Changhui,
> >>
> >> Please try the following fix:
> >>
> >> diff --git a/block/bdev.c b/block/bdev.c
> >> index 850852fe4b78..fa2838ca2e6d 100644
> >> --- a/block/bdev.c
> >> +++ b/block/bdev.c
> >> @@ -419,7 +419,11 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
> >>  	bdev->bd_inode = inode;
> >>  	bdev->bd_queue = disk->queue;
> >>  	bdev->bd_stats = alloc_percpu(struct disk_stats);
> >> -	bdev->bd_has_submit_bio = false;
> >> +
> >> +	if (partno)
> >> +		bdev->bd_has_submit_bio = disk->part0->bd_has_submit_bio;
> >> +	else
> >> +		bdev->bd_has_submit_bio = false;
> >>  	if (!bdev->bd_stats) {
> >>  		iput(inode);
> >>  		return NULL;
> >>
> >> Fixes: 9f4107b07b17 ("block: store bdev->bd_disk->fops->submit_bio state in bdev")
> >>
> >
> > Hi, Ming
> >
> > I retested the latest updated branch for-6.4/block
> > (https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.4/block),
> > which contains your fix patch "block: sync part's ->bd_has_submit_bio
> > with disk's".
> > The kernel panic no longer happens, but the test still fails: the
> > system rereads the partition table on an exclusively opened device.
>
> Is this patch in the branch for-6.4/block?
> 3723091ea188 ("block: don't set GD_NEED_PART_SCAN if scan partition failed")

Hi, Yu Kuai

That patch was not found in the for-6.4/block branch; it exists in the
master branch.

> > :: [ 01:50:05 ] :: [ BEGIN ] :: Running 'mdadm -S /dev/md0'
> > mdadm: stopped /dev/md0
> > :: [ 01:50:06 ] :: [ PASS  ] :: Command 'mdadm -S /dev/md0' (Expected 0, got 0)
> > :: [ 01:50:06 ] :: [ BEGIN ] :: Running 'mdadm -A /dev/md0 /dev/"$dev0" /dev/"$dev1"'
> > mdadm: /dev/md0 has been started with 2 drives.
> > :: [ 01:50:06 ] :: [ PASS  ] :: Command 'mdadm -A /dev/md0 /dev/"$dev0" /dev/"$dev1"' (Expected 0, got 0)
> > :: [ 01:50:09 ] :: [ BEGIN ] :: Running 'lsblk'
> > NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
> > sda                            8:0    1 447.1G  0 disk
> > ├─sda1                         8:1    1     1G  0 part  /boot
> > └─sda2                         8:2    1 446.1G  0 part
> >   ├─rhel_storageqe--104-root 253:0    0    70G  0 lvm   /
> >   ├─rhel_storageqe--104-swap 253:1    0   7.7G  0 lvm   [SWAP]
> >   └─rhel_storageqe--104-home 253:2    0 368.4G  0 lvm   /home
> > sdb                            8:16   1 447.1G  0 disk
> > ├─sdb1                         8:17   1   100M  0 part
> > └─md0                          9:0    0 447.1G  0 raid1
> >   └─md0p1                    259:0    0   100M  0 part
> > sdc                            8:32   1 447.1G  0 disk
> > ├─sdc1                         8:33   1   100M  0 part
> > └─md0                          9:0    0 447.1G  0 raid1
> >   └─md0p1                    259:0    0   100M  0 part
> > sdd                            8:48   1 447.1G  0 disk
> > :: [ 01:50:09 ] :: [ PASS  ] :: Command 'lsblk' (Expected 0, got 0)
> > :: [ 01:50:09 ] :: [ BEGIN ] :: Running 'cat /proc/partitions'
> > major minor  #blocks  name
> >
> >    8        0  468851544 sda
> >    8        1    1048576 sda1
> >    8        2  467801088 sda2
> >    8       48  468851544 sdd
> >    8       32  468851544 sdc
> >    8       33     102400 sdc1
> >    8       16  468851544 sdb
> >    8       17     102400 sdb1
> >  253        0   73400320 dm-0
> >  253        1    8060928 dm-1
> >  253        2  386338816 dm-2
> >    9        0  468851392 md0
> >  259        0     102400 md0p1
> >
> > Thanks,