Re: [Update PATCH V3] md: don't unregister sync_thread with reconfig_mutex held

On 5/25/22 12:13 AM, Logan Gunthorpe wrote:
On 2022-05-23 03:51, Guoqing Jiang wrote:
I have tried with both the Ubuntu 22.04 kernel (5.15) and vanilla 5.12;
neither of them can pass the tests you mentioned.

[root@localhost mdadm]# lsblk|grep vd
vda          252:0    0    1G  0 disk
vdb          252:16   0    1G  0 disk
vdc          252:32   0    1G  0 disk
vdd          252:48   0    1G  0 disk
[root@localhost mdadm]# ./test --dev=disk --disks=/dev/vd{a..d}
--tests=05r1-add-internalbitmap
Testing on linux-5.12.0-default kernel
/root/mdadm/tests/05r1-add-internalbitmap... succeeded
[root@localhost mdadm]# ./test --dev=disk --disks=/dev/vd{a..d}
--tests=07reshape5intr
Testing on linux-5.12.0-default kernel
/root/mdadm/tests/07reshape5intr... FAILED - see
/var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for details
[root@localhost mdadm]# ./test --dev=disk --disks=/dev/vd{a..d}
--tests=07revert-grow
Testing on linux-5.12.0-default kernel
/root/mdadm/tests/07revert-grow... FAILED - see
/var/tmp/07revert-grow.log and /var/tmp/fail07revert-grow.log for details
[root@localhost mdadm]# head -10  /var/tmp/07revert-grow.log | grep mdadm
+ . /root/mdadm/tests/07revert-grow
++ mdadm -CR --assume-clean /dev/md0 -l5 -n4 -x1 /dev/vda /dev/vdb
/dev/vdc /dev/vdd /dev/vda /dev/vdb /dev/vdc /dev/vdd --metadata=0.9
The above line is clearly wrong from my understanding: each of /dev/vd{a..d} is passed to mdadm -CR twice.

And let's check Ubuntu 22.04.

root@vm:/home/gjiang/mdadm# lsblk|grep vd
vda    252:0    0     1G  0 disk
vdb    252:16   0     1G  0 disk
vdc    252:32   0     1G  0 disk
root@vm:/home/gjiang/mdadm# ./test --dev=disk --disks=/dev/vd{a..d}
--tests=05r1-failfast
Testing on linux-5.15.0-30-generic kernel
/home/gjiang/mdadm/tests/05r1-failfast... succeeded
root@vm:/home/gjiang/mdadm# ./test --dev=disk --disks=/dev/vd{a..c}
--tests=07reshape5intr
Testing on linux-5.15.0-30-generic kernel
/home/gjiang/mdadm/tests/07reshape5intr... FAILED - see
/var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for details
root@vm:/home/gjiang/mdadm# ./test --dev=disk --disks=/dev/vd{a..c}
--tests=07revert-grow
Testing on linux-5.15.0-30-generic kernel
/home/gjiang/mdadm/tests/07revert-grow... FAILED - see
/var/tmp/07revert-grow.log and /var/tmp/fail07revert-grow.log for details

So I would not consider it a regression.
I definitely had those tests working (at least some of the time) before I
rebased on md-next, or if I revert 7e6ba434cc6080. You might need to try
my branch (plus that patch reverted) and my mdadm branch, as there are a
number of fixes that may have helped with that specific test.

https://github.com/lsgunth/mdadm/ bugfixes2
https://github.com/sbates130272/linux-p2pmem md-bug
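
Something along these lines should get both trees in place (the exact clone/build/revert steps here are only a sketch; adjust to your setup):

git clone -b bugfixes2 https://github.com/lsgunth/mdadm mdadm-bugfixes2
(cd mdadm-bugfixes2 && make)                               # build this mdadm and run ./test from there
git clone -b md-bug https://github.com/sbates130272/linux-p2pmem linux-md-bug
(cd linux-md-bug && git revert --no-edit 7e6ba434cc6080)   # drop the suspect commit, then build and boot this kernel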

I would prefer to focus on the block tree or the md tree. With the latest
block tree (commit 44d8538d7e7dbee7246acda3b706c8134d15b9cb), I get an
issue similar to the one Donald reported, shown below; it happened with
this command (which did work with the 5.12 kernel).

vm79:~/mdadm> sudo ./test --dev=loop --tests=05r1-add-internalbitmap

May 25 04:48:51 vm79 kernel: Call Trace:
May 25 04:48:51 vm79 kernel:  <TASK>
May 25 04:48:51 vm79 kernel:  bfq_bic_update_cgroup+0x28/0x1b0
May 25 04:48:51 vm79 kernel:  bfq_insert_requests+0x29d/0x22d0
May 25 04:48:51 vm79 kernel:  ? ioc_find_get_icq+0x21c/0x2a0
May 25 04:48:51 vm79 kernel:  ? bfq_prepare_request+0x11/0x30
May 25 04:48:51 vm79 kernel:  blk_mq_sched_insert_request+0x8b/0x100
May 25 04:48:51 vm79 kernel:  blk_mq_submit_bio+0x44c/0x540
May 25 04:48:51 vm79 kernel:  __submit_bio+0xe8/0x160
May 25 04:48:51 vm79 kernel:  submit_bio_noacct_nocheck+0xf0/0x2b0
May 25 04:48:51 vm79 kernel:  ? submit_bio+0x3e/0xd0
May 25 04:48:51 vm79 kernel:  submit_bio+0x3e/0xd0
May 25 04:48:51 vm79 kernel:  submit_bh_wbc+0x117/0x140
May 25 04:48:51 vm79 kernel:  block_read_full_page+0x1eb/0x4f0
May 25 04:48:51 vm79 kernel:  ? blkdev_llseek+0x60/0x60
May 25 04:48:51 vm79 kernel:  ? folio_add_lru+0x51/0x80
May 25 04:48:51 vm79 kernel:  do_read_cache_folio+0x3b4/0x5e0
May 25 04:48:51 vm79 kernel:  ? kmem_cache_alloc_node+0x183/0x2e0
May 25 04:48:51 vm79 kernel:  ? alloc_vmap_area+0x9f/0x8a0
May 25 04:48:51 vm79 kernel:  read_cache_page+0x15/0x80
May 25 04:48:51 vm79 kernel:  read_part_sector+0x38/0x140
May 25 04:48:51 vm79 kernel:  read_lba+0x105/0x220
May 25 04:48:51 vm79 kernel:  efi_partition+0xed/0x7f0

Thanks,
Guoqing


