__write_sb_page() rounds up the io size to the optimal io size if it
doesn't exceed the data offset, but it doesn't check whether the final
size exceeds the bitmap length.

For example:
page count - 1
page size - 4K
data offset - 1M
optimal io size - 256K

The final io size would be 256K (64 pages), but md_bitmap_storage_alloc()
allocated only 1 page, so the IO writes 1 valid page and 63 pages that
happen to be allocated after it. This leaks kernel memory to the raid
device superblock.

This issue caused a data transfer failure in nvme-tcp. The network driver
checks the first page of an IO with sendpage_ok(), which returns true if
the page isn't a slab page and its refcount is >= 1. If the first page is
!sendpage_ok(), the network driver disables MSG_SPLICE_PAGES.

As of now, the network layer assumes all pages of the IO are
sendpage_ok() when MSG_SPLICE_PAGES is on.

The bitmap pages aren't slab pages, so the first page of the IO is
sendpage_ok(), but the additional pages that happen to be allocated after
the bitmap pages might be !sendpage_ok(). That causes
skb_splice_from_iter() to stop the data transfer; in the case below it
hangs 'mdadm --create'.

The bug is reproducible. To reproduce it we need nvme-over-tcp
controllers with an optimal IO size bigger than PAGE_SIZE; creating a
raid with a bitmap over those devices reproduces the bug. To simulate a
large optimal IO size you can use dm-stripe with a single device.

A script that reproduces the issue on top of brd devices using dm-stripe
is attached below (it will be added to blktests).

I have added some logs to test the theory:
...
md: created bitmap (1 pages) for device md127
__write_sb_page before md_super_write offset: 16, size: 262144. pfn: 0x53ee
=== __write_sb_page before md_super_write. logging pages ===
pfn: 0x53ee, slab: 0 <-- the only page that was allocated for the bitmap
pfn: 0x53ef, slab: 1
pfn: 0x53f0, slab: 0
pfn: 0x53f1, slab: 0
pfn: 0x53f2, slab: 0
pfn: 0x53f3, slab: 1
...
nvme_tcp: sendpage_ok - pfn: 0x53ee, len: 262144, offset: 0
skbuff: before sendpage_ok() - pfn: 0x53ee
skbuff: before sendpage_ok() - pfn: 0x53ef
WARNING at net/core/skbuff.c:6848 skb_splice_from_iter+0x142/0x450
skbuff: !sendpage_ok - pfn: 0x53ef. is_slab: 1, page_count: 1
...
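
For reference, the sendpage_ok() check mentioned above is roughly the
following sketch (the helper lives in include/linux/net.h as far as I
know; shown here only to illustrate why a slab page in the middle of the
IO trips the warning):

	/* A page is safe to splice only if it is not a slab page and
	 * holds at least one reference. */
	static inline bool sendpage_ok(struct page *page)
	{
		return !PageSlab(page) && page_count(page) >= 1;
	}

Only the first page of the IO goes through this check before
MSG_SPLICE_PAGES is used, so a bad page later in the payload is only
caught by the warning in skb_splice_from_iter().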
Signed-off-by: Ofir Gal <ofir.gal@xxxxxxxxxxx>
---
 drivers/md/md-bitmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 0a2d37eb38ef..3cc2d0ad6f00 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -227,6 +227,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
 	struct block_device *bdev;
 	struct mddev *mddev = bitmap->mddev;
 	struct bitmap_storage *store = &bitmap->storage;
+	unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) << PAGE_SHIFT;
 	loff_t sboff, offset = mddev->bitmap_info.offset;
 	sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
 	unsigned int size = PAGE_SIZE;
@@ -273,7 +274,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
 		/* DATA METADATA BITMAP - no problems */
 	}
 
-	md_super_write(mddev, rdev, sboff + ps, (int) size, page);
+	md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
 	return 0;
 }
 
-- 
2.45.1

Reproduce script:

 reproduce.sh | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100755 reproduce.sh

diff --git a/reproduce.sh b/reproduce.sh
new file mode 100755
index 000000000..8ae226b18
--- /dev/null
+++ b/reproduce.sh
@@ -0,0 +1,114 @@
+#!/usr/bin/env sh
+# SPDX-License-Identifier: MIT
+
+set -e
+
+load_modules() {
+	modprobe nvme
+	modprobe nvme-tcp
+	modprobe nvmet
+	modprobe nvmet-tcp
+}
+
+setup_ns() {
+	local dev=$1
+	local num=$2
+	local port=$3
+	ls $dev > /dev/null
+
+	mkdir -p /sys/kernel/config/nvmet/subsystems/$num
+	cd /sys/kernel/config/nvmet/subsystems/$num
+	echo 1 > attr_allow_any_host
+
+	mkdir -p namespaces/$num
+	cd namespaces/$num/
+	echo $dev > device_path
+	echo 1 > enable
+
+	ln -s /sys/kernel/config/nvmet/subsystems/$num \
+		/sys/kernel/config/nvmet/ports/$port/subsystems/
+}
+
+setup_port() {
+	local num=$1
+
+	mkdir -p /sys/kernel/config/nvmet/ports/$num
+	cd /sys/kernel/config/nvmet/ports/$num
+	echo "127.0.0.1" > addr_traddr
+	echo tcp > addr_trtype
+	echo 8009 > addr_trsvcid
+	echo ipv4 > addr_adrfam
+}
+
+setup_big_opt_io() {
+	local dev=$1
+	local name=$2
+
+	# Change optimal IO size by creating dm stripe
+	dmsetup create $name --table \
+		"0 `blockdev --getsz $dev` striped 1 512 $dev 0"
+}
+
+setup_targets() {
+	# Setup ram devices instead of using real nvme devices
+	modprobe brd rd_size=1048576 rd_nr=2 # 1GiB
+
+	setup_big_opt_io /dev/ram0 ram0_big_opt_io
+	setup_big_opt_io /dev/ram1 ram1_big_opt_io
+
+	setup_port 1
+	setup_ns /dev/mapper/ram0_big_opt_io 1 1
+	setup_ns /dev/mapper/ram1_big_opt_io 2 1
+}
+
+setup_initiators() {
+	nvme connect -t tcp -n 1 -a 127.0.0.1 -s 8009
+	nvme connect -t tcp -n 2 -a 127.0.0.1 -s 8009
+}
+
+reproduce_warn() {
+	local devs=$@
+
+	# Hangs here
+	mdadm --create /dev/md/test_md --level=1 --bitmap=internal \
+		--bitmap-chunk=1024K --assume-clean --run --raid-devices=2 $devs
+}
+
+echo "###################################
+
+The script creates 2 nvme initiators in order to reproduce the bug.
+The script doesn't know which controllers it created, choose the new nvme
+controllers when asked.
+
+###################################
+
+Press enter to continue.
+"
+
+read tmp
+
+echo "# Creating 2 nvme controllers for the reproduction. current nvme devices:"
+lsblk -s | grep nvme || true
+echo "---------------------------------
+"
+
+load_modules
+setup_targets
+setup_initiators
+
+sleep 0.1 # Wait for the new nvme ctrls to show up
+
+echo "# Created 2 nvme devices. nvme devices list:"
+
+lsblk -s | grep nvme
+echo "---------------------------------
+"
+
+echo "# Insert the new nvme devices as separated lines. both should be with size of 1G"
+read dev1
+read dev2
+
+ls /dev/$dev1 > /dev/null
+ls /dev/$dev2 > /dev/null
+
+reproduce_warn /dev/$dev1 /dev/$dev2
-- 
2.45.1