On 6.04.19 г. 3:16 ч., Ming Lei wrote:
> Hi Nikolay,
>
> On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
>> Hello Ming,
>>
>> Following the multipage biovec rework, what is the maximum
>> data that a bio could contain? Should it be PAGE_SIZE * the
>> number of bio_vecs
>
> There isn't any maximum data limit on a bio submitted from the fs;
> the block layer will make the final bio sent to the driver correct
> by applying all kinds of queue limits, such as max segment size,
> max segment count, max sectors, ...
>
>> or something else? Currently I can see bios as large as 127 megs
>> on sequential workloads. I was prompted to look into this since
>> btrfs has a memory allocation that depends on the amount of data
>> in the bio, and this particular memory allocation started failing
>> with order 6 allocs.
>
> Could you share the code with us? I don't see why order 6 allocs are
> a must.

When a bio is submitted, btrfs has to calculate the checksums for it;
this happens in btrfs_csum_one_bio. Said checksums are stored in a
kmalloc'ed array whose size is calculated as: 32 + (bio size / btrfs'
block size, usually 4k) * 4 bytes per checksum. So for a 127mb bio that
would be: 32 + (134184960 / 4096) * 4 = 131072 bytes, i.e. an order 5
allocation (see the sketch at the bottom of this mail). Admittedly the
code in btrfs should know better than to make unbounded allocations
without a fallback, but bios suddenly becoming rather unbounded in size
caught us off guard.

>
>> Further debugging showed that with the following xfs_io command line:
>>
>> xfs_io -f -c "pwrite -S 0x61 -b 4m 0 10g" /media/scratch/file1
>>
>> I can easily see very large bios:
>>
>> [ 188.366540] kworker/-7    3.... 34847519us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 28 bi_vcnt_max: 256
>> [ 188.367129] kworker/-658  2.... 34946536us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 28 bi_vcnt_max: 256
>> [ 188.367714] kworker/-7    3.... 35107967us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 30 bi_vcnt_max: 256
>> [ 188.368319] kworker/-658  2.... 35229894us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 32 bi_vcnt_max: 256
>> [ 188.368909] kworker/-7    3.... 35374809us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 25 bi_vcnt_max: 256
>> [ 188.369498] kworker/-658  2.... 35516194us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 31 bi_vcnt_max: 256
>> [ 188.370086] kworker/-7    3.... 35663669us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 32 bi_vcnt_max: 256
>> [ 188.370696] kworker/-658  2.... 35791006us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 100655104 bi_vcn: 24 bi_vcnt_max: 256
>> [ 188.371335] kworker/-658  2.... 35816114us : btrfs_submit_bio_hook: bio: ffff8dffe99434f0 bi_iter.bi_size = 33591296 bi_vcn: 5 bi_vcnt_max: 256
>>
>> So that's 127 megs in a single bio? This stems from the new merging
>> logic. 07173c3ec276 ("block: enable multipage bvecs") made it so that
>> physically contiguous pages added to the bio just modify
>> bi_iter.bi_size and the initial page's bio_vec's bv_len. There's no
>> longer a page == bv->bv_page portion of the check.
>
> bio_add_page() tries its best to put physically contiguous pages into
> one bvec, and I don't see anything wrong in the log.
>
> Could you show us what the real problem is?
>
> Thanks,
> Ming
>
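
To make the sizing above concrete, here is a back-of-the-envelope model
of the allocation (userspace C, not the actual btrfs code; the 32-byte
header stands in for the ordered-sum header and the 4 bytes per sector
for a crc32c checksum):

#include <stdio.h>

#define SECTORSIZE 4096UL   /* btrfs block size assumed in the math above */
#define CSUM_SIZE  4UL      /* crc32c, 4 bytes per sector */
#define HDR_SIZE   32UL     /* stand-in for the ordered-sum header */
#define PAGE_SZ    4096UL

/* header + one checksum per sector, as in the allocation described above */
static unsigned long csum_array_size(unsigned long bio_size)
{
	return HDR_SIZE + (bio_size / SECTORSIZE) * CSUM_SIZE;
}

/* smallest page order that can satisfy the allocation (get_order()-like) */
static int order_for(unsigned long size)
{
	unsigned long pages = (size + PAGE_SZ - 1) / PAGE_SZ;
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	unsigned long bio_size = 134184960UL;	/* the 127M bio from the trace */
	unsigned long size = csum_array_size(bio_size);

	printf("%lu byte csum array -> order %d allocation\n",
	       size, order_for(size));
	return 0;
}

This prints "131072 byte csum array -> order 5 allocation" for the 127M
bio; a bio roughly twice that size already pushes the same math into
order 6 territory, which matches the failures we observed.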
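
And to spell out the merging change quoted above: below is a toy
userspace model (not the kernel code; structs are trimmed down and the
names only mirror the kernel's) of what bio_add_page() effectively does
after 07173c3ec276 when an incoming page is physically contiguous with
the last bvec:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* trimmed-down models; bv_phys stands in for page_to_phys(bv_page) */
struct bio_vec {
	uint64_t bv_phys;
	unsigned int bv_offset;
	unsigned int bv_len;
};

struct bio {
	unsigned int bi_size;	/* models bi_iter.bi_size */
	unsigned int bi_vcnt;
	struct bio_vec vec[256];
};

/*
 * Post-07173c3ec276 behaviour: physical contiguity with the end of the
 * last bvec is enough to merge; page == bv->bv_page is not required.
 */
static bool try_merge(struct bio *bio, uint64_t phys, unsigned int off,
		      unsigned int len)
{
	struct bio_vec *bv;

	if (!bio->bi_vcnt)
		return false;
	bv = &bio->vec[bio->bi_vcnt - 1];
	if (phys + off != bv->bv_phys + bv->bv_offset + bv->bv_len)
		return false;
	bv->bv_len += len;	/* the bvec grows... */
	bio->bi_size += len;	/* ...and so does the bio; bi_vcnt stays put */
	return true;
}

int main(void)
{
	struct bio bio = { .bi_size = 4096, .bi_vcnt = 1,
			   .vec[0] = { 0x100000, 0, 4096 } };
	unsigned int i;

	/* add 31 more physically contiguous 4k pages */
	for (i = 1; i < 32; i++)
		try_merge(&bio, 0x100000 + i * 4096ULL, 0, 4096);

	printf("bi_size=%u bi_vcnt=%u\n", bio.bi_size, bio.bi_vcnt);
	return 0;
}

This prints "bi_size=131072 bi_vcnt=1": 32 contiguous pages, still one
bvec. With 256 bvec slots and each bvec able to cover an arbitrarily
long physically contiguous range, nothing caps the bio anywhere near
PAGE_SIZE * 256, which is how the trace ends up with 127M bios at a
bi_vcnt of only around 30.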