On Fri, Mar 17, 2023 at 04:25:04PM +0530, Ojaswin Mujoo wrote:
> > > This improves the accuracy of CR0/1 allocation as earlier, we could
> > > have essentially empty BLOCK_UNINIT groups being ignored by CR0/1
> > > due to their buddy not being initialized, leading to slower CR2
> > > allocations. With this patch CR0/1 will be able to discover these
> > > groups as well, thus improving performance.
> >
> > The patch looks good. I just somewhat wonder - this change may result in
> > uninitialized groups being initialized and used earlier (previously we'd
> > rather search in other already initialized groups) which may spread
> > allocations more. But I suppose that's fine and uninit groups are not
> > really a feature meant to limit fragmentation and as the filesystem ages
> > the differences should be minimal. So feel free to add:
>
> Another point I wanted to discuss wrt this patch series was why were the
> BLOCK_UNINIT groups not being prefetched earlier. One point I can think
> of is that this might lead to memory pressure when we have too many
> empty BGs in a very large (say terabytes) disk.

Originally the prefetch logic was simply something to optimize I/O ---
that is, normally all of the block bitmaps for a flex_bg are contiguous,
so why not read them all in a single I/O issued all at once, instead of
doing them as separate 4k reads?

Skipping block groups that hadn't yet been prefetched was added later,
to improve allocator performance on freshly mounted file systems where
the prefetch hadn't yet had a chance to pull in the block bitmaps.  The
problem was that if the block groups hadn't been prefetched yet, the
cr0 scan would fetch them itself, and on a storage device where blocks
with monotonically increasing LBA numbers aren't necessarily stored
adjacently on disk (for example, a dm-thin volume --- although if one
were to experiment on certain emulated block devices in certain
hyperscaler cloud environments, one might find a similar performance
profile), a cr0 scan could end up issuing a series of sixteen
sequential 4k I/Os, which can be substantially worse from a performance
standpoint than a single sequential 64k I/O.

When this change was made, the focus was on *initialized* bitmaps
taking a long time if they were read as individual sequential 4k I/Os;
the fix was to skip scanning them initially, since the hope was that
the prefetch would pull them in fairly quickly, and a few bad
allocations on a freshly mounted file system were an acceptable
tradeoff.

But prefetching BLOCK_UNINIT groups makes sense, and that should fix
the problem you've identified (at least for BLOCK_UNINIT groups; for
initialized block bitmaps, we'll still have less optimal allocation
patterns until we've managed to prefetch those block groups).

Cheers,

					- Ted
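
P.S.  To make the I/O-pattern point concrete, here is a minimal
user-space sketch (emphatically not the kernel code) comparing the two
access patterns above: sixteen separate 4k reads versus one contiguous
64k read.  The device path is just a placeholder.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK	4096		/* one 4k block bitmap			*/
#define NCHUNKS	16		/* bitmaps assumed contiguous on disk	*/

int main(int argc, char **argv)
{
	/* placeholder path; point it at a block device or a large file */
	const char *path = argc > 1 ? argv[1] : "/dev/sdX";
	static char buf[CHUNK * NCHUNKS];
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* what an unprefetched cr0 scan could devolve to: sixteen
	 * back-to-back 4k reads */
	for (int i = 0; i < NCHUNKS; i++)
		if (pread(fd, buf + i * CHUNK, CHUNK, (off_t)i * CHUNK) != CHUNK)
			perror("pread (4k)");

	/* what the prefetch does instead: a single contiguous 64k read */
	if (pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf))
		perror("pread (64k)");

	close(fd);
	return 0;
}

Timing the two halves separately (after dropping the page cache) on a
device where adjacent LBAs aren't physically adjacent should show the
gap described above.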
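
P.P.S.  And a rough, self-contained model of the prefetch skip decision
being discussed --- the struct and flag names below are stand-ins for
illustration, not the real ext4 definitions:

#include <stdbool.h>
#include <stdio.h>

#define BG_BLOCK_UNINIT	0x1	/* stand-in for EXT4_BG_BLOCK_UNINIT */

struct group_desc {		/* stand-in, not struct ext4_group_desc */
	unsigned int	bg_flags;
	bool		buddy_initialized;
};

/* old behavior: never prefetch BLOCK_UNINIT groups, so their buddies
 * stay uninitialized and the cr0/cr1 scans pass right over them */
static bool should_prefetch_old(const struct group_desc *gdp)
{
	return !gdp->buddy_initialized &&
	       !(gdp->bg_flags & BG_BLOCK_UNINIT);
}

/* proposed behavior: prefetch them too; this needs no disk I/O, since
 * the bitmap of a BLOCK_UNINIT group can be constructed in memory */
static bool should_prefetch_new(const struct group_desc *gdp)
{
	return !gdp->buddy_initialized;
}

int main(void)
{
	struct group_desc uninit = { .bg_flags = BG_BLOCK_UNINIT };

	printf("uninit group: old=%d new=%d\n",
	       should_prefetch_old(&uninit),
	       should_prefetch_new(&uninit));
	return 0;
}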