On 22/09/08 11:01AM, Jan Kara wrote:
> On Thu 08-09-22 00:11:10, Ritesh Harjani (IBM) wrote:
> > On 22/09/06 05:29PM, Jan Kara wrote:
> > > Using rbtree for sorting groups by average fragment size is relatively
> > > expensive (needs rbtree update on every block freeing or allocation) and
> > > leads to wide spreading of allocations because selection of block group
> > > is very sensitive both to changes in free space and amount of blocks
> > > allocated. Furthermore selecting group with the best matching average
> > > fragment size is not necessary anyway, even more so because the
> > > variability of fragment sizes within a group is likely large so average
> > > is not telling much. We just need a group with large enough average
> > > fragment size so that we have high probability of finding large enough
> > > free extent and we don't want average fragment size to be too big so
> > > that we are likely to find free extent only somewhat larger than what we
> > > need.
> > >
> > > So instead of maintaining rbtree of groups sorted by fragment size keep
> > > bins (lists) of groups where average fragment size is in the interval
> > > [2^i, 2^(i+1)). This structure requires fewer updates on block allocation
> > > / freeing, generally avoids chaotic spreading of allocations into block
> > > groups, and still is able to quickly (even faster than the rbtree)
> > > provide a block group which is likely to have a suitably sized free
> > > space extent.
> >
> > This makes sense because we anyway maintain the buddy bitmap for
> > MB_NUM_ORDERS orders. Hence the data structure maintaining the different
> > lists of groups, keyed by their average fragment size, can be bounded
> > within MB_NUM_ORDERS lists. This also gives amortized O(1) search time
> > for finding the right group in the CR1 search.
> > >
> > > This patch reduces the number of block groups used when untarring an
> > > archive with medium-sized files (size somewhat above 64k, which is the
> > > default mballoc limit for avoiding locality group preallocation) to
> > > about half and thus improves write speeds for eMMC flash significantly.
> >
> > Indeed a nice change. More in line with how we maintain the
> > sbi->s_mb_largest_free_orders lists.
>
> I didn't really find more comments than the one below?

No, I meant that the data structure is more in line with the
sbi->s_mb_largest_free_orders lists :) I had no other comments.

> > I think, as you already noted, there are a few minor checkpatch errors,
> > other than that one small query below.
>
> Yep, some checkpatch errors + procfs file handling bugs + one bad unlock in
> an error recovery path. All fixed up locally :)

Sure.

> > > -/*
> > > - * Reinsert grpinfo into the avg_fragment_size tree with new average
> > > - * fragment size.
> > > - */
> > > +/* Move group to appropriate avg_fragment_size list */
> > >  static void
> > >  mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
> > >  {
> > >  	struct ext4_sb_info *sbi = EXT4_SB(sb);
> > > +	int new_order;
> > >
> > >  	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0)
> > >  		return;
> > >
> > > -	write_lock(&sbi->s_mb_rb_lock);
> > > -	if (!RB_EMPTY_NODE(&grp->bb_avg_fragment_size_rb)) {
> > > -		rb_erase(&grp->bb_avg_fragment_size_rb,
> > > -				&sbi->s_mb_avg_fragment_size_root);
> > > -		RB_CLEAR_NODE(&grp->bb_avg_fragment_size_rb);
> > > -	}
> > > +	new_order = mb_avg_fragment_size_order(sb,
> > > +					grp->bb_free / grp->bb_fragments);
> >
> > The previous rbtree version was always checking grp->bb_fragments for 0.
> > Can grp->bb_fragments be 0 here?
>
> Since grp->bb_free is greater than zero, there should be at least one
> fragment...

aah yes, right.

-ritesh
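
As an aside, the [2^i, 2^(i+1)) bucketing discussed above can be demonstrated
outside the kernel. The snippet below is only a userspace sketch of the idea,
not the kernel code: NUM_ORDERS and avg_fragment_size_order() are illustrative
stand-ins for the kernel's MB_NUM_ORDERS and mb_avg_fragment_size_order(), and
the sample values are arbitrary.

/*
 * Userspace sketch of the bucketing described above -- NOT the kernel
 * implementation. avg_fragment_size_order() returns the list index i
 * such that the average fragment size (in blocks) falls in the
 * interval [2^i, 2^(i+1)), clamped to NUM_ORDERS lists.
 */
#include <stdio.h>

#define NUM_ORDERS 14	/* placeholder for the kernel's MB_NUM_ORDERS */

static int avg_fragment_size_order(unsigned int avg)
{
	int order = 0;

	if (avg == 0)
		return -1;	/* no free space, group is not on any list */
	while (avg > 1) {	/* order = floor(log2(avg)) */
		avg >>= 1;
		order++;
	}
	if (order >= NUM_ORDERS)
		order = NUM_ORDERS - 1;
	return order;
}

int main(void)
{
	unsigned int samples[] = { 1, 3, 64, 65, 4096, 1u << 20 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("avg fragment size %u blocks -> list %d\n",
		       samples[i], avg_fragment_size_order(samples[i]));
	return 0;
}

With a bounded number of such lists, moving a group after an allocation or
free is just a cheap list move under the corresponding lock rather than an
rbtree erase/insert, which is where the reduced update cost described in the
commit message comes from.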