Hello Jan,
Thanks for looking into this.
On 4/9/20 7:07 PM, Jan Kara wrote:
> Hello Ritesh!
> On Wed 08-04-20 22:24:10, Ritesh Harjani wrote:
>> @@ -3908,16 +3919,13 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
>> mb_debug(1, "discard preallocation for group %u\n", group);
>> - if (list_empty(&grp->bb_prealloc_list))
>> - return 0;
>> -
> OK, so ext4_mb_discard_preallocations() is now going to lock every group
> when we try to discard preallocations. That's likely going to increase lock
> contention on the group locks if we are running out of free blocks when
> there are multiple processes trying to allocate blocks. I guess we don't
> care about the performance of this case too deeply but I'm not sure if the
> cost won't be too big - probably we should check how much the CPU usage
> with multiple allocating processes trying to find free blocks grows...
Sure, let me check the CPU usage in my test case with this patch.
But either way, unless we take the lock we cannot confirm
the number of free blocks available in the filesystem, right?
This will mostly happen only when there are a lot of threads and, due to
all of their preallocations, the filesystem is running low on space and
hence trying to discard all the preallocations. So when the FS is running
low on space, isn't this CPU usage justifiable (in an attempt to make sure
we don't fail with ENOSPC)?
Or maybe not, since the contention here is on a spinlock?
Or are you suggesting we should use some other method for discarding
all of the group's PAs, so that other threads could sleep while the
discard is happening? Something like a discard work item which frees up
all of the group's PAs. But then we need a way to determine whether the
needed number of blocks were freed, so that we can wake up and retry the
allocation. (Darrick did mention something along these lines related to
work/workqueues, but we couldn't discuss it much at the time.)
>> bitmap_bh = ext4_read_block_bitmap(sb, group);
>> if (IS_ERR(bitmap_bh)) {
>> err = PTR_ERR(bitmap_bh);
>> ext4_set_errno(sb, -err);
>> ext4_error(sb, "Error %d reading block bitmap for %u",
>> err, group);
>> - return 0;
>> + goto out_dbg;
>> }
>> err = ext4_mb_load_buddy(sb, group, &e4b);
>> @@ -3925,7 +3933,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
>> ext4_warning(sb, "Error %d loading buddy information for %u",
>> err, group);
>> put_bh(bitmap_bh);
>> - return 0;
>> + goto out_dbg;
>> }
>> if (needed == 0)
>> @@ -3967,9 +3975,15 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
>> goto repeat;
>> }
>> - /* found anything to free? */
>> + /*
>> + * If this list is empty, then return the grp->bb_free. As someone
>> + * else may have freed the PAs and updated grp->bb_free.
>> + */
>> if (list_empty(&list)) {
>> BUG_ON(free != 0);
>> + mb_debug(1, "Someone may have freed PA for this group %u, grp->bb_free %d\n",
>> + group, grp->bb_free);
>> + free = grp->bb_free;
>> goto out;
>> }
> OK, but this still doesn't reliably fix the problem, does it? Because
> bb_free can still be zero and another process just has some extents to free
> in its local 'list' (e.g. because it has decided it doesn't have enough
> extents, some were busy and it decided to cond_resched()), so bb_free will
> increase from 0 only once these extents are freed.
This patch should reliably fix it, I think.
So even if, say, process P1 didn't free all extents because some of the
PAs were busy and it decided to cond_resched(), that still means that
the list (bb_prealloc_list) is not empty, and whoever gets the
ext4_lock_group() next will either get the busy PAs or be blocked on
this lock_group() until all of the PAs are freed by other processes.
So if you see, we may never actually return 0 unless there are no PAs
and grp->bb_free is truly 0.
But your case does show that grp->bb_free may not be the upper bound
on free blocks for this group. It could be just one PA's free blocks,
while other PAs are still on some other process's local list (due to
cond_resched()).
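That window can be seen in a small sequential model (all names hypothetical, nothing here is the actual mballoc code): extents sitting on a discarder's local list are counted in neither the group's prealloc list nor bb_free until they are actually freed, so another discarder running in between sees an empty list AND bb_free == 0.

```c
#include <assert.h>
#include <stddef.h>

struct pa { struct pa *next; int free_blocks; };

/* Toy block group: just the bb_prealloc_list head and bb_free. */
struct group { struct pa *prealloc_list; int bb_free; };

/* Step 1 of the discard path: detach PAs from the group list onto the
 * caller's local list. No blocks are freed yet, so bb_free is
 * untouched even though the group list is now empty. */
static struct pa *detach_to_local_list(struct group *grp)
{
	struct pa *local = grp->prealloc_list;

	grp->prealloc_list = NULL;
	return local;
}

/* Step 2: walk the local list and release the blocks; only now does
 * bb_free grow. Between steps 1 and 2 (e.g. across a cond_resched())
 * another discarder finds nothing to free and reads bb_free == 0. */
static void free_local_list(struct group *grp, struct pa *local)
{
	for (; local; local = local->next)
		grp->bb_free += local->free_blocks;
}
```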
> Honestly, I don't understand why ext4_mb_discard_group_preallocations()
> bothers with the local 'list'. Why doesn't it simply free the preallocation
Let's see if someone else knows about this. I am not really sure
why it was done this way.
> right away? And that would also basically fix your problem (well, it would
> still theoretically exist because there's still freeing of one extent
> potentially pending but I'm not sure if that will still be a practical
> issue).
I guess this can still be a problem. Let's say process P1 just checks
that the list is not empty, and in parallel process P2 deletes the last
entry. Then when P1 iterates over the list, it will find it empty and
return 0, which may result in an -ENOSPC failure - unless we again take
the group lock to check whether the list is really empty and return
grp->bb_free if it is.
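That recheck could look roughly like this (a userspace sketch with hypothetical names and a pthread mutex standing in for ext4_lock_group(); not the actual mballoc code): the emptiness of the list is only decided while holding the group lock, and the empty case reports grp->bb_free rather than 0.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct pa { struct pa *next; int free_blocks; };

/* Hypothetical block group: prealloc list and bb_free, both protected
 * by the group lock. */
struct group {
	pthread_mutex_t lock;		/* stands in for ext4_lock_group() */
	struct pa *prealloc_list;	/* NULL == empty bb_prealloc_list */
	int bb_free;
};

/* Discard sketch: an unlocked emptiness check could race with another
 * process deleting the last PA, so the empty case is decided under the
 * group lock and returns grp->bb_free instead of 0. */
static int discard_group_preallocations(struct group *grp)
{
	int freed = 0;

	pthread_mutex_lock(&grp->lock);
	if (grp->prealloc_list == NULL) {
		/* Someone else freed the PAs; their blocks are already
		 * accounted in bb_free, so report that, not 0. */
		freed = grp->bb_free;
		pthread_mutex_unlock(&grp->lock);
		return freed;
	}
	while (grp->prealloc_list) {
		struct pa *pa = grp->prealloc_list;

		grp->prealloc_list = pa->next;
		grp->bb_free += pa->free_blocks;
		freed += pa->free_blocks;
	}
	pthread_mutex_unlock(&grp->lock);
	return freed;
}
```

If the empty path returned 0 instead, the caller would translate a concurrent free by another process into a spurious ENOSPC, which is exactly the failure being discussed.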
-ritesh