On 2023/6/29 22:33, Jan Kara wrote:
On Thu 29-06-23 19:47:08, Baokun Li wrote:
On 2023/6/29 18:59, Jan Kara wrote:
On Wed 28-06-23 21:21:53, Baokun Li wrote:
@@ -760,6 +771,8 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
	struct dquot *dquot;
	unsigned long freed = 0;
+	flush_delayed_work(&quota_release_work);
+
I would not flush the work here. Sure, it can make more dquots available
for reclaim but I think it is more important for the shrinker to not wait
on srcu period as shrinker can be called very frequently under memory
pressure.
I added the flush because I want to use remove_free_dquot() directly, and
without flushing here DQST_FREE_DQUOTS would no longer be accurate.
Since that's the case, I'll remove the flush here and add a check to
remove_free_dquot() to decide whether to decrement DQST_FREE_DQUOTS.
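Roughly what I have in mind for remove_free_dquot() (just a sketch of the
intent, not the literal patch; the exact condition may end up different):

static inline void remove_free_dquot(struct dquot *dquot)
{
	if (list_empty(&dquot->dq_free))
		return;
	list_del_init(&dquot->dq_free);
	/*
	 * Dquots parked on releasing_dquots still hold a reference, so only
	 * truly free dquots are accounted in DQST_FREE_DQUOTS.
	 */
	if (!atomic_read(&dquot->dq_count))
		dqstats_dec(DQST_FREE_DQUOTS);
}

That way the shrinker no longer needs to flush the work just to keep the
statistic honest.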
OK.
	spin_lock(&dq_list_lock);
	while (!list_empty(&free_dquots) && sc->nr_to_scan) {
		dquot = list_first_entry(&free_dquots, struct dquot, dq_free);
@@ -787,6 +800,60 @@ static struct shrinker dqcache_shrinker = {
.seeks = DEFAULT_SEEKS,
};
+/*
+ * Safely release dquot and put reference to dquot.
+ */
+static void quota_release_workfn(struct work_struct *work)
+{
+	struct dquot *dquot;
+	struct list_head rls_head;
+
+	spin_lock(&dq_list_lock);
+	/* Exchange the list head to avoid livelock. */
+	list_replace_init(&releasing_dquots, &rls_head);
+	spin_unlock(&dq_list_lock);
+
+restart:
+	synchronize_srcu(&dquot_srcu);
+	spin_lock(&dq_list_lock);
+	while (!list_empty(&rls_head)) {
I think the logic below needs a bit more work. Firstly, I think that
dqget() should be removing dquots from the releasing_dquots list - basically just
replace the:
	if (!atomic_read(&dquot->dq_count))
		remove_free_dquot(dquot);

with

	/* Dquot on releasing_dquots list? Drop ref kept by that list. */
	if (atomic_read(&dquot->dq_count) == 1 && !list_empty(&dquot->dq_free))
		atomic_dec(&dquot->dq_count);
	remove_free_dquot(dquot);
	atomic_inc(&dquot->dq_count);
That way we are sure that while we are holding dq_list_lock, all dquots on
rls_head list have dq_count == 1.
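For context, that snippet would sit in the dqget() cache-hit path roughly
like this (sketch only; the surrounding lines are paraphrased from the
current dqget() and may not match the tree exactly):

	} else {
		/* Dquot on releasing_dquots list? Drop ref kept by that list. */
		if (atomic_read(&dquot->dq_count) == 1 &&
		    !list_empty(&dquot->dq_free))
			atomic_dec(&dquot->dq_count);
		remove_free_dquot(dquot);
		atomic_inc(&dquot->dq_count);
		spin_unlock(&dq_list_lock);
		dqstats_inc(DQST_CACHE_HITS);
		dqstats_inc(DQST_LOOKUPS);
	}
	wait_on_dquot(dquot);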
I wrote it this way at first, but it turned out to be problematic, so I
ended up dropping the dq_count == 1 constraint for dquots on
releasing_dquots. With the change above we can end up with a bad dquot
directly, as in the following race (the right-hand column is a concurrent
dqget()):
quota_release_workfn
 spin_lock(&dq_list_lock)
 dquot = list_first_entry(&rls_head, struct dquot, dq_free)
 spin_unlock(&dq_list_lock)
 dquot->dq_sb->dq_op->release_dquot(dquot)
  release_dquot
                                  dqget
                                   atomic_dec(&dquot->dq_count)
                                   remove_free_dquot(dquot)
                                   atomic_inc(&dquot->dq_count)
                                   spin_unlock(&dq_list_lock)
                                   wait_on_dquot(dquot)
                                   if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
                                    // still active
   mutex_lock(&dquot->dq_lock)
   dquot_is_busy(dquot)
    atomic_read(&dquot->dq_count) > 1
   clear_bit(DQ_ACTIVE_B, &dquot->dq_flags)
   mutex_unlock(&dquot->dq_lock)
Removing the dquot from releasing_dquots and dropping the reference that
list held means the dquot_is_busy() check in dquot_release() no longer sees
the new user, so the release goes ahead, while wait_on_dquot(dquot) in
dqget() has no effect. This is also why I did not restart at dquot_active.
Adding dquots to releasing_dquots only in dqput() and removing them only in
quota_release_workfn() is a simple and effective way to keep things
consistent.
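So only the tail of dqput() ever queues dquots for release, along these
lines (an illustrative sketch, not the literal patch; the last reference is
handed over to the list, and the delay passed to queue_delayed_work() is
just a placeholder):

	/*
	 * Transfer our last reference to releasing_dquots; only
	 * quota_release_workfn() takes dquots off that list again, and it
	 * drops the reference only after synchronize_srcu().
	 */
	spin_lock(&dq_list_lock);
	list_add_tail(&dquot->dq_free, &releasing_dquots);
	spin_unlock(&dq_list_lock);
	queue_delayed_work(system_unbound_wq, &quota_release_work, 1);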
Indeed, that's a good point. Still, can't we simplify the loop like this:
	while (!list_empty(&rls_head)) {
		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
		/* Dquot got used again? */
		if (atomic_read(&dquot->dq_count) > 1) {
			atomic_dec(&dquot->dq_count);
			remove_free_dquot(dquot);
			continue;
		}
		if (dquot_dirty(dquot)) {
			keep what you had
		}
		if (dquot_active(dquot)) {
			spin_unlock(&dq_list_lock);
			dquot->dq_sb->dq_op->release_dquot(dquot);
			goto restart;
		}
		/* Dquot is inactive and clean, we can move it to free list */
		atomic_dec(&dquot->dq_count);
		remove_free_dquot(dquot);
		put_dquot_last(dquot);
	}
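For reference, the helpers assumed above - dquot_dirty() is the existing
DQ_MOD_B test, and dquot_active() I take to be the dquot-flag helper
testing DQ_ACTIVE_B from earlier in this series - would look roughly like:

static inline int dquot_dirty(struct dquot *dquot)
{
	return test_bit(DQ_MOD_B, &dquot->dq_flags);
}

static inline bool dquot_active(struct dquot *dquot)
{
	return test_bit(DQ_ACTIVE_B, &dquot->dq_flags);
}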
What do you think?
Honza
This looks great and the code is much cleaner. I'll send out the
next version later with your suggested changes!
Thank you so much for your patient review!
--
With Best Regards,
Baokun Li