Re: [PATCH] quota: fix race condition between dqput() and dquot_mark_dquot_dirty()

Hello!

Sorry for the late reply, just had a Dragon Boat holiday.

On 2023/6/22 22:56, Jan Kara wrote:
> Hello!
>
> On Mon 19-06-23 14:44:03, Baokun Li wrote:
>> On 2023/6/16 23:28, Jan Kara wrote:
>>> Now calling synchronize_srcu() directly from dquot_transfer() is too
>>> expensive (and mostly unnecessary) so what I would rather suggest is to
>>> create another dquot list (use the dq_free list_head inside struct dquot
>>> for it) and add dquots whose last reference should be dropped there. We'd
>>> then queue a work item which would call synchronize_srcu() and after that
>>> perform the final cleanup of all the dquots on the list.
>>>
>>> Now this also needs some modifications to dqget() and to the quotaoff code
>>> to handle various races with the new dqput() code, so if you feel it is
>>> too complex for your taste, I can implement this myself.
>>>
>>> 								Honza
>> I see what you mean; what we are doing here is very similar to
>> drop_dquot_ref(), and if we have to modify it this way, I am happy to
>> implement it.
>>
>> But as you said, calling synchronize_srcu() is too expensive and it blocks
>> almost all processes that mark dquots dirty, so at the moment we only call
>> it in performance-insensitive paths such as dquot_disable(). How would we
>> control how often synchronize_srcu() is called? Would it run once
>> releasing_dquots holds more than a certain number of dquots, or at regular
>> intervals? It would also introduce various new races. Is it worthwhile to
>> do all this for a corner case like this one?
> So the way this is handled (e.g. in the fsnotify subsystem) is that we just
> queue a work item when we drop the last reference to the protected
> structure. The scheduling latency before the work item gets executed is
> enough to batch synchronize_srcu() calls, and once synchronize_srcu()
> finishes, we add all items from the "staging list" to the free_dquots list.

Cool, thanks a lot for clearing up the confusion!

I will implement it in the next version.
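
Just to confirm I understood the idea correctly, I imagine something roughly
like the sketch below. releasing_dquots, quota_release_work and
quota_release_workfn are only placeholder names here, and the final-cleanup
steps of dqput() (writeback, dquot_release(), returning the dquot to
free_dquots) are elided:

/*
 * Rough sketch only: staging list and work item names are placeholders.
 */
static LIST_HEAD(releasing_dquots);	/* staged under dq_list_lock */

static void quota_release_workfn(struct work_struct *work);
static DECLARE_DELAYED_WORK(quota_release_work, quota_release_workfn);

static void quota_release_workfn(struct work_struct *work)
{
	struct dquot *dquot;
	LIST_HEAD(rls_head);

	/* Grab the whole batch staged so far. */
	spin_lock(&dq_list_lock);
	list_replace_init(&releasing_dquots, &rls_head);
	spin_unlock(&dq_list_lock);

	/* One grace period covers every dquot in the batch. */
	synchronize_srcu(&dquot_srcu);

	while (!list_empty(&rls_head)) {
		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
		list_del_init(&dquot->dq_free);
		/*
		 * No dquot_mark_dquot_dirty() caller can still see this
		 * dquot, so do the final cleanup that the tail of dqput()
		 * does today and return it to free_dquots.
		 */
	}
}

/*
 * And in dqput(), with dq_list_lock already held once the last reference
 * is dropped, stage the dquot instead of cleaning it up inline:
 */
	list_add_tail(&dquot->dq_free, &releasing_dquots);
	queue_delayed_work(system_unbound_wq, &quota_release_work, 1);

The scheduling delay before quota_release_workfn() runs would then batch the
synchronize_srcu() calls, as you describe for fsnotify.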


>> I think we can simply focus on the race between the DQ_ACTIVE_B flag and
>> the DQ_MOD_B flag, which is the core problem, because the same dquot
>> should not have both flags. These two flags are protected by dq_list_lock
>> and dquot->dq_lock respectively, so it makes sense to add a
>> wait_on_dquot() to ensure the accuracy of DQ_ACTIVE_B.
> But the fundamental problem is not only the race with the DQ_MOD_B setting.
> The dquot structure can be completely freed by the time
> dquot_claim_space_nodirty() calls dquot_mark_dquot_dirty() on it. That's
> why I think making __dquot_transfer() obey dquot_srcu rules is the right
> solution.
>
> 								Honza
Yes, I now also think that making __dquot_transfer() obey dquot_srcu rules is
the better solution. But with inode->i_lock held, why could the dquot
structure be completely freed?
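
For reference, the read-side rule as I understand it, loosely following what
__dquot_alloc_space() does today (a simplified sketch, with the actual
accounting and error handling omitted):

	struct dquot **dquots;
	int index;

	/*
	 * Pin the dquots: dqput() must not free them until we leave the
	 * SRCU read section.
	 */
	index = srcu_read_lock(&dquot_srcu);

	spin_lock(&inode->i_lock);
	dquots = i_dquot(inode);
	/* ... adjust dquots[cnt]->dq_dqb under dq_dqb_lock ... */
	spin_unlock(&inode->i_lock);

	/* Still inside the SRCU section, so the structures stay valid. */
	mark_all_dquot_dirty(dquots);
	srcu_read_unlock(&dquot_srcu, index);

If __dquot_transfer() followed this same pattern, then, as far as I can tell,
a dqput() deferred behind synchronize_srcu() could not free the dquots while
they are still being marked dirty.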

Thanks!
--
With Best Regards,
Baokun Li