Re: [PATCH RFC] ext4: fix potential race between online resizing and write operations

On Wed, Feb 26, 2020 at 04:53:47PM +0100, Uladzislau Rezki wrote:
> On Wed, Feb 26, 2020 at 07:06:56AM -0800, Paul E. McKenney wrote:
> > On Wed, Feb 26, 2020 at 02:04:40PM +0100, Uladzislau Rezki wrote:
> > > On Tue, Feb 25, 2020 at 02:47:45PM -0800, Paul E. McKenney wrote:
> > > > On Tue, Feb 25, 2020 at 07:54:00PM +0100, Uladzislau Rezki wrote:
> > > > > > > > > I was thinking of a 2-fold approach (just thinking out loud..):
> > > > > > > > > 
> > > > > > > > > If kfree_call_rcu() is called in atomic context or in any rcu reader, then
> > > > > > > > > use GFP_ATOMIC to grow an rcu_head wrapper on the atomic memory pool and
> > > > > > > > > queue that.
> > > > > > > > > 
> > > > > > > I am not sure if that is acceptable, I mean what do we do when GFP_ATOMIC
> > > > > > > fails in atomic context? Or we can just consider it as out of
> > > > > > > memory, and another variant is to say that a headless object can be freed
> > > > > > > from preemptible context only.
> > > > > > 
> > > > > > Yes, that makes sense, and we can always put a disclaimer in the API's comments
> > > > > > saying that if this object is expected to be freed a lot, then don't use the
> > > > > > headless API, to be extra safe.
> > > > > > 
> > > > > Agree.
> > > > > 
> > > > > > BTW, the GFP_ATOMIC documentation says that if GFP_ATOMIC reserves are depleted,
> > > > > > the kernel can even panic sometimes, so if a GFP_ATOMIC allocation fails, then
> > > > > > there seem to be bigger problems in the system anyway. I would say let us
> > > > > > write a patch to allocate there and see what the -mm guys think.
> > > > > > 
> > > > > OK. It might be that they can offer something if they do not like our
> > > > > approach. I will try to compose something, send the patch, and see.
> > > > > The tree.c implementation is almost done, whereas the tiny one is on hold.
> > > > > 
> > > > > I think we should support batching as well as the bulk interface there.
> > > > > Another way is to work around the head-less object: just attach the head
> > > > > dynamically using kmalloc() and then call_rcu(), but then it will not be
> > > > > fair headless support :)
> > > > > 
> > > > > What is your view?
> > > > > 
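
Just so we are picturing the same thing: for the dynamic-head workaround you
mention above, I imagine something roughly like the below (completely
untested; the names headless_wrapper and queue_headless_object() are just
made up by me):

<snip>
/* Wrap a headless pointer in a dynamically allocated rcu_head. */
struct headless_wrapper {
	struct rcu_head rh;
	void *ptr;
};

static void headless_wrapper_free(struct rcu_head *head)
{
	struct headless_wrapper *w =
		container_of(head, struct headless_wrapper, rh);

	kvfree(w->ptr);
	kfree(w);
}

/* Returns false if the wrapper could not be allocated. */
static bool queue_headless_object(void *ptr, gfp_t gfp)
{
	struct headless_wrapper *w = kmalloc(sizeof(*w), gfp);

	if (!w)
		return false;

	w->ptr = ptr;
	call_rcu(&w->rh, headless_wrapper_free);
	return true;
}
<snip>

Then the only question left is which GFP flags to pass, which is what the
rest of this discussion is about.
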
> > > > > > > > > Otherwise, grow an rcu_head on the stack of kfree_call_rcu() and call
> > > > > > > > > synchronize_rcu() inline with it.
> > > > > > > > > 
> > > > > > > > >
> > > > > > > What do you mean here, Joel? "grow an rcu_head on the stack"?
> > > > > > 
> > > > > > By "grow on the stack", I mean use the compiler-allocated rcu_head on the
> > > > > > kfree_rcu() caller's stack.
> > > > > > 
> > > > > > I meant here to say, if we are not in atomic context, then we use regular
> > > > > > GFP_KERNEL allocation, and if that fails, then we just use the stack's
> > > > > > rcu_head and call synchronize_rcu() or even synchronize_rcu_expedited since
> > > > > > the allocation failure would mean the need for RCU to free some memory is
> > > > > > probably great.
> > > > > > 
> > > > > Ah, I got it. I thought you meant something like recursion and then
> > > > > unwinding the stack back somehow :)
> > > > > 
> > > > > > > > > Use preemptible() and task_struct's rcu_read_lock_nesting to differentiate
> > > > > > > > > between the 2 cases.
> > > > > > > > > 
> > > > > > > If the current context is preemptible then we can inline synchronize_rcu()
> > > > > > > together with freeing to handle such a corner case, I mean when we run
> > > > > > > out of memory.
> > > > > > 
> > > > > > Ah yes, exactly what I mean.
> > > > > > 
> > > > > OK.
> > > > > 
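
Right, and for that corner case I picture something as simple as the below
(untested sketch; free_headless_after_gp() is a made-up name, and 'ptr' is the
headless object we failed to queue):

<snip>
/* Last-resort path: we could not get memory for a dynamic rcu_head. */
static void free_headless_after_gp(void *ptr)
{
	/* Only valid when the caller is allowed to block. */
	WARN_ON_ONCE(!preemptible() || rcu_preempt_depth());

	synchronize_rcu();	/* wait for a full grace period inline */
	kvfree(ptr);		/* then free the headless object directly */
}
<snip>
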
> > > > > > > As for "task_struct's rcu_read_lock_nesting": will it be enough to just
> > > > > > > have a look at the preempt_count of the current process? If we have, for
> > > > > > > example, nested rcu_read_locks:
> > > > > > > 
> > > > > > > <snip>
> > > > > > > rcu_read_lock()
> > > > > > >     rcu_read_lock()
> > > > > > >         rcu_read_lock()
> > > > > > > <snip>
> > > > > > > 
> > > > > > > the counter would be 3.
> > > > > > 
> > > > > > No, because preempt_count is not incremented during rcu_read_lock(). RCU
> > > > > > reader sections can be preempted, they just cannot go to sleep in a reader
> > > > > > section (unless the kernel is RT).
> > > > > > 
> > > > > So in a CONFIG_PREEMPT kernel we can identify whether we are in atomic context
> > > > > or not by using rcu_preempt_depth() and in_atomic(). When it comes to
> > > > > !CONFIG_PREEMPT, we skip that check and consider the context atomic. Something like:
> > > > > 
> > > > > <snip>
> > > > > static bool is_current_in_atomic()
> > > > > {
> > > > > #ifdef CONFIG_PREEMPT_RCU
> > > > 
> > > > If possible: if (IS_ENABLED(CONFIG_PREEMPT_RCU))
> > > > 
> > > > Much nicer than #ifdef, and I -think- it should work in this case.
> > > > 
> > > OK. Thank you, Paul!
> > > 
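
Just to spell out how I read Paul's IS_ENABLED() suggestion on top of your
helper (untested sketch; for !CONFIG_PREEMPT_RCU we cannot see whether we are
inside an RCU reader, so it conservatively reports atomic):

<snip>
static bool is_current_in_atomic(void)
{
	/*
	 * rcu_preempt_depth() is defined for both configurations, so
	 * IS_ENABLED() works here and the compiler throws away the
	 * dead branch.
	 */
	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
		return rcu_preempt_depth() || in_atomic();

	/* Cannot tell on !CONFIG_PREEMPT_RCU, so assume atomic. */
	return true;
}
<snip>
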
> > > There is one point I would like to highlight: it is about making the caller
> > > responsible for the atomic-or-not decision instead. It is like how kmalloc()
> > > works: it does not really know the context it runs in, so it is up to the
> > > caller to inform it.
> > > 
> > > The same way:
> > > 
> > > kvfree_rcu(p, atomic = true/false);
> > > 
> > > in this case we could cover the !CONFIG_PREEMPT case also.
> > 
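
(Side note: I read the caller-hint idea as something shaped like the below;
purely hypothetical, reusing the made-up queue_headless_object() from my
sketch further up:)

<snip>
/*
 * The caller says whether it might be atomic, the same way kmalloc()
 * callers pick their own GFP flags.
 */
#define kvfree_rcu_hint(ptr, atomic) \
	queue_headless_object((ptr), (atomic) ? GFP_ATOMIC : GFP_KERNEL)
<snip>
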
> > Understood, but couldn't we instead use IS_ENABLED() to work out the
> > actual situation at runtime and relieve the caller of this burden?
> > Or am I missing a corner case?
> > 
> Yes, we can do it at run time, I mean detect the context type, atomic or not,
> but only for a CONFIG_PREEMPT kernel. In the case of a !CONFIG_PREEMPT
> configuration I do not see a straightforward way to detect it, for example
> when the caller holds a spinlock. Therefore for such a configuration we can
> just consider it as atomic, but in reality it might not be atomic.
> 
> We need it for the emergency/corner case and head-less objects, when we run
> out of memory. In that case we should attach the rcu_head dynamically and
> queue the freed object to be processed later on, after a GP.
> 
> If in atomic context, use the GFP_ATOMIC flag; if not, use GFP_KERNEL. It is
> better to allocate with the GFP_KERNEL flag (if possible) because it has far
> fewer restrictions than GFP_ATOMIC, i.e. GFP_KERNEL can sleep and wait until
> memory is reclaimed.
> 
> But that is a corner case, and I agree that it would be good to avoid such
> passing of extra info by the caller.
> 
> Anyway, I am just sharing some extra info :)

Hmm, I can't see at the moment how you can use GFP_KERNEL here for
!CONFIG_PREEMPT kernels, since that can sleep and you can't easily detect
whether you are in an RCU reader on !CONFIG_PREEMPT unless lockdep is turned
on (in which case you could have checked lockdep's map).

How about, for !PREEMPT, using GFP_NOWAIT first and then GFP_ATOMIC (if
GFP_NOWAIT fails)? And for PREEMPT, use GFP_KERNEL, then GFP_ATOMIC (if
GFP_KERNEL fails). Thoughts?
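
In code I am picturing roughly the below, tying together the made-up helpers
from my sketches above (all untested):

<snip>
static void kvfree_rcu_headless(void *ptr)
{
	/* is_current_in_atomic() already reports "atomic" on !PREEMPT. */
	bool can_sleep = !is_current_in_atomic();

	/* First try: GFP_KERNEL if we may sleep, GFP_NOWAIT otherwise. */
	if (queue_headless_object(ptr, can_sleep ? GFP_KERNEL : GFP_NOWAIT))
		return;

	/* Second try: dip into the GFP_ATOMIC reserves. */
	if (queue_headless_object(ptr, GFP_ATOMIC))
		return;

	/*
	 * Last resort: if we may sleep, wait for a grace period inline
	 * and free directly.  What to do here for !PREEMPT, where we
	 * cannot tell, is the open question.
	 */
	if (can_sleep) {
		synchronize_rcu();
		kvfree(ptr);
	}
}
<snip>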

thanks,

 - Joel



