Re: [PATCH v6 10/12] Handle async PF in non preemptable context

On Wed, Oct 06, 2010 at 12:41:32PM +0200, Gleb Natapov wrote:
> On Tue, Oct 05, 2010 at 04:51:50PM -0300, Marcelo Tosatti wrote:
> > On Mon, Oct 04, 2010 at 05:56:32PM +0200, Gleb Natapov wrote:
> > > If an async page fault is received by the idle task, or when preempt_count
> > > is not zero, the guest cannot reschedule, so do sti; hlt and wait for the
> > > page to be ready. The vcpu can still process interrupts while it waits
> > > for the page to be ready.
> > > 
> > > Acked-by: Rik van Riel <riel@xxxxxxxxxx>
> > > Signed-off-by: Gleb Natapov <gleb@xxxxxxxxxx>
> > > ---
> > >  arch/x86/kernel/kvm.c |   40 ++++++++++++++++++++++++++++++++++------
> > >  1 files changed, 34 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > > index 36fb3e4..f73946f 100644
> > > --- a/arch/x86/kernel/kvm.c
> > > +++ b/arch/x86/kernel/kvm.c
> > > @@ -37,6 +37,7 @@
> > >  #include <asm/cpu.h>
> > >  #include <asm/traps.h>
> > >  #include <asm/desc.h>
> > > +#include <asm/tlbflush.h>
> > >  
> > >  #define MMU_QUEUE_SIZE 1024
> > >  
> > > @@ -78,6 +79,8 @@ struct kvm_task_sleep_node {
> > >  	wait_queue_head_t wq;
> > >  	u32 token;
> > >  	int cpu;
> > > +	bool halted;
> > > +	struct mm_struct *mm;
> > >  };
> > >  
> > >  static struct kvm_task_sleep_head {
> > > @@ -106,6 +109,11 @@ void kvm_async_pf_task_wait(u32 token)
> > >  	struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
> > >  	struct kvm_task_sleep_node n, *e;
> > >  	DEFINE_WAIT(wait);
> > > +	int cpu, idle;
> > > +
> > > +	cpu = get_cpu();
> > > +	idle = idle_cpu(cpu);
> > > +	put_cpu();
> > >  
> > >  	spin_lock(&b->lock);
> > >  	e = _find_apf_task(b, token);
> > > @@ -119,19 +127,33 @@ void kvm_async_pf_task_wait(u32 token)
> > >  
> > >  	n.token = token;
> > >  	n.cpu = smp_processor_id();
> > > +	n.mm = current->active_mm;
> > > +	n.halted = idle || preempt_count() > 1;
> > > +	atomic_inc(&n.mm->mm_count);
> > 
> > Can't see why this reference is needed.
> I thought that if a kernel thread faults on behalf of some process, the
> mm can go away while the kernel thread is sleeping. But it looks like a
> kernel thread increases the reference to the mm it runs with by itself,
> so maybe this is redundant (but not harmful).
> 
Actually it is not redundant. A kernel thread will release its reference
to active_mm on reschedule.

--
			Gleb.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .