Re: [PATCH 6/7] x86/hyper-v: use hypercall for remote TLB flush

Jork Loeser <Jork.Loeser@xxxxxxxxxxxxx> writes:

>> -----Original Message-----
>> From: Vitaly Kuznetsov [mailto:vkuznets@xxxxxxxxxx]
>> Sent: Friday, April 7, 2017 04:27
>> To: devel@xxxxxxxxxxxxxxxxxxxxxx; x86@xxxxxxxxxx
>> Cc: linux-kernel@xxxxxxxxxxxxxxx; KY Srinivasan <kys@xxxxxxxxxxxxx>;
>> Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>; Stephen Hemminger
>> <sthemmin@xxxxxxxxxxxxx>; Thomas Gleixner <tglx@xxxxxxxxxxxxx>; Ingo
>> Molnar <mingo@xxxxxxxxxx>; H. Peter Anvin <hpa@xxxxxxxxx>; Steven
>> Rostedt <rostedt@xxxxxxxxxxx>; Jork Loeser <Jork.Loeser@xxxxxxxxxxxxx>
>> Subject: [PATCH 6/7] x86/hyper-v: use hypercall for remote TLB flush
>
>> diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
>> new file mode 100644
>> index 0000000..fb487cb
>> --- /dev/null
>> +++ b/arch/x86/hyperv/mmu.c
>> @@ -0,0 +1,128 @@
>> +#include <linux/types.h>
>> +#include <linux/hyperv.h>
>> +#include <linux/slab.h>
>> +#include <asm/mshyperv.h>
>> +#include <asm/tlbflush.h>
>> +#include <asm/msr.h>
>> +#include <asm/fpu/api.h>
>> +
>> +/*
>> + * Arbitrary number; we need to pre-allocate per-cpu struct for doing TLB
>> + * flush hypercalls and we need to pick a size. '16' means we'll be able
>> + * to flush 16 * 4096 pages (256MB) with one hypercall.
>> + */
>> +#define HV_MMU_MAX_GVAS 16
>> +
>> +/* HvFlushVirtualAddressSpace*, HvFlushVirtualAddressList hypercalls */
>> +struct hv_flush_pcpu {
>> +	struct {
>> +		__u64 address_space;
>> +		__u64 flags;
>> +		__u64 processor_mask;
>> +		__u64 gva_list[HV_MMU_MAX_GVAS];
>> +	} flush;
>> +
>> +	spinlock_t lock;
>> +};
> Does this need an alignment declaration, so that the flush portion never crosses a page boundary when allocated with alloc_percpu()?
>

Thanks for pointing this out! I would slightly prefer to use
__alloc_percpu() and specify something like roundup_pow_of_two() of the
struct size as the alignment.
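
Something like this, perhaps (untested sketch; rounding the allocation
up to a power of two guarantees the 'flush' block never straddles a
page boundary as long as the struct fits within one page):

	/* align to the struct size rounded up to a power of two so
	 * the hypercall input block cannot cross a page boundary */
	pcpu_flush = __alloc_percpu(sizeof(struct hv_flush_pcpu),
			roundup_pow_of_two(sizeof(struct hv_flush_pcpu)));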

>> +
>> +static struct hv_flush_pcpu __percpu *pcpu_flush;
>> +
>> +static void hyperv_flush_tlb_others(const struct cpumask *cpus,
>> +				    struct mm_struct *mm, unsigned long start,
>> +				    unsigned long end)
>> +{
>> +	struct hv_flush_pcpu *flush;
>> +	unsigned long cur, flags;
>> +	u64 status = -1ULL;
>> +	int cpu, vcpu, gva_n;
>> +
>> +	if (!pcpu_flush || !hv_hypercall_pg)
>> +		goto do_native;
>> +
>> +	if (cpumask_empty(cpus))
>> +		return;
>> +
>> +	flush = this_cpu_ptr(pcpu_flush);
>> +	spin_lock_irqsave(&flush->lock, flags);
>
> What purpose does the spinlock on the CPU-local struct serve? Would a
> local_irq_save() do?

Now I'm not sure why I put it here in the first place :-) Yes, it would
probably do.
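
Something along these lines, perhaps (untested sketch; the struct is
strictly CPU-local, so keeping interrupts off while it is in use should
be enough to stop an interrupt handler on the same CPU from reusing it):

	local_irq_save(flags);
	/* take the per-cpu pointer only after interrupts are off so
	 * we cannot be moved away from the CPU owning the buffer */
	flush = this_cpu_ptr(pcpu_flush);
	/* ... fill the structure and issue the hypercall ... */
	local_irq_restore(flags);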

> Could this be called from NMI context, such as from the debugger?
>

NMI - I don't think so: the native function does
smp_call_function_many(), which already WARNs when it's called with just
interrupts disabled.
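
(For reference, the check in kernel/smp.c looks roughly like this:

	WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
		     && !oops_in_progress && !early_boot_irqs_disabled);

so calling the native path from NMI context would already be a bug.)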

> Could this be a long-running loop, e.g. due to a large start/end
> range? If so, consider disabling interrupts only in the inner loop /
> flush the entire space?

The decision to flush the entire address space should probably be made
elsewhere, as it is not implementation-specific (and I think it already
is made somewhere: I never see requests to flush more than 4096 pages in
my testing).

I can disable interrupts in the inner loop only, but then we'll have to
stash the flags and the calculated cpu mask in local variables. That is
not supposed to be expensive.
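
A rough sketch of what I mean (hypothetical; the cpu-to-VP-index
mapping helper name is assumed from the rest of the series):

	u64 vp_mask = 0;
	unsigned long cur = start, flags;
	int cpu;

	/* the VP mask depends only on 'cpus', so compute it once,
	 * outside of the interrupts-off region */
	for_each_cpu(cpu, cpus)
		vp_mask |= 1ULL << hv_cpu_number_to_vp_number(cpu);

	while (cur < end) {
		local_irq_save(flags);
		flush = this_cpu_ptr(pcpu_flush);
		flush->flush.processor_mask = vp_mask;
		/* fill up to HV_MMU_MAX_GVAS gva_list entries here,
		 * advancing 'cur', then issue the hypercall */
		local_irq_restore(flags);
	}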

-- 
  Vitaly
_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


