On Wed, Jun 7, 2017 at 2:01 PM, Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> wrote:
> Hyper-V hosts may support more than 64 vCPUs, we need to use
> HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX/LIST_EX hypercalls in this
> case.

> +/* HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressListEx hypercalls */
> +struct hv_flush_pcpu_ex {
> +	u64 address_space;
> +	u64 flags;
> +	struct {
> +		u64 format;
> +		u64 valid_bank_mask;
> +		u64 bank_contents[];
> +	} hv_vp_set;
> +	u64 gva_list[];
> +};

> +static struct hv_flush_pcpu_ex __percpu *pcpu_flush_ex;

> -	flush->address_space = virt_to_phys(mm->pgd);
> +	flush->address_space = (u64)virt_to_phys(mm->pgd);

I think this cast is redundant: the phys_addr_t returned by virt_to_phys()
always fits in u64 without an explicit cast, as you already rely on below.

> +static void hyperv_flush_tlb_others_ex(const struct cpumask *cpus,
> +				       struct mm_struct *mm,
> +				       unsigned long start,
> +				       unsigned long end)
> +{
> +	int nr_bank = 0, max_gvas, gva_n;
> +	struct hv_flush_pcpu_ex *flush;
> +	u64 status = U64_MAX;

In one of the previous patches you used (u64)ULLONG_MAX. I recommend using
_there_ the same form as here, i.e. = U64_MAX;

> +	if (mm) {
> +		flush->address_space = virt_to_phys(mm->pgd);

No explicit cast, which is okay here.

> +		flush->flags = 0;
> +	} else {
> +		flush->address_space = 0;
> +		flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> +	}

> +	/*
> +	 * We can flush not more than max_gvas with one hypercall. Flush the
> +	 * whole address space if we were asked to do more.
> +	 */
> +	max_gvas = (PAGE_SIZE - sizeof(*flush) -
> +	nr_bank * sizeof(flush->hv_vp_set.bank_contents[0])) /
> +	sizeof(flush->gva_list[0]);

Is it possible to re-indent this like

	max_gvas = (PAGE_SIZE - sizeof(*flush) -
		    nr_bank * sizeof(flush->hv_vp_set.bank_contents[0])) /
		   sizeof(flush->gva_list[0]);

for easier understanding of the calculation?
-- 
With Best Regards,
Andy Shevchenko

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel