Re: [PATCH] kvm/mmu: fixed coding style issues

Please use scripts/get_maintainer.pl, KVM x86 has its own maintainers.

On Tue, Aug 15, 2023, Mohammad Natiq Khan wrote:
> Initializing global variables to 0 or false is unnecessary and should
> be avoided. The issue is reported by the checkpatch script as:
> ERROR: do not initialise globals to 0 (or false).
> Along with some other warnings like:
> WARNING: Prefer 'unsigned int' to bare use of 'unsigned'

Sorry, but no.

First and foremost, don't pack a large pile of unrelated changes into one large
patch, as such a patch is annoyingly difficult to review and apply, e.g. this will
conflict with other in-flight changes.

Second, generally speaking, the value added by cleanups like this isn't worth
the churn to the code, e.g. it pollutes git blame.

Third, checkpatch is not the ultimate authority, e.g. IMO there's value in
explicitly initializing nx_huge_pages_recovery_ratio to zero because it shows
that it's *intentionally* zero for real-time kernels.

I'm all for opportunistically cleaning up existing messes when touching adjacent
code, or fixing specific issues if they're causing actual problems, e.g. actively
confusing readers.  But doing a wholesale cleanup based on what checkpatch wants
isn't going to happen.

> Signed-off-by: Mohammad Natiq Khan <natiqk91@xxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 105 +++++++++++++++++++++--------------------
>  1 file changed, 53 insertions(+), 52 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index ec169f5c7dce..8d6578782652 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -64,7 +64,7 @@ int __read_mostly nx_huge_pages = -1;
>  static uint __read_mostly nx_huge_pages_recovery_period_ms;
>  #ifdef CONFIG_PREEMPT_RT
>  /* Recovery can cause latency spikes, disable it for PREEMPT_RT.  */
> -static uint __read_mostly nx_huge_pages_recovery_ratio = 0;
> +static uint __read_mostly nx_huge_pages_recovery_ratio;
>  #else
>  static uint __read_mostly nx_huge_pages_recovery_ratio = 60;
>  #endif
> @@ -102,7 +102,7 @@ module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
>   * 2. while doing 1. it walks guest-physical to host-physical
>   * If the hardware supports that we don't need to do shadow paging.
>   */
> -bool tdp_enabled = false;
> +bool tdp_enabled;
>  
>  static bool __ro_after_init tdp_mmu_allowed;
>  
> @@ -116,7 +116,7 @@ static int tdp_root_level __read_mostly;
>  static int max_tdp_level __read_mostly;
>  
>  #ifdef MMU_DEBUG
> -bool dbg = 0;
> +bool dbg;
>  module_param(dbg, bool, 0644);
>  #endif
>  
> @@ -161,7 +161,7 @@ struct kvm_shadow_walk_iterator {
>  	hpa_t shadow_addr;
>  	u64 *sptep;
>  	int level;
> -	unsigned index;
> +	unsigned int index;
>  };
