RE: [PATCH] rt-tests: pi_stress: fix testing threads' smp affinity


 



Any comments? Thanks. (For reference, a minimal sketch of the intended CPU-affinity and scheduling behaviour is appended after the quoted patch below.)

Best Regards,
Jiafei.

> -----Original Message-----
> From: Jiafei Pan <jiafei.pan@xxxxxxx>
> Sent: Friday, March 26, 2021 5:44 PM
> To: williams@xxxxxxxxxx; jkacur@xxxxxxxxxx
> Cc: linux-rt-users@xxxxxxxxxxxxxxx; Jiafei Pan <jiafei.pan@xxxxxxx>; Jiafei
> Pan <jiafei.pan@xxxxxxx>
> Subject: [PATCH] rt-tests: pi_stress: fix testing threads' smp affinity
> 
> This patch includes the following modifications:
> 1. Make sure test threads and the admin thread do not run on the same
>    CPU core (unless the uniprocessor option is set or the platform has
>    only a single core), so that the admin thread is not starved.
> 2. Force SCHED_RR if more than one group runs on a CPU core, to avoid
>    test failures: threads in different groups use the same priority,
>    and under the default SCHED_FIFO policy this may deadlock the test
>    threads.
> 
> Signed-off-by: Jiafei Pan <Jiafei.Pan@xxxxxxx>
> ---
>  src/pi_tests/pi_stress.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/src/pi_tests/pi_stress.c b/src/pi_tests/pi_stress.c
> index 49f89b7..8795908 100644
> --- a/src/pi_tests/pi_stress.c
> +++ b/src/pi_tests/pi_stress.c
> @@ -237,6 +237,13 @@ int main(int argc, char **argv)
>  	/* process command line arguments */
>  	process_command_line(argc, argv);
> 
> +	if (ngroups > (num_processors - 1)) {
> +		printf("Warning: one core is reserved for the admin thread and the remaining cores run the test threads,\n");
> +		printf("\t so more than one group will run per core (groups > num_of_processors - 1).\n");
> +		printf("\t Forcing SCHED_RR; alternatively, use fewer than %ld groups.\n", num_processors);
> +		policy = SCHED_RR;
> +	}
> +
>  	/* set default sched attributes */
>  	setup_sched_config(policy);
> 
> @@ -285,9 +292,17 @@ int main(int argc, char **argv)
>  			break;
>  	for (i = 0; i < ngroups; i++) {
>  		groups[i].id = i;
> -		groups[i].cpu = core++;
> -		if (core >= num_processors)
> -			core = 0;
> +		if (num_processors == 1 || uniprocessor) {
> +			groups[i].cpu = 0;
> +		} else {
> +			groups[i].cpu = core;
> +			/* Find next non-admin Core */
> +			do {
> +				core++;
> +				if (core >= num_processors)
> +					core = 0;
> +			} while (CPU_ISSET(core, &admin_cpu_mask));
> +		}
>  		if (create_group(&groups[i]) != SUCCESS)
>  			return FAILURE;
>  	}
> @@ -1143,7 +1158,7 @@ int create_group(struct group_parameters *group)
>  	CPU_ZERO(&mask);
>  	CPU_SET(group->cpu, &mask);
> 
> -	pi_debug("group %d bound to cpu %ld\n", group->id, group->cpu);
> +	printf("group %d bound to cpu %ld\n", group->id, group->cpu);
> 
>  	/* start the low priority thread */
>  	pi_debug("creating low priority thread\n");
> --
> 2.17.1
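
For reference, here is a minimal, self-contained sketch of the behaviour the patch aims for: reserve one CPU for the admin thread, assign test groups only to the remaining CPUs, and fall back to SCHED_RR when the groups outnumber the non-admin CPUs. This is not the pi_stress code itself; the init_admin_mask()/pick_test_cpu() helpers, pinning the admin thread to CPU 0, and the hard-coded group count are illustrative assumptions only.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static cpu_set_t admin_cpu_mask;

/* Assumption: the admin thread is pinned to CPU 0, so reserve it here. */
static void init_admin_mask(void)
{
	CPU_ZERO(&admin_cpu_mask);
	CPU_SET(0, &admin_cpu_mask);
}

/* Return the CPU for the current group, then advance to the next CPU
 * that is not reserved for the admin thread (wrapping around). */
static long pick_test_cpu(long *core, long num_processors)
{
	long cpu = *core;

	do {
		*core = (*core + 1) % num_processors;
	} while (CPU_ISSET(*core, &admin_cpu_mask));

	return cpu;
}

int main(void)
{
	long num_processors = sysconf(_SC_NPROCESSORS_ONLN);
	long ngroups = 8;        /* illustrative value */
	long core = 1;           /* first non-admin CPU */
	int policy = SCHED_FIFO;
	long i;

	init_admin_mask();

	/* With one core reserved for the admin thread, more groups than
	 * non-admin cores means groups must share a core; SCHED_RR lets
	 * equal-priority threads time-slice instead of one FIFO thread
	 * monopolising the core. */
	if (num_processors > 1 && ngroups > num_processors - 1)
		policy = SCHED_RR;

	for (i = 0; i < ngroups; i++) {
		long cpu = (num_processors == 1) ? 0
				: pick_test_cpu(&core, num_processors);
		printf("group %ld -> cpu %ld, policy %s\n", i, cpu,
		       policy == SCHED_RR ? "SCHED_RR" : "SCHED_FIFO");
	}
	return 0;
}

On a 4-CPU machine with 8 groups this should cycle the groups over CPUs 1-3 under SCHED_RR, which matches the intent of the two hunks above.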




