Re: Why vectorization didn't turn on by -O2

Hi,
the runs for the cheap vectorization cost model have finished; I am
running the very-cheap model too.

https://lnt.opensuse.org/db_default/v4/SPEC/latest_runs_report?younger_in_days=14&older_in_days=0&all_changes=on&min_percentage_change=0.02&revisions=e54acea9e5a821448af97c66e94a1e4c4f977d5d%2Ce87209a1269622017bf3d98bf71502dcb0f893aa%2C73474527aaa24d9236aca074c5494a07f40ce058&include_user_branches=on

This compares the default -O2 (base) against -O2 with loop
vectorization and with SLP vectorization enabled.  The machine names
contain "O2".
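
For reference, a sketch of the kind of kernel and flag combinations
involved; the file name and exact options here are my assumptions, not
taken from the LNT jobs:

  /* vect.c -- a minimal loop (assumed example) of the shape the loop
     vectorizer targets.  The three configurations compared above
     correspond roughly to:
       gcc -O2 -c vect.c                        (base)
       gcc -O2 -ftree-loop-vectorize -c vect.c  (loop vectorize)
       gcc -O2 -ftree-slp-vectorize -c vect.c   (slp vectorize)
     with the cost model selected by e.g. -fvect-cost-model=cheap
     or -fvect-cost-model=very-cheap.  */
  void
  saxpy (float *restrict y, const float *restrict x, float a, int n)
  {
    for (int i = 0; i < n; i++)
      y[i] = a * x[i] + y[i];
  }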

Overall scores are:

kaby.spec2006.O2_generic
Test			loop	slp
SPEC/SPEC2006/FP/total	8.16%	0.19%
SPEC/SPEC2006/total	4.96%	0.38%
SPEC/SPEC2006/INT/total	0.58%	0.65%

kaby.spec2006.O2_generic_lto
Test			loop	slp
SPEC/SPEC2006/FP/total	9.06%	-0.36%
SPEC/SPEC2006/total	5.32%	~
SPEC/SPEC2006/INT/total	0.24%	0.27%

kaby.spec2017.O2_generic
Test			loop	slp
SPEC/SPEC2017/INT/total	6.54%	-1.15%
SPEC/SPEC2017/total	5.66%	-0.17%
SPEC/SPEC2017/FP/total	5.00%	0.59%

kaby.spec2017.O2_generic_lto
Test			loop	slp
SPEC/SPEC2017/INT/total	6.62%	-0.12%
SPEC/SPEC2017/total	5.69%	-0.14%
SPEC/SPEC2017/FP/total	4.99%	-0.16%

zenith.spec2006.O2_generic
Test			loop	slp
SPEC/SPEC2006/FP/total	10.23%	-0.35%
SPEC/SPEC2006/total	6.01%	-0.48%
SPEC/SPEC2006/INT/total	0.31%	-0.66%

zenith.spec2006.O2_generic_lto
Test			loop	slp
SPEC/SPEC2006/FP/total	12.03%	0.82%
SPEC/SPEC2006/total	6.90%	0.44%
SPEC/SPEC2006/INT/total	~	-0.11%

zenith.spec2017.O2_generic
Test			loop	slp
SPEC/SPEC2017/INT/total	7.46%	-0.37%
SPEC/SPEC2017/total	6.81%	0.48%
SPEC/SPEC2017/FP/total	6.31%	1.15%

zenith.spec2017.O2_generic_lto
Test			loop	slp
SPEC/SPEC2017/INT/total	7.81%	-0.22%
SPEC/SPEC2017/total	7.07%	0.44%
SPEC/SPEC2017/FP/total	6.50%	0.94%

So loop vectorization is a consistent win, while SLP is mostly neutral.

The code size growth from loop vectorization is too large for -O2.  SLP
vectorization looks like a slight size win overall.
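
To make the loop/SLP distinction concrete: the loop vectorizer rewrites
counted loops such as the saxpy example above, while the SLP (basic
block) vectorizer fuses groups of isomorphic scalar statements.  A
minimal sketch of the latter (illustrative only, not taken from the
benchmarks):

  /* slp.c -- with -O2 -ftree-slp-vectorize these four isomorphic
     statements can be merged into a single vector add.  */
  void
  quad_add (double *restrict d, const double *restrict s)
  {
    d[0] = s[0] + 1.0;
    d[1] = s[1] + 1.0;
    d[2] = s[2] + 1.0;
    d[3] = s[3] + 1.0;
  }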

Noteworthy regressions caused by SLP are:
  5-6%: xz (kaby, lto), milc (zenith), astar (zenith)
  4-5%: xalancbmk (kaby), blender (kaby)
  3-4%: xz (kaby, nolto), dealII (zenith)
  2-3%: povray (kaby), astar (kaby), perlbench (kaby), sjeng (zenith),
  xz (zenith)

We get a 10% improvement on imagemagick (kaby), 17.5% (zenith), and
imagemagick 6.99% (zenith).

https://lnt.opensuse.org/db_default/v4/CPP/latest_runs_report?younger_in_days=14&older_in_days=0&all_changes=on&min_percentage_change=0.02&revisions=e54acea9e5a821448af97c66e94a1e4c4f977d5d%2Ce87209a1269622017bf3d98bf71502dcb0f893aa%2C73474527aaa24d9236aca074c5494a07f40ce058&include_user_branches=on

This one is for the C++/Polyhedron benchmarks.  It shows several
interesting regressions in Polyhedron and TSVC for loop vectorization
(over 100%), and also some for SLP.  Shall I search bugzilla for these?
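
For triaging those, recompiling a regressing file with the vectorizer's
opt-info flags should show which loops changed; a sketch, assuming
plain gcc invocations outside the LNT harness (the file name is made
up):

  gcc -O2 -ftree-vectorize -fopt-info-vec-optimized -c tsvc.c
  gcc -O2 -ftree-vectorize -fopt-info-vec-missed -c tsvc.c

The first lists the loops that were vectorized; the second reports why
the others were not.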
  
Honza


