Commit cc572242 authored by Kyrylo Tkachov

aarch64: Reduce FP reassociation width for Neoverse V2 and set AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA

The FP reassociation width for Neoverse V2 has been set to 6 since its
introduction, and I assume it was tuned empirically.  But since
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA was added, the tree reassociation
pass has become more deliberate about forming FMAs, and when that flag
is used it evaluates the FMA vs non-FMA reassociation widths more
accurately.
According to the Neoverse V2 SWOG the core has a throughput of 4 for
most FP operations, so the value 6 is not accurate anyway.
Also, the SWOG does state that FMADD operations are pipelined and the
results can be forwarded from FP multiplies to the accumulation operands
of FMADD instructions, which seems to be what
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA expresses.
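
For illustration (not part of the patch), fp_reassoc_width bounds how
many independent accumulators the tree reassociation pass may use when
it splits up a floating-point reduction chain under -Ofast/-ffast-math.
A made-up example of the kind of code affected:

double
dot_product (const double *a, const double *b, int n)
{
  double sum = 0.0;
  for (int i = 0; i < n; i++)
    sum += a[i] * b[i];   /* Each iteration is an FMADD candidate.  */
  return sum;
}

With a reassociation width of 4 the reduction can be rewritten into
roughly four independent partial sums (s0 += a[i]*b[i];
s1 += a[i+1]*b[i+1]; ...; sum = (s0 + s1) + (s2 + s3)), keeping four
FMA chains in flight to match the throughput of 4 described in the
SWOG.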

This patch sets the fp_reassoc_width field to 4 and enables
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA for -mcpu=neoverse-v2.
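
For reference, a rough sketch of the relevant lines in
config/aarch64/tuning_models/neoversev2.h after the change; only the
two fields named in the ChangeLog below are meant to be accurate, the
surrounding struct layout is abridged and the neoversev2_tunings /
tune_params names are quoted from memory:

static const struct tune_params neoversev2_tunings =
{
  /* ... cost tables and other tuning fields elided ...  */
  4,	/* fp_reassoc_width.  Was 6; SWOG FP throughput is 4.  */
  /* ... */
  (/* existing extra tune flags ...  */
   AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA),	/* tune_flags.  */
  /* ... remaining fields elided ...  */
};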

On SPEC2017 fprate I see the following changes on a Grace system:
503.bwaves_r     0.16%
507.cactuBSSN_r -0.32%
508.namd_r       3.04%
510.parest_r     0.00%
511.povray_r     0.78%
519.lbm_r        0.35%
521.wrf_r        0.69%
526.blender_r   -0.53%
527.cam4_r       0.84%
538.imagick_r    0.00%
544.nab_r       -0.97%
549.fotonik3d_r -0.45%
554.roms_r       0.97%
Geomean          0.35%

with -Ofast -mcpu=grace -flto.

So a slight overall improvement, with a meaningful gain in
508.namd_r.

I think other aarch64 tunings should look into
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA as well, but I'll leave that
benchmarking to someone else.

Signed-off-by: Kyrylo Tkachov <ktkachov@nvidia.com>

gcc/ChangeLog:

	* config/aarch64/tuning_models/neoversev2.h (fp_reassoc_width):
	Set to 4.
	(tune_flags): Add AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.