  1. Sep 03, 2024
    • Jan Hubicka's avatar
      Zen5 tuning part 3: scheduler tweaks · e2125a60
      Jan Hubicka authored
This patch adds support for a new fusion in znver5 documented in the
      optimization manual:
      
         The Zen5 microarchitecture adds support to fuse reg-reg MOV Instructions
         with certain ALU instructions. The following conditions need to be met for
         fusion to happen:
           - The MOV should be reg-reg mov with Opcode 0x89 or 0x8B
           - The MOV is followed by an ALU instruction where the MOV and ALU destination register match.
           - The ALU instruction may source only registers or immediate data. There cannot be any memory source.
           - The ALU instruction sources either the source or dest of MOV instruction.
           - If ALU instruction has 2 reg sources, they should be different.
           - The following ALU instructions can fuse with an older qualified MOV instruction:
             ADD ADC AND XOR OP SUB SBB INC DEC NOT SAL / SHL SHR SAR
             (I assume OP is OR)
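
      A hypothetical, self-contained sketch of these conditions as a checker
      (illustrative only; the operand model and names are made up, this is not
      GCC's ix86_fuse_mov_alu_p):

        enum class OpKind { Reg, Imm, Mem };

        struct Operand { OpKind kind; int reg = -1; };

        struct Insn {
          bool is_reg_reg_mov = false;   // opcode 0x89 or 0x8B form
          Operand dest;
          Operand src[2];
          int nsrc = 0;
        };

        // Assumes the caller already verified that the ALU opcode is one of
        // the fusible ones listed above (ADD, ADC, AND, XOR, ...).
        bool can_fuse(const Insn &mov, const Insn &alu) {
          if (!mov.is_reg_reg_mov)
            return false;
          // MOV and ALU destination registers must match.
          if (alu.dest.kind != OpKind::Reg || alu.dest.reg != mov.dest.reg)
            return false;
          bool uses_mov_reg = false;
          for (int i = 0; i < alu.nsrc; ++i) {
            // Only register or immediate sources; no memory source allowed.
            if (alu.src[i].kind == OpKind::Mem)
              return false;
            if (alu.src[i].kind == OpKind::Reg
                && (alu.src[i].reg == mov.dest.reg
                    || alu.src[i].reg == mov.src[0].reg))
              uses_mov_reg = true;
          }
          // The ALU must source either the source or the destination of the MOV.
          if (!uses_mov_reg)
            return false;
          // If the ALU has two register sources, they must be different.
          if (alu.nsrc == 2
              && alu.src[0].kind == OpKind::Reg && alu.src[1].kind == OpKind::Reg
              && alu.src[0].reg == alu.src[1].reg)
            return false;
          return true;
        }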
      
I also increased the issue rate from 4 to 6.  Theoretically znver5 can do more,
but with our model we can't really use it.
Increasing the issue rate to 8 leads to an infinite loop in the scheduler.
      
      Finally, I also enabled fuse_alu_and_branch since it is supported by
      znver5 (I think by earlier zens too).
      
The new fusion pattern moves quite a few instructions around in common code:
      @@ -2210,13 +2210,13 @@
              .cfi_offset 3, -32
              leaq    63(%rsi), %rbx
              movq    %rbx, %rbp
      +       shrq    $6, %rbp
      +       salq    $3, %rbp
              subq    $16, %rsp
              .cfi_def_cfa_offset 48
              movq    %rdi, %r12
      -       shrq    $6, %rbp
      -       movq    %rsi, 8(%rsp)
      -       salq    $3, %rbp
              movq    %rbp, %rdi
      +       movq    %rsi, 8(%rsp)
              call    _Znwm
              movq    8(%rsp), %rsi
              movl    $0, 8(%r12)
      @@ -2224,8 +2224,8 @@
              movq    %rax, (%r12)
              movq    %rbp, 32(%r12)
              testq   %rsi, %rsi
      -       movq    %rsi, %rdx
              cmovns  %rsi, %rbx
      +       movq    %rsi, %rdx
              sarq    $63, %rdx
              shrq    $58, %rdx
              sarq    $6, %rbx
which should help decoder bandwidth and perhaps also the cache, though I was not
able to measure an off-noise effect on SPEC.
      
      gcc/ChangeLog:
      
      	* config/i386/i386.h (TARGET_FUSE_MOV_AND_ALU): New tune.
	* config/i386/x86-tune-sched.cc (ix86_issue_rate): Update for znver5.
      	(ix86_adjust_cost): Add TODO about znver5 memory latency.
      	(ix86_fuse_mov_alu_p): New.
      	(ix86_macro_fusion_pair_p): Use it.
      	* config/i386/x86-tune.def (X86_TUNE_FUSE_ALU_AND_BRANCH): Add ZNVER5.
      	(X86_TUNE_FUSE_MOV_AND_ALU): New tune;
      e2125a60
    • Jonathan Wakely's avatar
      libstdc++: Simplify std::any to fix -Wdeprecated-declarations warning · dee3c5c6
      Jonathan Wakely authored
      We don't need to use std::aligned_storage in std::any. We just need a
      POD type of the right size. The void* union member already ensures the
      alignment will be correct. Avoiding std::aligned_storage means we don't
need to suppress a -Wdeprecated-declarations warning.
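
      The gist of the change, as a minimal sketch (the member names are
      illustrative, not necessarily the exact libstdc++ ones):

        // The void* member already forces pointer size and alignment on the
        // union, so a plain unsigned char buffer can replace the deprecated
        // std::aligned_storage member without changing the layout.
        union _Storage_sketch
        {
            void* _M_ptr;
            unsigned char _M_buffer[sizeof(void*)];
        };

        static_assert(sizeof(_Storage_sketch) == sizeof(void*), "");
        static_assert(alignof(_Storage_sketch) == alignof(void*), "");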
      
      libstdc++-v3/ChangeLog:
      
      	* include/experimental/any (experimental::any::_Storage): Use
      	array of unsigned char instead of deprecated
      	std::aligned_storage.
      	* include/std/any (any::_Storage): Likewise.
      	* testsuite/20_util/any/layout.cc: New test.
      dee3c5c6
    • Dhruv Chawla's avatar
      libstdc++: Add missing feature-test macro in various headers · efe6efb6
      Dhruv Chawla authored
      
      version.syn#2 requires various headers to define
      __cpp_lib_allocator_traits_is_always_equal. Currently, only <memory> was
      defining this macro. Implement fixes for the other headers as well.
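
      An illustrative check of what version.syn#2 requires (any of the listed
      headers should now work in place of <vector>):

        // Before this fix only <memory> defined the macro; after it, including
        // any of the headers above is enough.
        #include <vector>

        #ifndef __cpp_lib_allocator_traits_is_always_equal
        # error "__cpp_lib_allocator_traits_is_always_equal is not defined"
        #endif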
      
Signed-off-by: Dhruv Chawla <dhruvc@nvidia.com>
      
      libstdc++-v3/ChangeLog:
      
      	* include/std/deque: Define macro
      	__glibcxx_want_allocator_traits_is_always_equal.
      	* include/std/forward_list: Likewise.
      	* include/std/list: Likewise.
      	* include/std/map: Likewise.
      	* include/std/scoped_allocator: Likewise.
      	* include/std/set: Likewise.
      	* include/std/string: Likewise.
      	* include/std/unordered_map: Likewise.
      	* include/std/unordered_set: Likewise.
      	* include/std/vector: Likewise.
      	* testsuite/20_util/headers/memory/version.cc: New test.
      	* testsuite/20_util/scoped_allocator/version.cc: Likewise.
      	* testsuite/21_strings/headers/string/version.cc: Likewise.
      	* testsuite/23_containers/deque/version.cc: Likewise.
      	* testsuite/23_containers/forward_list/version.cc: Likewise.
      	* testsuite/23_containers/list/version.cc: Likewise.
      	* testsuite/23_containers/map/version.cc: Likewise.
      	* testsuite/23_containers/set/version.cc: Likewise.
      	* testsuite/23_containers/unordered_map/version.cc: Likewise.
      	* testsuite/23_containers/unordered_set/version.cc: Likewise.
      	* testsuite/23_containers/vector/version.cc: Likewise.
      efe6efb6
    • Jan Hubicka's avatar
      Zen5 tuning part 2: disable gather and scatter · d82edbe9
      Jan Hubicka authored
We disable gathers for zen4.  It seems that gather has improved a bit compared
to zen4, and the Zen5 optimization manual suggests "Avoid GATHER instructions when
the indices are known ahead of time. Vector loads followed by shuffles result
in a higher load bandwidth."  However, the situation seems to be more
complicated.
      
Gather is a 5-10% loss on the parest benchmark as well as a 30% loss on sparse
dot products in TSVC.  Curiously enough, breaking these out into a microbenchmark
reversed the situation, and it turns out that the performance depends on
how the indices are distributed: gather is a loss if indices are sequential,
neutral if they are random, and a win for some strides (4, 8).
      
This seems to be similar to earlier zens, so I think (especially for
backporting znver5 support) that it makes sense to be consistent and disable
gather unless we work out a good heuristic for when to use it.  Since we
typically do not know the indices in advance, I don't see how that can be done.
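
      For context, this is the kind of loop where the vectorizer considers a
      gather, with the indices only known at run time (illustrative example,
      not one of the benchmarks mentioned):

        // b[idx[i]] is an indirect load: with the tunable enabled the
        // vectorizer emits a hardware gather for it, otherwise it falls back
        // to scalar loads plus shuffles.  Which is faster depends on how
        // idx[] is distributed.
        double sparse_dot(const double *a, const double *b, const int *idx, int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; ++i)
                sum += a[i] * b[idx[i]];
            return sum;
        }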
      
I opened PR116582 with some examples of wins and losses.
      
      gcc/ChangeLog:
      
      	* config/i386/x86-tune.def (X86_TUNE_USE_GATHER_2PARTS): Disable for
      	ZNVER5.
      	(X86_TUNE_USE_SCATTER_2PARTS): Disable for ZNVER5.
      	(X86_TUNE_USE_GATHER_4PARTS): Disable for ZNVER5.
      	(X86_TUNE_USE_SCATTER_4PARTS): Disable for ZNVER5.
      	(X86_TUNE_USE_GATHER_8PARTS): Disable for ZNVER5.
      	(X86_TUNE_USE_SCATTER_8PARTS): Disable for ZNVER5.
      d82edbe9
    • H.J. Lu's avatar
      ipa: Don't disable function parameter analysis for fat LTO · 2f1689ea
      H.J. Lu authored
      
Update analyze_parms not to disable function parameter analysis for
-ffat-lto-objects.  Tested on x86-64, there are no differences in zstd
with "-O2 -flto=auto -g" vs. "-O2 -flto=auto -g -ffat-lto-objects".
      
      	PR ipa/116410
      	* ipa-modref.cc (analyze_parms): Always analyze function parameter
      	for LTO.
      
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
      2f1689ea
    • Jeff Law's avatar
      [PR target/115921] Improve reassociation for rv64 · 4371f656
      Jeff Law authored
      As Jovan pointed out in pr115921, we're not reassociating expressions like this
      on rv64:
      
      (x & 0x3e) << 12
      
      It generates something like this:
      
              li      a5,258048
              slli    a0,a0,12
              and     a0,a0,a5
      
We have a pattern that's designed to clean this up, essentially reassociating
the operations so that we don't need to load the constant, resulting in
something like this:
      
              andi    a0,a0,63
              slli    a0,a0,12
      
      That pattern wasn't working for certain constants due to its condition. The
      condition is trying to avoid cases where this kind of reassociation would
      hinder shadd generation on rv64.  That condition was just written poorly.
      
This patch tightens up that condition in a few ways.  First, there's no need to
worry about shadd cases if ZBA is not enabled.  Second, we can't use shadd if
the shift value isn't 1, 2 or 3.  Finally, rather than open-coding one of the
tests, we can use an existing operand predicate.
      
      The net is we'll start performing this transformation in more cases on rv64
      while still avoiding reassociation if it would spoil shadd generation.
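
      For reference, the two shapes are arithmetically equivalent; the pattern
      merely restores the form whose mask fits in an andi immediate
      (illustrative):

        // Both functions compute the same value for any x; only the second
        // needs the large constant 0x3e << 12 materialized in a register.
        unsigned long reassociated(unsigned long x) { return (x & 0x3eUL) << 12; }
        unsigned long canonical(unsigned long x)    { return (x << 12) & (0x3eUL << 12); }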
      
      	PR target/115921
      gcc/
      	* config/riscv/riscv.md (reassociate bitwise ops): Tighten test for
      	cases we do not want reassociate.
      
      gcc/testsuite/
      	* gcc.target/riscv/pr115921.c: New test.
      4371f656
    • Jan Hubicka's avatar
      Zen5 tuning part 1: avoid FMA chains · d6360b40
      Jan Hubicka authored
Testing matrix multiplication benchmarks shows that FMA on a critical chain
is a performance loss compared to separate multiply and add.  While the latency
of 4 is lower than that of multiply + add (3+2), the problem is that all values
need to be ready before the computation starts.
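
      A minimal example of such a critical chain (illustrative):

        // The accumulator is the loop-carried critical chain.  Fused into an
        // FMA, every iteration waits the full FMA latency (4); split into
        // multiply and add, only the add (latency 2) stays on the chain and
        // the multiply can start as soon as a[i] and b[i] are available.
        double dot(const double *a, const double *b, int n)
        {
            double acc = 0.0;
            for (int i = 0; i < n; ++i)
                acc = a[i] * b[i] + acc;
            return acc;
        }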
      
While on znver4 AVX512 code fared well with FMA, that was because of the split
registers.  Znver5 benefits from avoiding FMA on all widths.  This may be
different with the mobile version though.
      
On a naive matrix multiplication benchmark the difference is 8% with -O3
only, since with -Ofast loop interchange solves the problem differently.
It is a 30% win, for example, on s323 from TSVC:
      
      real_t s323(struct args_t * func_args)
      {
      
      //    recurrences
      //    coupled recurrence
      
          initialise_arrays(__func__);
          gettimeofday(&func_args->t1, NULL);
      
          for (int nl = 0; nl < iterations/2; nl++) {
              for (int i = 1; i < LEN_1D; i++) {
                  a[i] = b[i-1] + c[i] * d[i];
                  b[i] = a[i] + c[i] * e[i];
              }
              dummy(a, b, c, d, e, aa, bb, cc, 0.);
          }
      
          gettimeofday(&func_args->t2, NULL);
          return calc_checksum(__func__);
      }
      
      gcc/ChangeLog:
      
      	* config/i386/x86-tune.def (X86_TUNE_AVOID_128FMA_CHAINS): Enable for
      	znver5.
      	(X86_TUNE_AVOID_256FMA_CHAINS): Likewise.
      	(X86_TUNE_AVOID_512FMA_CHAINS): Likewise.
      d6360b40
    • Tobias Burnus's avatar
      LTO/WPA: Ensure that output_offload_tables only writes table once [PR116535] · 2fcccf21
      Tobias Burnus authored
      When ltrans was written concurrently, e.g. via -flto=N (N > 1, assuming
sufficient partitions, e.g., via -flto-partition=max), output_offload_tables
      wrote the output tables once per fork.
      
      	PR lto/116535
      
      gcc/ChangeLog:
      
      	* lto-cgraph.cc (output_offload_tables): Remove offload_ frees.
      	* lto-streamer-out.cc (lto_output): Make call to it depend on
      	lto_get_out_decl_state ()->output_offload_tables_p.
      	* lto-streamer.h (struct lto_out_decl_state): Add
      	output_offload_tables_p field.
      	* tree-pass.h (ipa_write_optimization_summaries): Add bool argument.
      	* passes.cc (ipa_write_summaries_1): Add bool
      	output_offload_tables_p arg.
      	(ipa_write_summaries): Update call.
      	(ipa_write_optimization_summaries): Accept output_offload_tables_p.
      
      gcc/lto/ChangeLog:
      
      	* lto.cc (stream_out): Update call to
      	ipa_write_optimization_summaries to pass true for first partition.
      2fcccf21
    • Szabolcs Nagy's avatar
      MAINTAINERS: Update my email address · ce5f2dc4
      Szabolcs Nagy authored
      	* MAINTAINERS: Update my email address and add myself to DCO.
      ce5f2dc4
    • Richard Biener's avatar
      tree-optimization/116575 - avoid ICE with SLP mask_load_lane · ac6cd62a
      Richard Biener authored
The following avoids performing re-discovery with single lanes in
the attempt to allow for the use of mask_load_lane, as rediscovery will
fail since a single lane of a mask load will appear permuted, which
isn't supported.
      
      	PR tree-optimization/116575
      	* tree-vect-slp.cc (vect_analyze_slp): Properly compute
      	the mask argument for vect_load/store_lanes_supported.
      	When the load is masked for now avoid rediscovery.
      
      	* gcc.dg/vect/pr116575.c: New testcase.
      ac6cd62a
    • Haochen Jiang's avatar
i386: Fix vfpclassph non-optimized intrin · 9b312595
      Haochen Jiang authored
The non-optimized intrin has a typo in the mask type, which will cause
the high bits of __mmask32 to be unexpectedly zeroed.
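
      For context, a 512-bit vector holds 32 _Float16 elements, so the
      classification result needs all 32 mask bits (illustrative use, assuming
      AVX512-FP16 is enabled):

        #include <immintrin.h>

        // Returning this through a narrower mask type would silently zero the
        // high lanes of the result.
        __mmask32 classify_qnan(__m512h v)
        {
            return _mm512_fpclass_ph_mask(v, 0x01 /* QNaN category */);
        }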
      
The test does not fail under -O0 with the current 1b test since the testcase is
wrong.  We need to include avx512-mask-type.h after SIZE is defined, or
it will always be __mmask8.  That problem also happened in AVX10.2 testcases.
I will write a separate patch to fix that.
      
      gcc/ChangeLog:
      
      	* config/i386/avx512fp16intrin.h
      	(_mm512_mask_fpclass_ph_mask): Correct mask type to __mmask32.
      	(_mm512_fpclass_ph_mask): Ditto.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/i386/avx512fp16-vfpclassph-1c.c: New test.
      9b312595
    • Richard Biener's avatar
      Do not assert NUM_POLY_INT_COEFFS != 1 early · 14b65af6
      Richard Biener authored
      The following moves the assert on NUM_POLY_INT_COEFFS != 1 after
      INTEGER_CST processing.
      
      	* fold-const.cc (poly_int_binop): Move assert on
      	NUM_POLY_INT_COEFFS after INTEGER_CST processing.
      14b65af6
    • Jakub Jelinek's avatar
      lower-bitint: Fix up __builtin_{add,sub}_overflow{,_p} bitint lowering [PR116501] · d4d75a83
      Jakub Jelinek authored
      The following testcase is miscompiled.  The problem is in the last_ovf step.
      The second operand has signed _BitInt(513) type but has the MSB clear,
      so range_to_prec returns 512 for it (i.e. it fits into unsigned
      _BitInt(512)).  Because of that the last step actually doesn't need to get
      the most significant bit from the second operand, but the code was deciding
      what to use purely from TYPE_UNSIGNED (type1) - if unsigned, use 0,
otherwise sign-extend the last processed bit; but that bit was set in this case.
      We don't want to treat the positive operand as if it was negative regardless
      of the bit below that precision, and precN >= 0 indicates that the operand
      is in the [0, inf) range.
      
      2024-09-03  Jakub Jelinek  <jakub@redhat.com>
      
      	PR tree-optimization/116501
      	* gimple-lower-bitint.cc (bitint_large_huge::lower_addsub_overflow):
      	In the last_ovf case, use build_zero_cst operand not just when
      	TYPE_UNSIGNED (typeN), but also when precN >= 0.
      
      	* gcc.dg/torture/bitint-73.c: New test.
      d4d75a83
    • Eric Botcazou's avatar
      ada: Add kludge for quirk of ancient 32-bit ABIs to previous change · a19cf635
      Eric Botcazou authored
      Some ancient 32-bit ABIs, most notably that of x86/Linux, misalign double
      scalars in record types, so comparing DECL_ALIGN with TYPE_ALIGN directly
      may give the wrong answer for them.
      
      gcc/ada/
      
      	* gcc-interface/trans.cc (addressable_p) <COMPONENT_REF>: Add kludge
      	to cope with ancient 32-bit ABIs.
      a19cf635
    • Eric Botcazou's avatar
      ada: Plug loophole exposed by previous change · 9362abf5
      Eric Botcazou authored
      The change causes more temporaries to be created at call sites for unaligned
      actual parameters, thus revealing that the machinery does not properly deal
      with unconstrained nominal subtypes for them.
      
      gcc/ada/
      
      	* gcc-interface/trans.cc (create_temporary): Deal with types whose
      	size is self-referential by allocating the maximum size.
      9362abf5
    • Eric Botcazou's avatar
      ada: Fix internal error with Atomic Volatile_Full_Access object · 0a862c5a
      Eric Botcazou authored
      The initial implementation of the GNAT aspect/pragma Volatile_Full_Access
      made it incompatible with Atomic, because it was not decided whether the
      read-modify-write sequences generated by Volatile_Full_Access would need
      to be implemented atomically when Atomic was also specified, which would
      have required a compare-and-swap primitive from the target architecture.
      
      But Ada 2022 introduced Full_Access_Only and retrofitted it into Atomic
      in the process, answering the above question by the negative, so the
      incompatibility between Volatile_Full_Access and Atomic was lifted in
      Ada 2012 as well, unfortunately without adjusting the implementation.
      
      gcc/ada/
      
      	* gcc-interface/trans.cc (get_atomic_access): Deal specifically with
      	nodes that are both Atomic and Volatile_Full_Access in Ada 2012.
      0a862c5a
    • Eric Botcazou's avatar
      ada: Pass unaligned record components by copy in calls on all platforms · d8d19146
      Eric Botcazou authored
      This has historically been done only on platforms requiring the strict
      alignment of memory references, but this can arguably be considered as
      being mandated by the language on all of them.
      
      gcc/ada/
      
      	* gcc-interface/trans.cc (addressable_p) <COMPONENT_REF>: Take into
      	account the alignment of the field on all platforms.
      d8d19146
    • Eric Botcazou's avatar
      ada: Fix internal error on pragma pack with discriminated record component · 9ba7262c
      Eric Botcazou authored
      When updating the size after making a packable type in gnat_to_gnu_field,
      we fail to clear it again when it is not constant.
      
      gcc/ada/
      
      	* gcc-interface/decl.cc (gnat_to_gnu_field): Clear again gnu_size
      	after updating it if it is not constant.
      9ba7262c
    • Marc Poulhiès's avatar
      ada: Simplify Note_Uplevel_Bound procedure · b3f6a790
      Marc Poulhiès authored
      The procedure Note_Uplevel_Bound was implemented as a custom expression
      tree walk. This change replaces this custom tree traversal by a more
      idiomatic use of Traverse_Proc.
      
      gcc/ada/
      
      	* exp_unst.adb (Check_Static_Type::Note_Uplevel_Bound): Refactor
      	to use the generic Traverse_Proc.
      	(Check_Static_Type): Adjust calls to Note_Uplevel_Bound as the
      	previous second parameter was unused, so removed.
      b3f6a790
    • Steve Baird's avatar
      ada: Transform Length attribute references for non-Strict overflow mode. · 1ef11f4b
      Steve Baird authored
      The non-strict overflow checking code does a better job of eliminating
      overflow checks if given an expression consisting only of predefined
      operators (including relationals), literals, identifiers, and conditional
      expressions. If it is both feasible and useful, rewrite a
      Length attribute reference as such an expression. "Feasible" means
      "index type is same type as attribute reference type, so we can rewrite without
      using type conversions". "Useful" means "Overflow_Mode is something other than
      Strict, so there is value in making overflow check elimination easier".
      
      gcc/ada/
      
      	* exp_attr.adb (Expand_N_Attribute_Reference): If it makes sense
      	to do so, then rewrite a Length attribute reference as an
      	equivalent conditional expression.
      1ef11f4b
    • Eric Botcazou's avatar
      ada: Do not warn for partial access to Atomic Volatile_Full_Access objects · d7e110d8
      Eric Botcazou authored
      The initial implementation of the GNAT aspect/pragma Volatile_Full_Access
      made it incompatible with Atomic, because it was not decided whether the
      read-modify-write sequences generated by Volatile_Full_Access would need
      to be implemented atomically when Atomic was also specified, which would
      have required a compare-and-swap primitive from the target architecture.
      
      But Ada 2022 introduced Full_Access_Only and retrofitted it into Atomic
      in the process, answering the above question by the negative, so the
      incompatibility between Volatile_Full_Access and Atomic was lifted in
      Ada 2012 as well, but the implementation was not entirely adjusted.
      
      In Ada 2012, it does not make sense to warn for the partial access to an
      Atomic object if the object is also declared Volatile_Full_Access, since
      the object will be accessed as a whole in this case (like in Ada 2022).
      
      gcc/ada/
      
      	* sem_res.adb (Is_Atomic_Ref_With_Address): Rename into...
      	(Is_Atomic_Non_VFA_Ref_With_Address): ...this and adjust the
      	implementation to exclude Volatile_Full_Access objects.
      	(Resolve_Indexed_Component): Adjust to above renaming.
      	(Resolve_Selected_Component): Likewise.
      d7e110d8
    • Steve Baird's avatar
      ada: Reject illegal array aggregates as per AI22-0106. · e083e728
      Steve Baird authored
      Implement the new legality rules of AI22-0106 which (as discussed in the AI)
      are needed to disallow constructs whose semantics would otherwise be poorly
      defined.
      
      gcc/ada/
      
      	* sem_aggr.adb (Resolve_Array_Aggregate): Implement the two new
	legality rules of AI22-0106. Add code to avoid cascading error
      	messages.
      e083e728
    • Bob Duff's avatar
      ada: Fix Finalize_Storage_Only bug in b-i-p calls · b776b08b
      Bob Duff authored
      Do not pass null for the Collection parameter when
      Finalize_Storage_Only is in effect. If the collection
      is null in that case, we will blow up later when we
      deallocate the object.
      
      gcc/ada/
      
      	* exp_ch6.adb (Add_Collection_Actual_To_Build_In_Place_Call):
      	Remove Finalize_Storage_Only from the code that checks whether to
      	pass null to the Collection parameter. Having done that, we don't
      	need to check for Is_Library_Level_Entity, because
      	No_Heap_Finalization requires that. And if we ever change
      	No_Heap_Finalization to allow nested access types, we will still
      	want to pass null. Note that the comment "Such a type lacks a
      	collection." is incorrect in the case of Finalize_Storage_Only;
      	such types have a collection.
      b776b08b
    • Jennifer Schmitz's avatar
      SVE intrinsics: Fold constant operands for svmul. · 6b1cf59e
      Jennifer Schmitz authored
      
      This patch implements constant folding for svmul by calling
      gimple_folder::fold_const_binary with tree_code MULT_EXPR.
      Tests were added to check the produced assembly for different
      predicates, signed and unsigned integers, and the svmul_n_* case.
      
      The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
      OK for mainline?
      
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
      
      gcc/
      	* config/aarch64/aarch64-sve-builtins-base.cc (svmul_impl::fold):
      	Try constant folding.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/const_fold_mul_1.c: New test.
      6b1cf59e
    • Jennifer Schmitz's avatar
      SVE intrinsics: Fold constant operands for svdiv. · ee8b7231
      Jennifer Schmitz authored
      
      This patch implements constant folding for svdiv:
      The new function aarch64_const_binop was created, which - in contrast to
      int_const_binop - does not treat operations as overflowing. This function is
      passed as callback to vector_const_binop from the new gimple_folder
      method fold_const_binary, if the predicate is ptrue or predication is _x.
      From svdiv_impl::fold, fold_const_binary is called with TRUNC_DIV_EXPR as
      tree_code.
      In aarch64_const_binop, a case was added for TRUNC_DIV_EXPR to return 0
      for division by 0, as defined in the semantics for svdiv.
      Tests were added to check the produced assembly for different
      predicates, signed and unsigned integers, and the svdiv_n_* case.
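
      For instance, a call with an all-true predicate and constant operands can
      now be folded at compile time (illustrative source; the expected folds
      are shown as comments):

        #include <arm_sve.h>

        svint64_t fold_example(void)
        {
            svbool_t pg = svptrue_b64();
            // Constant operands: folds to svdup_s64(2).
            svint64_t q = svdiv_x(pg, svdup_s64(6), svdup_s64(3));
            // Division by zero folds to 0, per the svdiv semantics noted above.
            return svdiv_x(pg, q, svdup_s64(0));
        }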
      
      The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
      OK for mainline?
      
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
      
      gcc/
      	* config/aarch64/aarch64-sve-builtins-base.cc (svdiv_impl::fold):
      	Try constant folding.
      	* config/aarch64/aarch64-sve-builtins.h: Declare
      	gimple_folder::fold_const_binary.
      	* config/aarch64/aarch64-sve-builtins.cc (aarch64_const_binop):
      	New function to fold binary SVE intrinsics without overflow.
      	(gimple_folder::fold_const_binary): New helper function for
      	constant folding of SVE intrinsics.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/const_fold_div_1.c: New test.
      ee8b7231
    • Jennifer Schmitz's avatar
      SVE intrinsics: Refactor const_binop to allow constant folding of intrinsics. · 87217bea
      Jennifer Schmitz authored
      
      This patch sets the stage for constant folding of binary operations for SVE
      intrinsics:
      In fold-const.cc, the code for folding vector constants was moved from
      const_binop to a new function vector_const_binop. This function takes a
      function pointer as argument specifying how to fold the vector elements.
      The intention is to call vector_const_binop from the backend with an
      aarch64-specific callback function.
      The code in const_binop for folding operations where the first operand is a
      vector constant and the second argument is an integer constant was also moved
into vector_const_binop to allow folding of binary SVE intrinsics where
      the second operand is an integer (_n).
      To allow calling poly_int_binop from the backend, the latter was made public.
      
      The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
      OK for mainline?
      
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
      
      gcc/
      	* fold-const.h: Declare vector_const_binop.
      	* fold-const.cc (const_binop): Remove cases for vector constants.
      	(vector_const_binop): New function that folds vector constants
      	element-wise.
      	(int_const_binop): Remove call to wide_int_binop.
      	(poly_int_binop): Add call to wide_int_binop.
      87217bea
    • Richard Biener's avatar
      Handle mixing REALPART/IMAGPART with other components in SLP groups · 7c9394e8
      Richard Biener authored
      The following makes sure we handle a SLP load/store group from
      a structure with complex and scalar members.  This for example
      happens in gcc.target/i386/pr106010-9a.c.
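
      An illustrative shape of such a group (hypothetical, not the exact
      testcase):

        // The stores form one group that mixes REALPART/IMAGPART accesses of
        // the complex member with ordinary scalar members of the same
        // structure.
        struct P { _Complex float c; float a; float b; };

        void init(P *p, float x, float y, float z, float w)
        {
            __real__ p->c = x;   // REALPART_EXPR
            __imag__ p->c = y;   // IMAGPART_EXPR
            p->a = z;
            p->b = w;
        }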
      
      	* tree-vect-slp.cc (vect_build_slp_tree_1): Handle mixing
      	all of handled components besides ARRAY_RANGE_REF, drop
      	handling of INDIRECT_REF.
      7c9394e8
    • Richard Biener's avatar
      Correctly handle store IFNs in vect_get_vector_types_for_stmt · 340ca743
      Richard Biener authored
      Currently vect_get_vector_types_for_stmt only special-cases
      IFN_MASK_STORE but there are now very many variants and simply
      passing analysis without setting *VECTYPE will ICE duing SLP
      discovery (noticed with IFN_SCATTER_STORE).  The following
      properly uses internal_store_fn_p.  I also noticed we're
unnecessarily handling those again to determine the scalar type
      but there should always be a data reference for them.
      
      	* tree-vect-stmts.cc (vect_get_vector_types_for_stmt):
      	Handle all internal_store_fn_p the same.  Remove special-casing
      	for the scalar_type of IFN_MASK_STORE.
      340ca743
    • Levy Hsu's avatar
      i386: Support partial vectorized V2BF/V4BF smaxmin · 62df24e5
      Levy Hsu authored
This patch supports smaxmin for partial vectorized V2BF/V4BF.
      
      gcc/ChangeLog:
      
	* config/i386/mmx.md (<code><mode>3): New define_expand for V2BF/V4BF smaxmin.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/i386/avx10_2-partial-bf-vector-smaxmin-1.c: New test.
      62df24e5
    • Levy Hsu's avatar
      i386: Support partial vectorized V2BF/V4BF plus/minus/mult/div/sqrt · 8e16f26c
      Levy Hsu authored
      This patch introduces new mode iterators and expands for the i386 architecture to support partial vectorization of bf16 operations using AVX10.2 instructions.
      
      gcc/ChangeLog:
      
      	* config/i386/mmx.md (VBF_32_64): New mode iterator for partial vectorized V2BF/V4BF.
      	(<insn><mode>3): New define_expand for plusminusmultdiv.
      	(sqrt<mode>2): New define_expand for sqrt.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/i386/avx10_2-partial-bf-vector-fast-math-1.c: New test.
      	* gcc.target/i386/avx10_2-partial-bf-vector-operations-1.c: New test.
      8e16f26c
    • Pan Li's avatar
      RISC-V: Support form 1 of integer scalar .SAT_ADD · 539fcaae
      Pan Li authored
      
      This patch would like to support the scalar signed ssadd pattern
      for the RISC-V backend.  Aka
      
      Form 1:
        #define DEF_SAT_S_ADD_FMT_1(T, UT, MIN, MAX) \
        T __attribute__((noinline))                  \
        sat_s_add_##T##_fmt_1 (T x, T y)             \
        {                                            \
          T sum = (UT)x + (UT)y;                     \
          return (x ^ y) < 0                         \
            ? sum                                    \
            : (sum ^ x) >= 0                         \
              ? sum                                  \
              : x < 0 ? MIN : MAX;                   \
        }
      
      DEF_SAT_S_ADD_FMT_1(int64_t, uint64_t, INT64_MIN, INT64_MAX)
      
      Before this patch:
        10   │ sat_s_add_int64_t_fmt_1:
        11   │     mv   a5,a0
        12   │     add  a0,a0,a1
        13   │     xor  a1,a5,a1
        14   │     not  a1,a1
        15   │     xor  a4,a5,a0
        16   │     and  a1,a1,a4
        17   │     blt  a1,zero,.L5
        18   │     ret
        19   │ .L5:
        20   │     srai a5,a5,63
        21   │     li   a0,-1
        22   │     srli a0,a0,1
        23   │     xor  a0,a5,a0
        24   │     ret
      
      After this patch:
        10   │ sat_s_add_int64_t_fmt_1:
        11   │     add  a2,a0,a1
        12   │     xor  a1,a0,a1
        13   │     xor  a5,a0,a2
        14   │     srli a5,a5,63
        15   │     srli a1,a1,63
        16   │     xori a1,a1,1
        17   │     and  a5,a5,a1
        18   │     srai a4,a0,63
        19   │     li   a3,-1
        20   │     srli a3,a3,1
        21   │     xor  a3,a3,a4
        22   │     neg  a4,a5
        23   │     and  a3,a3,a4
        24   │     addi a5,a5,-1
        25   │     and  a0,a2,a5
        26   │     or   a0,a0,a3
        27   │     ret
      
      The below test suites are passed for this patch:
1. The rv64gcv full regression test.
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-protos.h (riscv_expand_ssadd): Add new func
      	decl for expanding ssadd.
      	* config/riscv/riscv.cc (riscv_gen_sign_max_cst): Add new func
      	impl to gen the max int rtx.
      	(riscv_expand_ssadd): Add new func impl to expand the ssadd.
      	* config/riscv/riscv.md (ssadd<mode>3): Add new pattern for
      	signed integer .SAT_ADD.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/riscv/sat_arith.h: Add test helper macros.
      	* gcc.target/riscv/sat_arith_data.h: Add test data.
      	* gcc.target/riscv/sat_s_add-1.c: New test.
      	* gcc.target/riscv/sat_s_add-2.c: New test.
      	* gcc.target/riscv/sat_s_add-3.c: New test.
      	* gcc.target/riscv/sat_s_add-4.c: New test.
      	* gcc.target/riscv/sat_s_add-run-1.c: New test.
      	* gcc.target/riscv/sat_s_add-run-2.c: New test.
      	* gcc.target/riscv/sat_s_add-run-3.c: New test.
      	* gcc.target/riscv/sat_s_add-run-4.c: New test.
      	* gcc.target/riscv/scalar_sat_binary_run_xxx.h: New test.
      
Signed-off-by: Pan Li <pan2.li@intel.com>
      539fcaae
    • GCC Administrator's avatar
      Daily bump. · 519ec1cf
      GCC Administrator authored
      519ec1cf
    • YunQiang Su's avatar
      MIPS: Support vector reduc for MSA · f4f72f9b
      YunQiang Su authored
We have SHF.fmt and HADD_S/U.fmt with MSA, which can be used for
vector reductions.
      
      For min/max for U8/S8, we can
      	SHF.B W1, W0, 0xb1  # swap byte inner every half
      	MIN.B W1, W1, W0
      	SHF.H W2, W1, 0xb1  # swap half inner every word
      	MIN.B W2, W2, W1
      	SHF.W W3, W2, 0xb1  # swap word inner every doubleword
      	MIN.B W4, W3, W2
      	SHF.W W4, W4, 0x4e  # swap the two doubleword
      	MIN.B W4, W4, W3
      
      For plus of S8/U8, we can use HADD
      	HADD.H	W0, W0, W0
      	HADD.W	W0, W0, W0
      	HADD.D	W0, W0, W0
      	SHF.W	W1, W0, 0x4e  # swap the two doubleword
      	ADDV.D	W1, W1, W0
      	COPY_S.B  T0, W1      # COPY_U.B for U8
      
      We can do similar for S16/U16/S32/U32/S64/U64/FLOAT/DOUBLE.
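
      The scalar semantics being implemented is simply a reduction over all 16
      byte lanes; the SHF/MIN sequence above reaches it in log2(16) = 4
      pairwise steps (illustrative):

        #include <algorithm>
        #include <cstdint>

        // What reduc_umin_scal_v16qi computes, written out lane by lane.
        uint8_t reduc_umin_v16qi(const uint8_t v[16])
        {
            uint8_t m = v[0];
            for (int i = 1; i < 16; ++i)
                m = std::min(m, v[i]);
            return m;
        }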
      
      gcc
      
	* config/mips/mips-msa.md (MSA_NO_HADD): We have HADD for
	S8/U8/S16/U16/S32/U32 only.
      	(reduc_smin_scal_<mode>): New define pattern.
      	(reduc_smax_scal_<mode>): Ditto.
      	(reduc_umin_scal_<mode>): Ditto.
      	(reduc_umax_scal_<mode>): Ditto.
      	(reduc_plus_scal_<mode>): Ditto.
      	(reduc_plus_scal_v4si): Ditto.
      	(reduc_plus_scal_v8hi): Ditto.
      	(reduc_plus_scal_v16qi): Ditto.
      	(reduc_<optab>_scal_<mode>): Ditto.
      	* config/mips/mips-protos.h: New function mips_expand_msa_reduc.
      	* config/mips/mips.cc: New function mips_expand_msa_reduc.
      	* config/mips/mips.md: Define any_bitwise iterator.
      
      gcc/testsuite:
      
      	* gcc.target/mips/msa-reduc.c: New tests.
      f4f72f9b
  2. Sep 02, 2024
    • Jakub Jelinek's avatar
      testsuite: Fix optimize_one.c FAIL on i686-linux · b64980b0
      Jakub Jelinek authored
      The test FAILs on i686-linux because -mfpmath=sse is used without
      -msse2 being enabled.
      
      2024-09-02  Jakub Jelinek  <jakub@redhat.com>
      
      	* gcc.target/i386/optimize_one.c: Add -msse2 to dg-options.
      b64980b0
    • Alexandre Oliva's avatar
      [libstdc++-v3] [testsuite] improve future/*/poll.cc calibration · af1500dd
      Alexandre Oliva authored
      30_threads/future/members/poll.cc has calibration code that, on
      systems with very low clock resolution, may spuriously fail to run.
      Even when it does run, low resolution and reasonable
timeouts severely limit the viability of increasing the loop counts so
      as to reduce measurement noise, so we end up with very noisy results.
      
      On various vxworks targets, high iteration count (low-noise)
      measurements confirmed that some of the operations that we expected to
      be up to 100x slower than the fastest ones can run a little slower
      than that and, with significant noise, may seem to be even slower,
      comparatively.
      
      Bump the factors up to 200x, so that we have plenty of margin over
      measured results.
      
      
      for  libstdc++-v3/ChangeLog
      
      	* testsuite/30_threads/future/members/poll.cc: Factor out
      	calibration, and run it unconditionally.  Lower its
      	strictness.  Bump wait_until_*'s slowness factor.
      af1500dd
    • Alexandre Oliva's avatar
      [libstdc++] [testsuite] avoid async.cc loss of precision [PR91486] · 410061b1
      Alexandre Oliva authored
      When we get to test_pr91486_wait_until(), we're about 10s past the
      float_steady_clock epoch.  This is enough for the 1s delta for the
      timeout to come out slightly lower when the futex-less wait_until
      converts the deadline from float_steady_clock to __clock_t.  So we may
      wake up a little too early, and end up looping one extra time to sleep
      for e.g. another 954ns until we hit the deadline.
      
      Each iteration calls float_steady_clock::now(), bumping the call_count
      that we VERIFY() at the end of the subtest.  Since we expect at most 3
      calls, and we're going to have at the very least 3 on futex-less
      targets (one in the test proper, one before wait_until_impl to compute
      the deadline, and one after wait_until_impl to check whether the
      deadline was hit), any such imprecision that causes an extra iteration
      will reach 5 and cause the test to fail.
      
      Initializing the epoch in the beginning of the test makes such
      spurious fails due to loss of precision far less likely.  I don't
      suppose allowing for an extra couple of calls would be desirable.
      
      While at that, I'm annotating unused status variables as such.
      
      
      for  libstdc++-v3/ChangeLog
      
      	PR libstdc++/91486
      	* testsuite/30_threads/async/async.cc
      	(test_pr91486_wait_for): Mark status as unused.
      	(test_pr91486_wait_until): Likewise.  Initialize epoch later.
      410061b1
    • Alexandre Oliva's avatar
      [testsuite] add linkonly to dg-additional-sources [PR115295] · 9223d171
      Alexandre Oliva authored
      The D testsuite shows it was a mistake to assume that
      dg-additional-sources are never to be used for compilation tests.
      Even if an output file is specified for compilation, extra module
      files can be named and used in the compilation without being flagged
      as errors.
      
      Introduce a 'linkonly' flag for dg-additional-sources, and use it in
      pr95401.cc and other vector tests that default to run, so that its
      additional sources get discarded when vector tests downgrade to
      compile-only.  This reverts previous workarounds for this very
      circumstance, that relied on being able to run vector tests anyway,
      even after failing to detect runtime or hardware vector support.
      
      
      for  gcc/ChangeLog
      
      	PR d/115295
      	* doc/sourcebuild.texi (dg-additional-sources): Add linkonly.
      
      for  gcc/testsuite/ChangeLog
      
      	PR d/115295
      	* g++.dg/vect/pr95401.cc: Add linkonly to dg-additional-sources.
      	* g++.dg/vect/pr68762-1.cc: Likewise.
      	* g++.dg/vect/simd-clone-3.cc: Likewise.
      	* g++.dg/vect/simd-clone-5.cc: Likewise.
      	* gcc.dg/vect/vect-simd-clone-10.c: Likewise.  Drop dg-do run.
      	* gcc.dg/vect/vect-simd-clone-12.c: Likewise.  Likewise.
      	* lib/gcc-defs.exp (additional_sources_omit_on_compile): New.
      	(dg-additional-sources): Add to it on linkonly.
      	(dg-additional-files-options): Omit select sources on compile.
      9223d171
    • Andrew Stubbs's avatar
      amdgcn: Remove TARGET_GCN5_PLUS · b9bf0c3f
      Andrew Stubbs authored
      Now that GCN3 support is gone, TARGET_GCN5_PLUS always evaluates to true, so
      we can make that code unconditional, and remove all the "else" cases.
      
      The ISA features TARGET_GLOBAL_ADDRSPACE, TARGET_FLAT_OFFSETS,
      TARGET_EXPLICIT_CARRY, and TARGET_MULTIPLY_IMMEDIATE, are similarly also
      redundant and can be made unconditional.
      
      The naming of the "gcc_version" attribute has been confusing since the "rdna"
      attribute was added and this makes it worse, so it has been renamed to "cdna".
      
      The add-with-carry assembler mnemonics no longer have two forms, so '%^' can be
      removed.
      
      gcc/ChangeLog:
      
      	* config/gcn/gcn-opts.h (TARGET_GCN5_PLUS): Delete.
      	(TARGET_GLOBAL_ADDRSPACE): Delete.
      	(TARGET_FLAT_OFFSETS): Delete.
      	(TARGET_EXPLICIT_CARRY): Delete.
      	(TARGET_MULTIPLY_IMMEDIATE): Delete.
      	* config/gcn/gcn-valu.md (*mov<mode>): Rename "gcn_version" to "cdna".
      	(*mov<mode>_4reg): Likewise.
	(@mov<mode>_sgprbase): Likewise.
      	(gather<mode>_insn_1offset<exec>): Likewise.
      	(gather<mode>_insn_1offset_ds<exec>): Likewise.
      	(gather<mode>_insn_2offsets<exec>): Likewise.
      	(scatter<mode>_insn_1offset<exec_scatter>): Likewise.
      	(scatter<mode>_insn_1offset_ds<exec_scatter>): Likewise.
      	(scatter<mode>_insn_2offsets<exec_scatter>): Likewise.
      	(gather<mode>_insn_1offset<exec>): Remove TARGET_FLAT_OFFSETS
      	conditionals.
      	(scatter<mode>_insn_1offset<exec_scatter>): Likewise.
      	(scatter<mode>_insn_1offset<exec_scatter>): Likewise.
      	(add<mode>3<exec_clobber>): Use "_co" instead of "%^".
      	(add<mode>3_dup<exec_clobber>): Likewise.
      	(add<mode>3_vcc<exec_vcc>): Likewise.
      	(add<mode>3_vcc_dup<exec_vcc>): Likewise.
      	(addc<mode>3<exec_vcc>): Likewise.
      	(sub<mode>3<exec_clobber>): Likewise.
      	(sub<mode>3_vcc<exec_vcc>): Likewise.
      	(subc<mode>3<exec_vcc>): Likewise.
      	(*plus_carry_dpp_shr_<mode>): Likewise.
      	(*plus_carry_in_dpp_shr_<mode>): Likewise.
      	* config/gcn/gcn.cc (gcn_flat_address_p): Remove TARGET_FLAT_OFFSETS
      	conditionals.
      	(gcn_addr_space_legitimate_address_p): Likewise.
      	(gcn_addr_space_legitimize_address): Likewise.
      	(gcn_expand_scalar_to_vector_address): Likewise.
      	(print_operand_address): Likewise, and TARGET_GLOBAL_ADDRSPACE also.
      	(print_operand): Remove "%^" operand code.
      	Remove TARGET_GLOBAL_ADDRSPACE assertion.
      	* config/gcn/gcn.h (STACK_ADDR_SPACE): Remove GCN5 conditional.
      	* config/gcn/gcn.md (gcn_version): Rename attribute ...
      	(cdna): ... to this, and remove the gcn3 and gcn5 values.
      	(enabled): Replace old "gcn_version" logic with new "cdna" logic.
      	(*mov<mode>_insn): Rename "gcn_version" to "cdna".
      	(*movti_insn): Likewise.
      	(addsi3): Use "_co" instead of "%^".
      	(addsi3_scalar_carry): Likewise.
      	(addsi3_scalar_carry_cst): Likewise.
      	(addcsi3_scalar): Likewise.
      	(addcsi3_scalar_zero): Likewise.
      	(addptrdi3): Likewise.
      	(subsi3): Likewise.
      	(<su>mulsi3_highpart): Remove TARGET_MULTIPLY_IMMEDIATE conditions.
      	(<su>mulsi3_highpart_reg): Remove "gcn_version" attribute.
      	(muldi3): Likewise.
      	(atomic_fetch_<bare_mnemonic><mode>): Likewise.
      	(atomic_<bare_mnemonic><mode>): Likewise.
      	(sync_compare_and_swap<mode>_insn): Likewise.
      	(atomic_load<mode>): Likewise.
      	(atomic_store<mode>): Likewise.
      	(atomic_exchange<mode>): Likewise.
      	(<su>mulsi3_highpart_imm): Remove both TARGET_MULTIPLY_IMMEDIATE and
      	"gcn_version".
      	(<su>mulsidi3): Likewise.
      	(<su>mulsidi3_imm): Likewise.
      b9bf0c3f
    • Andrew Stubbs's avatar
      amdgcn: Remove TARGET_GCN3 · 023641d9
      Andrew Stubbs authored
The only GCN3 ISA device (Fiji, gfx803) was removed, so all the GCN3-specific
      code and features can be removed from the back-end.
      
      gcc/ChangeLog:
      
      	* config/gcn/gcn-opts.h (enum gcn_isa): Delete ISA_GCN3.
      	(TARGET_GCN3): Delete.
      	(TARGET_GCN3_PLUS): Delete.
      	(TARGET_M0_LDS_LIMIT): Delete.
      	* config/gcn/gcn-valu.md
      	(gather<mode>_insn_1offset<exec>): Remove TARGET_GCN3 from conditions.
      	(*<reduc_op>_dpp_shr_<mode>): Likewise.
      	* config/gcn/gcn.cc (enum gcn_isa): Change default to ISA_GCN5.
      	(gcn_expand_prologue): Remove TARGET_M0_LDS_LIMIT feature.
      	(gcn_expand_reduc_scalar): Remove TARGET_GCN3 conditions.
      	* config/gcn/gcn.h (TARGET_CPU_CPP_BUILTINS): Remove TARGET_GCN3.
      023641d9
    • Andrew Stubbs's avatar
      amdgcn: remove gfx803 "Fiji" support · 57af0022
      Andrew Stubbs authored
      The gfx803 "Fiji" device was deprecated in GCC 14, removed from LLVM 18, and
      hasn't worked properly with the drivers since about ROCm 4.
      
      This patch removes the device from GCC options and documentation, and removes
      the direct mentions from the internals.
      
      The TARGET_GCN3 support in the back-end is now unused and can be removed (in a
      follow-up patch).
      
      gcc/ChangeLog:
      
      	* config.gcc (amdgcn-*-*): Remove "fiji" from with_arch checks.
      	* config/gcn/gcn-hsa.h (ABI_VERSION_SPEC): Remove fiji alternative.
      	(NO_XNACK): Likewise.
      	(NO_SRAM_ECC): Likewise.
      	(ASM_SPEC): Remove "%{}" around ABI_VERSION_SPEC.
      	* config/gcn/gcn-opts.h (enum processor_type): Remove PROCESSOR_FIJI.
      	(TARGET_FIJI): Delete.
      	* config/gcn/gcn.cc (gcn_option_override): Remove Fiji.
      	(gcn_omp_device_kind_arch_isa): Likewise.
      	(output_file_start): Likewise.
      	* config/gcn/gcn.h (TARGET_CPU_CPP_BUILTINS): Likewise.
      	* config/gcn/gcn.opt (gpu_type): Likewise.
      	(march, mtune): Change default to PROCESSOR_VEGA10.
      	* config/gcn/mkoffload.cc (EF_AMDGPU_MACH_AMDGCN_GFX803): Delete.
      	(copy_early_debug_info): Remove elf_flags_actual.
      	Use ELFABIVERSION_AMDGPU_HSA_V4 unconditionally.
      	(get_arch): Remove Fiji.
      	(main): Remove gfx803.
      	* config/gcn/t-omp-device
      	(omp-device-properties-gcn): Remove fiji and gfx803.
      	* doc/install.texi (amdgcn*-*-*): Remove fiji and special instructions.
      	* doc/invoke.texi: Remove fiji.
      
      libgomp/ChangeLog:
      
      	* libgomp.texi: Remove fiji and gfx803.
      	* testsuite/libgomp.c/declare-variant-4.h: Remove fiji and gfx803.
      	* testsuite/libgomp.c/declare-variant-4-fiji.c: Removed.
      	* testsuite/libgomp.c/declare-variant-4-gfx803.c: Removed.
      57af0022