  1. Jan 04, 2024
    • RISC-V: Make liveness estimation be aware of .vi variant · b1342247
      Juzhe-Zhong authored
      Consider the following case:
      
      void
      f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
      {
        for (int i = 0; i < n; i++)
          {
            int tmp = b[i] + 15;
            int tmp2 = tmp + b[i];
            c[i] = tmp2 + b[i];
            d[i] = tmp + tmp2 + b[i];
          }
      }
      
      The current dynamic LMUL cost model chooses LMUL = 4 because it counts the
      "15" as consuming one vector register group, which is not accurate.
      
      Teach the dynamic LMUL cost model about the potential .vi-variant instruction
      transformation, so that it can choose LMUL = 8 based on a more accurate cost
      model.
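      
      The idea, as a hedged sketch (not the actual GCC code): vadd.vi and
      friends take a 5-bit signed immediate, so a constant in [-16, 15] is
      folded into the instruction and never occupies a vector register group:
      
        /* Sketch only: a constant operand that fits the .vi form's 5-bit
           signed immediate should not be counted as a live register group.  */
        static bool
        fits_vi_immediate_p (HOST_WIDE_INT val)
        {
          return val >= -16 && val <= 15;
        }
      
      The "15" above qualifies, so one fewer register group is live and the
      cost model can afford LMUL = 8.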
      
      After this patch:
      
      f:
      	ble	a4,zero,.L5
      .L3:
      	vsetvli	a5,a4,e32,m8,ta,ma
      	slli	a0,a5,2
      	vle32.v	v16,0(a1)
      	vadd.vi	v24,v16,15
      	vadd.vv	v8,v24,v16
      	vadd.vv	v0,v8,v16
      	vse32.v	v0,0(a2)
      	vadd.vv	v8,v8,v24
      	vadd.vv	v8,v8,v16
      	vse32.v	v8,0(a3)
      	add	a1,a1,a0
      	add	a2,a2,a0
      	add	a3,a3,a0
      	sub	a4,a4,a5
      	bne	a4,zero,.L3
      .L5:
      	ret
      
      Tested on both RV32 and RV64 with no regressions.  Ok for trunk?
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc (variable_vectorized_p): Teach vi variant.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-13.c: New test.
    • RISC-V: Fix misaligned stack offset for interrupt function · 73a4f67b
      Kito Cheng authored
      The `interrupt` function backs up the fcsr register, but the save was fixed
      to SImode.  That is not a big issue since fcsr only uses 8 bits so far;
      however, the offset should still use UNITS_PER_WORD to prevent the stack
      offset from becoming non-8-byte-aligned, which would cause problems on RV64.
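      
      A hedged sketch of the fix's shape (simplified; the real code is in
      riscv_for_each_saved_reg):
      
        /* Sketch only: fcsr is still saved in SImode, but the slot must
           advance by UNITS_PER_WORD (8 on RV64), not
           GET_MODE_SIZE (SImode) == 4, so later offsets stay aligned.  */
        offset -= UNITS_PER_WORD;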
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv.cc (riscv_for_each_saved_reg): Adjust the
      	offset of fcsr.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/riscv/interrupt-misaligned.c: New.
    • LoongArch: testsuite: Add loongarch to gcc.dg/vect/slp-26.c. · 15053a3e
      chenxiaolong authored
      On the LoongArch architecture, GCC supports the vectorization exercised by
      vect/slp-26.c, but loongarch was missing from the dg-final checks.  Add
      loongarch to the appropriate dg-finals.
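      
      For illustration only, the change has this shape (the dump name, count and
      selectors here are hypothetical, not the test's exact directives):
      
        /* { dg-final { scan-tree-dump-times "vectorizing stmts using SLP" 2
             "vect" { target { aarch64*-*-* || loongarch*-*-* } } } } */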
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/slp-26.c: Add loongarch.
    • RISC-V: Refine LMUL computation for MASK_LEN_LOAD/MASK_LEN_STORE IFN · 83869ff4
      Juzhe-Zhong authored
      A case was observed reporting "Maximum lmul = 16", which is incorrect.
      Correct the LMUL estimation for MASK_LEN_LOAD/MASK_LEN_STORE.
      
      Committed.
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc (variable_vectorized_p): New function.
      	(compute_nregs_for_mode): Refine LMUL.
      	(max_number_of_live_regs): Ditto.
      	(compute_estimated_lmul): Ditto.
      	(has_unexpected_spills_p): Ditto.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-11.c: New test.
    • LoongArch: testsuite: Fix FAIL in lasx-xvstelm.c file. · 49b2387b
      chenxiaolong authored
      After the cost model was implemented on the LoongArch architecture, GCC
      enables it by default, which causes the lasx-xvstelm.c test to fail.
      Analysis shows that this test case only generates the vector instructions
      it checks for once the cost model is disabled with the
      "-fno-vect-cost-model" compilation option.
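      
      The adjusted test header then looks roughly like this (the flags other
      than -fno-vect-cost-model are illustrative):
      
        /* { dg-options "-O3 -mlasx -fno-vect-cost-model" } */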
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/loongarch/vector/lasx/lasx-xvstelm.c: Add the compile
      	option "-fno-vect-cost-model" to dg-options.
    • LoongArch: Merge constant vector permutation implementations. · cb666ded
      Li Wei authored
      There are currently two implementations of constant vector permutation,
      loongarch_expand_vec_perm_const_1 and loongarch_expand_vec_perm_const_2,
      and they differ.  Currently, only loongarch_expand_vec_perm_const_1 is
      used for 256-bit vectors.  We want to streamline the code as much as
      possible while retaining the better-performing of the two implementations.
      By repeatedly testing SPEC2006 and SPEC2017, we arrived at the merged
      version below.  Compared with the pre-merge version, the number of lines
      of code in loongarch.cc is reduced by 888.  At the same time, SPECint2006
      performance under -Ofast improves by 0.97%, and SPEC2017 fprate improves
      by 0.27%.
      
      gcc/ChangeLog:
      
      	* config/loongarch/loongarch.cc (loongarch_is_odd_extraction):
      	Remove useless forward declaration.
      	(loongarch_is_even_extraction): Remove useless forward declaration.
      	(loongarch_try_expand_lsx_vshuf_const): Removed.
      	(loongarch_expand_vec_perm_const_1): Merged.
      	(loongarch_is_double_duplicate): Removed.
      	(loongarch_is_center_extraction): Ditto.
      	(loongarch_is_reversing_permutation): Ditto.
      	(loongarch_is_di_misalign_extract): Ditto.
      	(loongarch_is_si_misalign_extract): Ditto.
      	(loongarch_is_lasx_lowpart_extract): Ditto.
      	(loongarch_is_op_reverse_perm): Ditto.
      	(loongarch_is_single_op_perm): Ditto.
      	(loongarch_is_divisible_perm): Ditto.
      	(loongarch_is_triple_stride_extract): Ditto.
      	(loongarch_expand_vec_perm_const_2): Merged.
      	(loongarch_expand_vec_perm_const): New.
      	(loongarch_vectorize_vec_perm_const): Adjust.
    • OpenMP: trivial cleanups to omp-general.cc · ab80d838
      Sandra Loosemore authored
      gcc/ChangeLog
      	* omp-general.cc: Fix comment typos and misplaced/confusing
      	comments.  Delete redundant include of omp-general.h.
    • MIPS/testsuite: Include stdio.h in mipscop tests · 345da368
      YunQiang Su authored
      gcc/testsuite
      
      	* gcc.c-torture/compile/mipscop-1.c: Include stdio.h.
      	* gcc.c-torture/compile/mipscop-2.c: Ditto.
      	* gcc.c-torture/compile/mipscop-3.c: Ditto.
      	* gcc.c-torture/compile/mipscop-4.c: Ditto.
    • MIPS: Add pattern insqisi_extended and inshisi_extended · 65d4b32d
      YunQiang Su authored
      This match pattern allows combining a (zero_extract:DI 8, 24, QI) with a
      sign-extend into a 32-bit INS instruction on TARGET_64BIT.
      
      For SImode, if the sign bit is modified by bit operations, a sign-extend
      operation is needed.  The 32-bit INS instruction guarantees a sign-extended
      result, and the QImode source register is safe for INS, too.
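      
      A hedged example of C source that yields such RTL (made-up names; the
      committed test is gcc.target/mips/pr104914.c):
      
        /* Insert byte 3 of BUF into bits 24..31 of VAL.  On TARGET_64BIT this
           becomes one INS on the 32-bit view, whose result is known to be
           sign-extended.  */
        int
        insert_top_byte (int val, const unsigned char *buf)
        {
          return (val & 0x00ffffff) | ((int) buf[3] << 24);
        }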
      
      (insn 19 18 20 2 (set (zero_extract:DI (reg/v:DI 200 [ val ])
                  (const_int 8 [0x8])
                  (const_int 24 [0x18]))
              (subreg:DI (reg:QI 205) 0)) "../xx.c":7:29 -1
           (nil))
      (insn 20 19 23 2 (set (reg/v:DI 200 [ val ])
              (sign_extend:DI (subreg:SI (reg/v:DI 200 [ val ]) 0))) "../xx.c":7:29 -1
           (nil))
      
      Combine tries to merge them into:
      
      (insn 20 19 23 2 (set (reg/v:DI 200 [ val ])
              (sign_extend:DI (ior:SI (and:SI (subreg:SI (reg/v:DI 200 [ val ]) 0)
                          (const_int 16777215 [0xffffff]))
                      (ashift:SI (subreg:SI (reg:QI 205 [ MEM[(const unsigned char *)buf_8(D) + 3B] ]) 0)
                          (const_int 24 [0x18]))))) "../xx.c":7:29 18 {*insv_extended}
           (expr_list:REG_DEAD (reg:QI 205 [ MEM[(const unsigned char *)buf_8(D) + 3B] ])
              (nil)))
      
      And similarly for the 16/16 pair:
      (insn 13 12 14 2 (set (zero_extract:DI (reg/v:DI 198 [ val ])
                  (const_int 16 [0x10])
                  (const_int 16 [0x10]))
              (subreg:DI (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ]) 0)) "xx.c":5:30 286 {*insvdi}
           (expr_list:REG_DEAD (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ])
              (nil)))
      (insn 14 13 17 2 (set (reg/v:DI 198 [ val ])
              (sign_extend:DI (subreg:SI (reg/v:DI 198 [ val ]) 0))) "xx.c":5:30 241 {extendsidi2}
           (nil))
      ------------>
      (insn 14 13 17 2 (set (reg/v:DI 198 [ val ])
              (sign_extend:DI (ior:SI (ashift:SI (subreg:SI (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ]) 0)
                          (const_int 16 [0x10]))
                      (zero_extend:SI (subreg:HI (reg/v:DI 198 [ val ]) 0))))) "xx.c":5:30 284 {*inshisi_extended}
           (expr_list:REG_DEAD (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ])
              (nil)))
      
      Let's accept these patterns, and set their cost to one instruction.
      
      gcc
      
      	PR rtl-optimization/104914
      	* config/mips/mips.md (insqisi_extended): New patterns.
      	(inshisi_extended): Ditto.
      
      gcc/testsuite
      
      	* gcc.target/mips/pr104914.c: New test.
    • MIPS: Implement TARGET_INSN_COSTS · 9876d50e
      YunQiang Su authored
      When combining instructions, the generic `rtx_cost` may overestimate the
      cost of the resulting RTL: the RTL may be quite complex, and `rtx_cost`
      has no information that it can be converted to simple hardware
      instruction(s).
      
      In this case, let's use `insn_count * perf_ratio` to estimate the cost
      when both are available, and otherwise fall back to `pattern_cost`.
      
      When not optimizing for speed, let's use the length as the cost.
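      
      A hedged sketch of the hook's shape (simplified from the committed
      mips_insn_cost):
      
        /* Sketch only: prefer insn_count * perf_ratio when both attributes
           are available, else fall back to the generic estimates.  */
        static int
        mips_insn_cost (rtx_insn *insn, bool speed)
        {
          if (!speed)
            return get_attr_length (insn);        /* Size: use the length.  */
          int ratio = get_attr_perf_ratio (insn); /* 0 means "not set".  */
          if (ratio > 0)
            return get_attr_insn_count (insn) * ratio;
          return pattern_cost (PATTERN (insn), speed);
        }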
      
      gcc
      
      	* config/mips/mips.cc (mips_insn_cost): New function.
      
      gcc/testsuite
      
      	* gcc.target/mips/data-sym-multi-pool.c: Skip Os or -O0.
    • MIPS: define_attr perf_ratio in mips.md · ffdbb8e0
      YunQiang Su authored
      The accurate cost of a pattern can be obtained with
      	 insn_count * perf_ratio
      
      The default value is set to 0 instead of 1, since we need to distinguish
      the default from a value really set for a pattern.  As it is not yet set
      for most patterns, a user must check that its value is greater than 0
      before using it.
      
      This attribute will be used in `mips_insn_cost`.
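      
      A minimal sketch of the definition, assuming the usual define_attr syntax
      for a numeric attribute:
      
        ;; Sketch: numeric attribute whose default 0 means "not set".
        (define_attr "perf_ratio" "" (const_int 0))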
      
      gcc
      
      	* config/mips/mips.md (perf_ratio): New attribute.
    • RISC-V: Fix bug of earliest fusion for infinite loop [VSETVL PASS] · 4a0a8dc1
      Juzhe-Zhong authored
      As shown in PR113206 and PR113209, the bug happens in the following situation:
      
              li      a4,32
      	...
      	vsetvli zero,a4,e8,m8,ta,ma
      	...
              slliw   a4,a3,24
              sraiw   a4,a4,24
              bge     a3,a1,.L8
              sb      a4,%lo(e)(a0)
              vsetvli zero,a4,e8,m8,ta,ma  --> a4 is polluted value not the expected "32".
      	...
      .L7:
              j       .L7 ---> infinite loop.
      
      The root cause is that the infinite loop confuses the earliest computation
      and lets earliest fusion happen in an unexpected place.
      
      Disable blocks that belong to an infinite loop to fix this bug, since
      applying earliest LCM fusion on an infinite loop seems quite complicated
      and we don't see any benefit.
      
      Note that disabling earliest fusion on infinite loops doesn't hurt vsetvli
      performance; instead, it improves codegen in some cases.
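      
      A hedged sketch of the check's shape (the fake edges come from connecting
      infinite loops to the exit block; the real invalid_opt_bb_p may differ in
      detail):
      
        /* Sketch only: treat ENTRY/EXIT and blocks that reach the exit only
           through a fake edge, i.e. infinite loops, as invalid for the
           earliest fusion.  */
        static bool
        invalid_opt_bb_p (basic_block cfg_bb)
        {
          if (cfg_bb->index == ENTRY_BLOCK || cfg_bb->index == EXIT_BLOCK)
            return true;
          edge e;
          edge_iterator ei;
          FOR_EACH_EDGE (e, ei, cfg_bb->succs)
            if (e->flags & EDGE_FAKE)
              return true;
          return false;
        }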
      
      Tested on both RV32 and RV64 with no regressions.
      
      	PR target/113206
      	PR target/113209
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vsetvl.cc (invalid_opt_bb_p): New function.
      	(pre_vsetvl::compute_lcm_local_properties): Disable earliest fusion on
      	blocks belonging to an infinite loop.
      	(pre_vsetvl::emit_vsetvl): Remove fake edges.
      	* config/riscv/t-riscv: Add a new include file.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/riscv/rvv/vsetvl/avl_single-23.c: Adapt test.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_call-1.c: Robustify test.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_call-2.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_call-3.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_conflict-5.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-1.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-2.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-3.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-4.c: Ditto.
      	* gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-5.c: Ditto.
      	* gcc.target/riscv/rvv/autovec/pr113206-1.c: New test.
      	* gcc.target/riscv/rvv/autovec/pr113206-2.c: New test.
      	* gcc.target/riscv/rvv/autovec/pr113209.c: New test.
    • RISC-V: Fix indent · 97c1f176
      Juzhe-Zhong authored
      Fix the indentation of some code so that it aligns at 8 spaces.
      
      Committed.
      
      gcc/ChangeLog:
      
      	* config/riscv/vector.md: Fix indent.
    • Daily bump. · eb84e8d3
      GCC Administrator authored
  2. Jan 03, 2024
    • c++: bad direct reference binding via conv fn [PR113064] · 1c522c9e
      Patrick Palka authored
      When computing a direct reference binding via a conversion function
      yields a bad conversion, reference_binding incorrectly commits to that
      conversion instead of trying a conversion via a temporary.  This causes
      us to reject the first testcase because the bad direct conversion to B&&
      via the && conversion operator prevents us from considering the good
      conversion via the & conversion operator and a temporary.  (Similar
      story for the second testcase.)
      
      This patch fixes this by making reference_binding not prematurely commit
      to such a bad direct conversion.  We still fall back to it if using a
      temporary also fails (otherwise the diagnostic for cpp0x/explicit7.C
      regresses).
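      
      A hedged illustration of the shape of the problem (not the PR's actual
      testcases, which are rv-conv4.C and rv-conv5.C):
      
        struct B {};
        struct A {
          operator B &();  // Direct binding of B&& to this lvalue is bad.
          operator B ();   // Binding to a temporary of this prvalue is fine.
        };
        B &&r = A ();      // Before: committed to the bad direct binding;
                           // after: uses the temporary instead.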
      
      	PR c++/113064
      
      gcc/cp/ChangeLog:
      
      	* call.cc (reference_binding): Still try a conversion via a
      	temporary if a direct conversion was bad.
      
      gcc/testsuite/ChangeLog:
      
      	* g++.dg/cpp0x/rv-conv4.C: New test.
      	* g++.dg/cpp0x/rv-conv5.C: New test.
    • Fortran: fix FE memleak · 93c96e3a
      Harald Anlauf authored
      gcc/fortran/ChangeLog:
      
      	* trans-types.cc (gfc_get_nodesc_array_type): Clear used gmp
      	variables.
    • openmp: Adjust position of OMP_CLAUSE_INDIRECT in OpenMP clauses · a56a693a
      Kwok Cheung Yeung authored
      Move OMP_CLAUSE_INDIRECT so that it is outside of the range checked by
      OMP_CLAUSE_SIZE and OMP_CLAUSE_DECL.
      
      2024-01-03  Kwok Cheung Yeung  <kcy@codesourcery.com>
      
      	gcc/c/
      	* c-parser.cc (c_parser_omp_clause_name): Move handling of indirect
      	clause to correspond to alphabetical order.
      
      	gcc/cp/
      	* parser.cc (cp_parser_omp_clause_name): Move handling of indirect
      	clause to correspond to alphabetical order.
      
      	gcc/
      	* tree-core.h (enum omp_clause_code): Move OMP_CLAUSE_INDIRECT to before
      	OMP_CLAUSE__SIMDUID_.
      	* tree.cc (omp_clause_num_ops): Update position of entry for
      	OMP_CLAUSE_INDIRECT to correspond with omp_clause_code.
      	(omp_clause_code_name): Likewise.
    • nvptx: Restructure code generating function map labels · 6ae84729
      Kwok Cheung Yeung authored
      This restructures the code generating FUNC_MAP and IND_FUNC_MAP labels
      in the assembly code for mkoffload to consume, hopefully making it a
      bit clearer and easier to search for.
      
      2024-01-03  Kwok Cheung Yeung  <kcy@codesourcery.com>
      
      	gcc/
      	* config/nvptx/nvptx.cc (nvptx_record_offload_symbol): Restructure
      	printing of FUNC_MAP/IND_FUNC_MAP labels.
    • Update copyright years. · a945c346
      Jakub Jelinek authored
    • Small tweaks for update-copyright.py · 9afc1915
      Jakub Jelinek authored
      update-copyright.py --this-year FAILs on two spots in the modula2
      directories.
      One is gpl_v3_without_node.texi; I think that is similar to other
      license files which we already exclude from updates.
      And the other is GmcOptions.cc, which has lines like
        mcPrintf_printf0 ((const char *) "Copyright ", 10);
        mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
        mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
      which update-copyright.py obviously can't grok.  The file is generated
      and doesn't contain a normal copyright year that should be updated, so I
      think it is also ok to skip it.
      
      2024-01-03  Jakub Jelinek  <jakub@redhat.com>
      
      	* update-copyright.py (GenericFilter): Skip gpl_v3_without_node.texi.
      	(GCCFilter): Skip GmcOptions.cc.
    • Update copyright dates. · 4e053a7e
      Jakub Jelinek authored
      Manual part of copyright year updates.
      
      2024-01-03  Jakub Jelinek  <jakub@redhat.com>
      
      gcc/
      	* gcc.cc (process_command): Update copyright notice dates.
      	* gcov-dump.cc (print_version): Ditto.
      	* gcov.cc (print_version): Ditto.
      	* gcov-tool.cc (print_version): Ditto.
      	* gengtype.cc (create_file): Ditto.
      	* doc/cpp.texi: Bump @copying's copyright year.
      	* doc/cppinternals.texi: Ditto.
      	* doc/gcc.texi: Ditto.
      	* doc/gccint.texi: Ditto.
      	* doc/gcov.texi: Ditto.
      	* doc/install.texi: Ditto.
      	* doc/invoke.texi: Ditto.
      gcc/ada/
      	* gnat_ugn.texi: Bump @copying's copyright year.
      	* gnat_rm.texi: Likewise.
      gcc/d/
      	* gdc.texi: Bump @copyrights-d year.
      gcc/fortran/
      	* gfortranspec.cc (lang_specific_driver): Update copyright notice
      	dates.
      	* gfc-internals.texi: Bump @copying's copyright year.
      	* gfortran.texi: Ditto.
      	* intrinsic.texi: Ditto.
      	* invoke.texi: Ditto.
      gcc/go/
      	* gccgo.texi: Bump @copyrights-go year.
      libgomp/
      	* libgomp.texi: Bump @copying's copyright year.
      libitm/
      	* libitm.texi: Bump @copying's copyright year.
      libquadmath/
      	* libquadmath.texi: Bump @copying's copyright year.
    • Update Copyright year in ChangeLog files · 6a720d41
      Jakub Jelinek authored
      2023 -> 2024
    • Rotate ChangeLog files. · 8c22aed4
      Jakub Jelinek authored
      Rotate ChangeLog files for ChangeLogs with yearly cadence.
    • LoongArch: Provide fmin/fmax RTL pattern for vectors · 87acfc36
      Xi Ruoyao authored
      We already had smin/smax RTL patterns using the vfmin/vfmax instructions.
      But for smin/smax it is unspecified what happens if either operand
      contains NaNs, so we would not vectorize the loop with
      -fno-finite-math-only (the default for all optimization levels except
      -Ofast).
      
      But the LoongArch vfmin/vfmax instructions are IEEE-754-2008 conformant,
      so we can also use them for fmin/fmax and vectorize the loop.
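      
      For illustration (a hypothetical function, not from the testsuite), a
      loop of this shape can now be vectorized without -ffinite-math-only:
      
        /* fmax maps to the IEEE-754-2008-conformant vfmax, NaNs and all.  */
        void
        vec_fmax (double *restrict r, const double *restrict a,
                  const double *restrict b, int n)
        {
          for (int i = 0; i < n; i++)
            r[i] = __builtin_fmax (a[i], b[i]);
        }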
      
      gcc/ChangeLog:
      
      	* config/loongarch/simd.md (fmax<mode>3): New define_insn.
      	(fmin<mode>3): Likewise.
      	(reduc_fmax_scal_<mode>3): New define_expand.
      	(reduc_fmin_scal_<mode>3): Likewise.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/loongarch/vfmax-vfmin.c: New test.
    • RISC-V: Make liveness be aware of rgroup number of LENS [dynamic LMUL] · a43bd825
      Juzhe-Zhong authored
      This patch fixes the following situation:
      vl4re16.v       v12,0(a5)
      ...
      vl4re16.v       v16,0(a3)
      vs4r.v  v12,0(a5)
      ...
      vl4re16.v       v4,0(a0)
      vs4r.v  v16,0(a3)
      ...
      vsetvli a3,zero,e16,m4,ta,ma
      ...
      vmv.v.x v8,t6
      vmsgeu.vv       v2,v16,v8
      vsub.vv v16,v16,v8
      vs4r.v  v16,0(a5)
      ...
      vs4r.v  v4,0(a0)
      vmsgeu.vv       v1,v4,v8
      ...
      vsub.vv v4,v4,v8
      slli    a6,a4,2
      vs4r.v  v4,0(a5)
      ...
      vsub.vv v4,v12,v8
      vmsgeu.vv       v3,v12,v8
      vs4r.v  v4,0(a5)
      ...
      
      There are many spills, the 'vs4r.v' instructions.  The root cause is that
      we don't count vector register liveness with reference to the rgroup
      controls.
      
      _29 = _25->iatom[0]; is transformed into the following vect statements
      with 4 different loop_len values (loop_len_74, loop_len_75, loop_len_76,
      loop_len_77).
      
        vect__29.11_78 = .MASK_LEN_LOAD (vectp_sb.9_72, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_74, 0);
        vect__29.12_80 = .MASK_LEN_LOAD (vectp_sb.9_79, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_75, 0);
        vect__29.13_82 = .MASK_LEN_LOAD (vectp_sb.9_81, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_76, 0);
        vect__29.14_84 = .MASK_LEN_LOAD (vectp_sb.9_83, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_77, 0);
      
      That is, there are LOOP_VINFO_LENS (loop_vinfo).length () of them.
      
      Count liveness according to LOOP_VINFO_LENS (loop_vinfo).length () to
      compute it more accurately:
      
      vsetivli	zero,8,e16,m1,ta,ma
      vmsgeu.vi	v19,v14,8
      vadd.vi	v18,v14,-8
      vmsgeu.vi	v17,v1,8
      vadd.vi	v16,v1,-8
      vlm.v	v15,0(a5)
      ...
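      
      A hedged sketch of the adjustment (simplified; the real change is in
      compute_nregs_for_mode):
      
        /* Sketch only: each rgroup control (loop_len_74..77 above) keeps its
           own vector values live, so scale the register estimate by the
           number of rgroups.  */
        unsigned int nlens = LOOP_VINFO_LENS (loop_vinfo).length ();
        return nregs * MAX (1U, nlens);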
      
      Tested with no regressions, ok for trunk?
      
      	PR target/113112
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Add rgroup info.
      	(max_number_of_live_regs): Ditto.
      	(has_unexpected_spills_p): Ditto.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-5.c: New test.
    • libstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175] · a138b996
      Patrick Palka authored
      The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
      inadvertently increased the execution time of this test by over 5x due
      to making the two main loops actually run in the signed_p case instead
      of being dead code.
      
      To compensate, this patch cuts the relevant loops' range [-1000,1000] by
      10x as proposed in the PR.  This shouldn't significantly weaken the test
      since the same important edge cases are still checked in the smaller range
      and/or elsewhere.  On my machine this reduces the test's execution time by
      roughly 10x (and 1.6x relative to before r14-205).
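      
      The change's shape, as a hedged sketch (the exact declarations in
      max_size_type.cc may differ):
      
        constexpr int limit = 100;     // was 1000
        constexpr int log2_limit = 7;  // was 10; 2^7 = 128 >= 100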
      
      	PR testsuite/113175
      
      libstdc++-v3/ChangeLog:
      
      	* testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
      	'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
      	(test03): Likewise.
    • Daily bump. · 45c807b7
      GCC Administrator authored
  3. Jan 02, 2024
    • RISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns · 152cd65b
      Jun Sha (Joshua) authored
      
      This patch replaces csr_operand by vector_length_operand in the vsetvl
      patterns.  This allows future changes in the vector code (i.e. in the
      vector_length_operand predicate) without affecting scalar patterns that
      use the csr_operand predicate.
      
      gcc/ChangeLog:
      
      	* config/riscv/vector.md:
      	Use vector_length_operand for vsetvl patterns.
      
      Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
      Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
      Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
    • libsanitizer: Enable LSan and TSan for riscv64 · ae11ee8f
      Andreas Schwab authored
      libsanitizer:
      	* configure.tgt (riscv64-*-linux*): Enable LSan and TSan.
    • aarch64: fortran: Adjust vect-8.f90 for libmvec · 046cea56
      Szabolcs Nagy authored
      With a new-enough glibc, one more loop can be vectorized via the SIMD exp
      in libmvec.
      
      Found by the Linaro TCWG CI.
      
      gcc/testsuite/ChangeLog:
      
      	* gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.
    • RISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern · 76f069fe
      Juzhe-Zhong authored
      In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
      I optimized the COND_LEN_xxx patterns with dummy len and dummy mask using
      too simple a solution, which causes a redundant vsetvli in the following
      case:
      
      	vsetvli	a5,a2,e8,m1,ta,ma
      	vle32.v	v8,0(a0)
      	vsetivli	zero,16,e32,m4,tu,mu   ----> We should apply VLMAX instead of a CONST_INT AVL
      	slli	a4,a5,2
      	vand.vv	v0,v8,v16
      	vand.vv	v4,v8,v12
      	vmseq.vi	v0,v0,0
      	sub	a2,a2,a5
      	vneg.v	v4,v8,v0.t
      	vsetvli	zero,a5,e32,m4,ta,ma
      
      The root cause above is the following code:
      
      is_vlmax_len_p (...)
         return poly_int_rtx_p (len, &value)
              && known_eq (value, GET_MODE_NUNITS (mode))
              && !satisfies_constraint_K (len);            ---> incorrect check.
      
      Actually, we should not elide the VLMAX situation whose AVL is in the range [0, 31].
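      
      A hedged sketch of the corrected predicate, matching the ChangeLog entry
      that removes the satisfies_constraint_K check:
      
        /* Sketch only: LEN is VLMAX whenever it equals the mode's number of
           units, even when it also happens to fit constraint K ([0, 31]).  */
        static bool
        is_vlmax_len_p (machine_mode mode, rtx len)
        {
          poly_int64 value;
          return poly_int_rtx_p (len, &value)
                 && known_eq (value, GET_MODE_NUNITS (mode));
        }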
      
      After removing that check, we will have the following issue:
      
              vsetivli        zero,4,e32,m1,ta,ma
              vlseg4e32.v     v4,(a5)
              vlseg4e32.v     v12,(a3)
              vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
              vfadd.vf        v3,v13,fa0
              vfadd.vf        v1,v12,fa1
              vfmul.vv        v17,v3,v5
              vfmul.vv        v16,v1,v5
      
      Since all the following operations (vfadd.vf, etc.) are COND_LEN_xxx with
      a dummy len and a dummy mask, we add the simplification of dummy len and
      dummy mask into the VLMAX TA and MA policy.
      
      So, after this patch, both cases now get optimal codegen:
      
      case 1:
      	vsetvli	a5,a2,e32,m1,ta,mu
      	vle32.v	v2,0(a0)
      	slli	a4,a5,2
      	vand.vv	v1,v2,v3
      	vand.vv	v0,v2,v4
      	sub	a2,a2,a5
      	vmseq.vi	v0,v0,0
      	vneg.v	v1,v2,v0.t
      	vse32.v	v1,0(a1)
      
      case 2:
      	vsetivli zero,4,e32,m1,tu,ma
      	addi a4,a5,400
      	vlseg4e32.v v12,(a3)
      	vfadd.vf v3,v13,fa0
      	vfadd.vf v1,v12,fa1
      	vlseg4e32.v v4,(a4)
      	vfadd.vf v2,v14,fa1
      	vfmul.vv v17,v3,v5
      	vfmul.vv v16,v1,v5
      
      This patch is just an additional fix on top of the previously approved
      patch.  Tested on both RV32 and RV64 with newlib, no regressions.
      Committed.
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-v.cc (is_vlmax_len_p): Remove satisfies_constraint_K.
      	(expand_cond_len_op): Add simplification of dummy len and dummy mask.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/riscv/rvv/base/vf_avl-3.c: New test.
    • aarch64: add 'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA' · b041bd4e
      Di Zhao authored
      This patch adds a new tuning option
      'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully
      pipelined FMAs in reassociation. Also, set this option by default
      for Ampere CPUs.
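      
      A hedged sketch of how the flag takes effect (simplified; the real code
      is in aarch64_override_options_internal):
      
        /* Sketch only: mark FMAs as fully pipelined for reassociation unless
           the user already set the param explicitly.  */
        if (aarch64_tune_params.extra_tuning_flags
            & AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA)
          SET_OPTION_IF_UNSET (opts, &global_options_set,
                               param_fully_pipelined_fma, 1);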
      
      gcc/ChangeLog:
      
      	* config/aarch64/aarch64-tuning-flags.def
      	(AARCH64_EXTRA_TUNING_OPTION): New tuning option
      	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
      	* config/aarch64/aarch64.cc
      	(aarch64_override_options_internal): Set
      	param_fully_pipelined_fma according to tuning option.
      	* config/aarch64/tuning_models/ampere1.h: Add
      	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
      	* config/aarch64/tuning_models/ampere1a.h: Likewise.
      	* config/aarch64/tuning_models/ampere1b.h: Likewise.
    • RISC-V: Modify copyright year of vector-crypto.md · 6be6305f
      Feng Wang authored
      gcc/ChangeLog:
      	* config/riscv/vector-crypto.md: Modify copyright year.
    • RISC-V: Declare STMT_VINFO_TYPE (...) as local variable · d2e40f28
      Juzhe-Zhong authored
      Committed.
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...) to local.
    • LoongArch: Added TLS Le Relax support. · 3c20e626
      Lulu Cheng authored
      Check whether the assembler supports TLS LE relax.  If it does, the TLS
      LE relax assembly sequence is generated by default.
      
      The original way to obtain the TLS LE symbol address:
          lu12i.w $rd, %le_hi20(sym)
          ori $rd, $rd, %le_lo12(sym)
          add.{w/d} $rd, $rd, $tp
      
      If the assembler supports tls le relax, the following sequence is generated:
      
          lu12i.w $rd, %le_hi20_r(sym)
          add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
          addi.{w/d} $rd,$rd,%le_lo12_r(sym)
      
      gcc/ChangeLog:
      
      	* config.in: Regenerate.
      	* config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION): Define.
      	* config/loongarch/loongarch.cc (loongarch_legitimize_tls_address):
      	Added TLS Le Relax support.
      	(loongarch_print_operand_reloc): Add the output string of TLS Le Relax.
      	* config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New template.
      	* configure: Regenerate.
      	* configure.ac: Check if binutils supports TLS le relax.
      
      gcc/testsuite/ChangeLog:
      
      	* lib/target-supports.exp: Add a function to check whether binutils
      	supports TLS LE relax.
      	* gcc.target/loongarch/tls-le-relax.c: New test.
    • RISC-V: Add crypto machine descriptions · d3d6a96d
      Feng Wang authored
      Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
      Co-authored-by: Ciyan Pan <panciyan@eswincomputing.com>
      
      gcc/ChangeLog:
      
      	* config/riscv/iterators.md: Add rotate insn name.
      	* config/riscv/riscv.md: Add new insns name for crypto vector.
      	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
      	* config/riscv/vector.md: Add the corresponding attr for crypto vector.
      	* config/riscv/vector-crypto.md: New file.  The machine descriptions
      	for crypto vector.
    • RISC-V: Count pointer type SSA into RVV regs liveness for dynamic LMUL cost model · 9a29b003
      Juzhe-Zhong authored
      This patch fixes the following case of choosing an unexpectedly big LMUL,
      which caused register spills.
      
      Before this patch, choosing LMUL = 4:
      
      	addi	sp,sp,-160
      	addiw	t1,a2,-1
      	li	a5,7
      	bleu	t1,a5,.L16
      	vsetivli	zero,8,e64,m4,ta,ma
      	vmv.v.x	v4,a0
      	vs4r.v	v4,0(sp)                        ---> spill to the stack.
      	vmv.v.x	v4,a1
      	addi	a5,sp,64
      	vs4r.v	v4,0(a5)                        ---> spill to the stack.
      
      The root cause is the following code:
      
                        if (poly_int_tree_p (var)
                            || (is_gimple_val (var)
                               && !POINTER_TYPE_P (TREE_TYPE (var))))
      
      We count the variable as consuming an RVV register group when it is not
      POINTER_TYPE.
      
      That is right for load/store statements, for example:
      
      _1 = (MEM)*addr -->  addr won't be allocated an RVV vector group.
      
      However, we find it is not right for non-load/store statements:
      
      _3 = _1 == x_8(D);
      
      _1 has pointer type too, but we do allocate an RVV register group for it.
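      
      A hedged sketch of the adjusted check (the helper names are hypothetical;
      the real change is in compute_nregs_for_mode):
      
        /* Sketch only: a pointer-type var may skip the count only when the
           statement merely uses it as a load/store address.  */
        if (poly_int_tree_p (var)
            || (is_gimple_val (var)
                && (!POINTER_TYPE_P (TREE_TYPE (var))
                    || !load_store_address_p (stmt, var))))  /* hypothetical */
          count_live_vector_group (var);  /* hypothetical */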
      
      So after this patch, we choose the perfect LMUL for the testcase in this
      patch:
      
      	ble	a2,zero,.L17
      	addiw	a7,a2,-1
      	li	a5,3
      	bleu	a7,a5,.L15
      	srliw	a5,a7,2
      	slli	a6,a5,1
      	add	a6,a6,a5
      	lui	a5,%hi(replacements)
      	addi	t1,a5,%lo(replacements)
      	slli	a6,a6,5
      	lui	t4,%hi(.LANCHOR0)
      	lui	t3,%hi(.LANCHOR0+8)
      	lui	a3,%hi(.LANCHOR0+16)
      	lui	a4,%hi(.LC1)
      	vsetivli	zero,4,e16,mf2,ta,ma
      	addi	t4,t4,%lo(.LANCHOR0)
      	addi	t3,t3,%lo(.LANCHOR0+8)
      	addi	a3,a3,%lo(.LANCHOR0+16)
      	addi	a4,a4,%lo(.LC1)
      	add	a6,t1,a6
      	addi	a5,a5,%lo(replacements)
      	vle16.v	v18,0(t4)
      	vle16.v	v17,0(t3)
      	vle16.v	v16,0(a3)
      	vmsgeu.vi	v25,v18,4
      	vadd.vi	v24,v18,-4
      	vmsgeu.vi	v23,v17,4
      	vadd.vi	v22,v17,-4
      	vlm.v	v21,0(a4)
      	vmsgeu.vi	v20,v16,4
      	vadd.vi	v19,v16,-4
      	vsetvli	zero,zero,e64,m2,ta,mu
      	vmv.v.x	v12,a0
      	vmv.v.x	v14,a1
      .L4:
      	vlseg3e64.v	v6,(a5)
      	vmseq.vv	v2,v6,v12
      	vmseq.vv	v0,v8,v12
      	vmsne.vv	v1,v8,v12
      	vmand.mm	v1,v1,v2
      	vmerge.vvm	v2,v8,v14,v0
      	vmv1r.v	v0,v1
      	addi	a4,a5,24
      	vmerge.vvm	v6,v6,v14,v0
      	vmerge.vim	v2,v2,0,v0
      	vrgatherei16.vv	v4,v6,v18
      	vmv1r.v	v0,v25
      	vrgatherei16.vv	v4,v2,v24,v0.t
      	vs1r.v	v4,0(a5)
      	addi	a3,a5,48
      	vmv1r.v	v0,v21
      	vmv2r.v	v4,v2
      	vcompress.vm	v4,v6,v0
      	vs1r.v	v4,0(a4)
      	vmv1r.v	v0,v23
      	addi	a4,a5,72
      	vrgatherei16.vv	v4,v6,v17
      	vrgatherei16.vv	v4,v2,v22,v0.t
      	vs1r.v	v4,0(a3)
      	vmv1r.v	v0,v20
      	vrgatherei16.vv	v4,v6,v16
      	addi	a5,a5,96
      	vrgatherei16.vv	v4,v2,v19,v0.t
      	vs1r.v	v4,0(a4)
      	bne	a6,a5,.L4
      
      No spills, and no use of the "sp" register.
      
      Tested on both RV32 and RV64, no regressions.
      
      Ok for trunk?
      
      	PR target/113112
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Fix
      	pointer type liveness count.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-4.c: New test.
    • Daily bump. · c8170fe5
      GCC Administrator authored
  4. Jan 01, 2024
  5. Dec 31, 2023
    • i386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c · 79e1b23b
      Roger Sayle authored
      This patch resolves the failure of pr43644-2.c in the testsuite, a code
      quality test I added back in July, which started failing as the code GCC
      generates for 128-bit values (and their parameter passing) has been in
      flux.
      
      The function:
      
      unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
        return x+y;
      }
      
      currently generates:
      
      foo:    movq    %rdx, %rcx
              movq    %rdi, %rax
              movq    %rsi, %rdx
              addq    %rcx, %rax
              adcq    $0, %rdx
              ret
      
      and with this patch, we now generate:
      
      foo:	movq    %rdi, %rax
              addq    %rdx, %rax
              movq    %rsi, %rdx
              adcq    $0, %rdx
      
      which is optimal.
      
      2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
      	    Roger Sayle  <roger@nextmovesoftware.com>
      
      gcc/ChangeLog
      	PR target/43644
      	* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
      	order of instructions after split, to minimize number of moves.
      
      gcc/testsuite/ChangeLog
      	PR target/43644
      	* gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.