  1. Jan 03, 2024
• Small tweaks for update-copyright.py · 9afc1915
      Jakub Jelinek authored
      update-copyright.py --this-year FAILs on two spots in the modula2
      directories.
One is gpl_v3_without_node.texi; I think that is similar to the other
license files which we already exclude from updates.
      And the other is GmcOptions.cc, which has lines like
        mcPrintf_printf0 ((const char *) "Copyright ", 10);
        mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
        mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
which update-copyright.py obviously can't grok.  The file is generated
and doesn't contain a normal copyright year that should be updated, so I think
it is also ok to skip it.
      
      2024-01-03  Jakub Jelinek  <jakub@redhat.com>
      
      	* update-copyright.py (GenericFilter): Skip gpl_v3_without_node.texi.
      	(GCCFilter): Skip GmcOptions.cc.
• Update copyright dates. · 4e053a7e
      Jakub Jelinek authored
      Manual part of copyright year updates.
      
      2024-01-03  Jakub Jelinek  <jakub@redhat.com>
      
      gcc/
      	* gcc.cc (process_command): Update copyright notice dates.
      	* gcov-dump.cc (print_version): Ditto.
      	* gcov.cc (print_version): Ditto.
      	* gcov-tool.cc (print_version): Ditto.
      	* gengtype.cc (create_file): Ditto.
      	* doc/cpp.texi: Bump @copying's copyright year.
      	* doc/cppinternals.texi: Ditto.
      	* doc/gcc.texi: Ditto.
      	* doc/gccint.texi: Ditto.
      	* doc/gcov.texi: Ditto.
      	* doc/install.texi: Ditto.
      	* doc/invoke.texi: Ditto.
      gcc/ada/
      	* gnat_ugn.texi: Bump @copying's copyright year.
      	* gnat_rm.texi: Likewise.
      gcc/d/
      	* gdc.texi: Bump @copyrights-d year.
      gcc/fortran/
      	* gfortranspec.cc (lang_specific_driver): Update copyright notice
      	dates.
      	* gfc-internals.texi: Bump @copying's copyright year.
      	* gfortran.texi: Ditto.
      	* intrinsic.texi: Ditto.
      	* invoke.texi: Ditto.
      gcc/go/
      	* gccgo.texi: Bump @copyrights-go year.
      libgomp/
      	* libgomp.texi: Bump @copying's copyright year.
      libitm/
      	* libitm.texi: Bump @copying's copyright year.
      libquadmath/
      	* libquadmath.texi: Bump @copying's copyright year.
• Update Copyright year in ChangeLog files · 6a720d41
      Jakub Jelinek authored
      2023 -> 2024
• Rotate ChangeLog files. · 8c22aed4
      Jakub Jelinek authored
      Rotate ChangeLog files for ChangeLogs with yearly cadence.
• LoongArch: Provide fmin/fmax RTL pattern for vectors · 87acfc36
      Xi Ruoyao authored
We already had smin/smax RTL patterns using vfmin/vfmax instructions.
But for smin/smax, it's unspecified what will happen if either operand
is NaN.  So we would not vectorize the loop with -fno-finite-math-only
(the default for all optimization levels except -Ofast).
      
But the LoongArch vfmin/vfmax instructions are IEEE-754-2008 conformant, so we
can also use them and vectorize the loop.
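
To see why the IEEE-754-2008 semantics make this safe, here is a small
standalone example (mine, not part of the patch): fmin/fmax treat a quiet
NaN operand as missing data and return the other operand, which is exactly
what a conformant vfmin/vfmax computes per element.

    #include <cmath>
    #include <cstdio>

    int main ()
    {
      double x = std::nan (""), y = 1.0;
      /* IEEE-754-2008 minNum/maxNum return the non-NaN operand.  */
      std::printf ("%g %g\n", std::fmin (x, y), std::fmax (x, y)); /* 1 1 */
      return 0;
    }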
      
      gcc/ChangeLog:
      
      	* config/loongarch/simd.md (fmax<mode>3): New define_insn.
      	(fmin<mode>3): Likewise.
      	(reduc_fmax_scal_<mode>3): New define_expand.
      	(reduc_fmin_scal_<mode>3): Likewise.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/loongarch/vfmax-vfmin.c: New test.
• RISC-V: Make liveness be aware of rgroup number of LENS[dynamic LMUL] · a43bd825
      Juzhe-Zhong authored
      This patch fixes the following situation:
      vl4re16.v       v12,0(a5)
      ...
      vl4re16.v       v16,0(a3)
      vs4r.v  v12,0(a5)
      ...
      vl4re16.v       v4,0(a0)
      vs4r.v  v16,0(a3)
      ...
      vsetvli a3,zero,e16,m4,ta,ma
      ...
      vmv.v.x v8,t6
      vmsgeu.vv       v2,v16,v8
      vsub.vv v16,v16,v8
      vs4r.v  v16,0(a5)
      ...
      vs4r.v  v4,0(a0)
      vmsgeu.vv       v1,v4,v8
      ...
      vsub.vv v4,v4,v8
      slli    a6,a4,2
      vs4r.v  v4,0(a5)
      ...
      vsub.vv v4,v12,v8
      vmsgeu.vv       v3,v12,v8
      vs4r.v  v4,0(a5)
      ...
      
There are many 'vs4r.v' spills.  The root cause is that we don't count
vector register liveness with reference to the rgroup controls.
      
_29 = _25->iatom[0]; is transformed into the following vect statements with 4 different loop_len (loop_len_74, loop_len_75, loop_len_76, loop_len_77).
      
        vect__29.11_78 = .MASK_LEN_LOAD (vectp_sb.9_72, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_74, 0);
        vect__29.12_80 = .MASK_LEN_LOAD (vectp_sb.9_79, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_75, 0);
        vect__29.13_82 = .MASK_LEN_LOAD (vectp_sb.9_81, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_76, 0);
        vect__29.14_84 = .MASK_LEN_LOAD (vectp_sb.9_83, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_77, 0);
      
i.e., as many statements as the number of LENS (LOOP_VINFO_LENS (loop_vinfo).length ()).
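
As a toy illustration of the pressure effect (a standalone sketch with
made-up names, not the GCC code): each distinct loop_len keeps its own
copy of the vector value alive, so liveness scales with the number of
rgroup lengths.

    #include <cstdio>

    /* Hypothetical model: an LMUL=4 register group referenced through
       4 different loop_lens keeps 4 groups live at once.  */
    static unsigned
    live_vector_regs (unsigned nregs_per_group, unsigned n_rgroup_lens)
    {
      return nregs_per_group * n_rgroup_lens;
    }

    int main ()
    {
      /* 16 of the 32 RVV registers -- enough to force vs4r.v spills.  */
      std::printf ("%u\n", live_vector_regs (4, 4));
      return 0;
    }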
      
Counting liveness according to LOOP_VINFO_LENS (loop_vinfo).length () computes liveness more accurately:
      
      vsetivli	zero,8,e16,m1,ta,ma
      vmsgeu.vi	v19,v14,8
      vadd.vi	v18,v14,-8
      vmsgeu.vi	v17,v1,8
      vadd.vi	v16,v1,-8
      vlm.v	v15,0(a5)
      ...
      
Tested with no regressions.  OK for trunk?
      
      	PR target/113112
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Add rgroup info.
      	(max_number_of_live_regs): Ditto.
      	(has_unexpected_spills_p): Ditto.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-5.c: New test.
• libstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175] · a138b996
      Patrick Palka authored
      The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
      inadvertently increased the execution time of this test by over 5x due
      to making the two main loops actually run in the signed_p case instead
      of being dead code.
      
      To compensate, this patch cuts the relevant loops' range [-1000,1000] by
      10x as proposed in the PR.  This shouldn't significantly weaken the test
      since the same important edge cases are still checked in the smaller range
      and/or elsewhere.  On my machine this reduces the test's execution time by
      roughly 10x (and 1.6x relative to before r14-205).
      
      	PR testsuite/113175
      
      libstdc++-v3/ChangeLog:
      
      	* testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
      	'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
      	(test03): Likewise.
• Daily bump. · 45c807b7
      GCC Administrator authored
  2. Jan 02, 2024
• RISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns · 152cd65b
      Jun Sha (Joshua) authored
      
      This patch replaces csr_operand by vector_length_operand in the vsetvl
      patterns.  This allows future changes in the vector code (i.e. in the
      vector_length_operand predicate) without affecting scalar patterns that
      use the csr_operand predicate.
      
      gcc/ChangeLog:
      
      	* config/riscv/vector.md:
      	Use vector_length_operand for vsetvl patterns.
      
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
• libsanitizer: Enable LSan and TSan for riscv64 · ae11ee8f
      Andreas Schwab authored
      libsanitizer:
      	* configure.tgt (riscv64-*-linux*): Enable LSan and TSan.
• aarch64: fortran: Adjust vect-8.f90 for libmvec · 046cea56
      Szabolcs Nagy authored
With new glibc, one more loop can be vectorized via SIMD exp in libmvec.
      
      Found by the Linaro TCWG CI.
      
      gcc/testsuite/ChangeLog:
      
      	* gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.
• RISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern · 76f069fe
      Juzhe-Zhong authored
In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx pattern with dummy len and dummy mask using a too
simple solution, which causes a redundant vsetvli in the following case:
      
      	vsetvli	a5,a2,e8,m1,ta,ma
      	vle32.v	v8,0(a0)
      	vsetivli	zero,16,e32,m4,tu,mu   ----> We should apply VLMAX instead of a CONST_INT AVL
      	slli	a4,a5,2
      	vand.vv	v0,v8,v16
      	vand.vv	v4,v8,v12
      	vmseq.vi	v0,v0,0
      	sub	a2,a2,a5
      	vneg.v	v4,v8,v0.t
      	vsetvli	zero,a5,e32,m4,ta,ma
      
      The root cause above is the following codes:
      
      is_vlmax_len_p (...)
         return poly_int_rtx_p (len, &value)
              && known_eq (value, GET_MODE_NUNITS (mode))
              && !satisfies_constraint_K (len);            ---> incorrect check.
      
Actually, we should not exclude the VLMAX situation whose AVL is in the range [0,31].
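
A standalone model of the corrected check (plain integers stand in for
GCC's poly_int and machine-mode machinery; the names are illustrative):

    #include <cassert>
    #include <cstdint>

    /* VLMAX applies whenever len equals the element count of the mode;
       the removed carve-out wrongly rejected lengths that also fit a
       5-bit immediate ([0,31]).  */
    static bool
    is_vlmax_len (int64_t len, int64_t nunits)
    {
      return len == nunits;  /* no more "&& !(0 <= len && len <= 31)" */
    }

    int main ()
    {
      assert (is_vlmax_len (4, 4));   /* previously rejected: 4 fits [0,31] */
      assert (!is_vlmax_len (4, 8));
      return 0;
    }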
      
After removing the check above, we will have the following issue:
      
              vsetivli        zero,4,e32,m1,ta,ma
              vlseg4e32.v     v4,(a5)
              vlseg4e32.v     v12,(a3)
              vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
              vfadd.vf        v3,v13,fa0
              vfadd.vf        v1,v12,fa1
              vfmul.vv        v17,v3,v5
              vfmul.vv        v16,v1,v5
      
Since all the following operations (vfadd.vf etc.) are COND_LEN_xxx with dummy len and dummy mask,
we add simplification of dummy len and dummy mask into the VLMAX TA and MA policy.
      
So, after this patch, both cases have optimal codegen now:
      
      case 1:
      	vsetvli	a5,a2,e32,m1,ta,mu
      	vle32.v	v2,0(a0)
      	slli	a4,a5,2
      	vand.vv	v1,v2,v3
      	vand.vv	v0,v2,v4
      	sub	a2,a2,a5
      	vmseq.vi	v0,v0,0
      	vneg.v	v1,v2,v0.t
      	vse32.v	v1,0(a1)
      
      case 2:
      	vsetivli zero,4,e32,m1,tu,ma
      	addi a4,a5,400
      	vlseg4e32.v v12,(a3)
      	vfadd.vf v3,v13,fa0
      	vfadd.vf v1,v12,fa1
      	vlseg4e32.v v4,(a4)
      	vfadd.vf v2,v14,fa1
      	vfmul.vv v17,v3,v5
      	vfmul.vv v16,v1,v5
      
This patch is just an additional fix for the previously approved patch.
Tested on both RV32 and RV64 newlib with no regressions.  Committed.
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-v.cc (is_vlmax_len_p): Remove satisfies_constraint_K.
      	(expand_cond_len_op): Add simplification of dummy len and dummy mask.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/riscv/rvv/base/vf_avl-3.c: New test.
• aarch64: add 'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA' · b041bd4e
      Di Zhao authored
This patch adds a new tuning option
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA' to consider fully
pipelined FMAs in reassociation.  It also sets this option by default
for Ampere CPUs.
      
      gcc/ChangeLog:
      
      	* config/aarch64/aarch64-tuning-flags.def
      	(AARCH64_EXTRA_TUNING_OPTION): New tuning option
      	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
      	* config/aarch64/aarch64.cc
      	(aarch64_override_options_internal): Set
      	param_fully_pipelined_fma according to tuning option.
      	* config/aarch64/tuning_models/ampere1.h: Add
      	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
      	* config/aarch64/tuning_models/ampere1a.h: Likewise.
      	* config/aarch64/tuning_models/ampere1b.h: Likewise.
• RISC-V: Modify copyright year of vector-crypto.md · 6be6305f
      Feng Wang authored
      gcc/ChangeLog:
      	* config/riscv/vector-crypto.md: Modify copyright year.
• RISC-V: Declare STMT_VINFO_TYPE (...) as local variable · d2e40f28
      Juzhe-Zhong authored
      Committed.
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...) to local.
• LoongArch: Added TLS Le Relax support. · 3c20e626
      Lulu Cheng authored
Check whether the assembler supports TLS LE relax.  If it does, the TLS LE
relax assembly instruction sequence will be generated by default.
      
The original way to obtain the TLS LE symbol address:
          lu12i.w $rd, %le_hi20(sym)
          ori $rd, $rd, %le_lo12(sym)
          add.{w/d} $rd, $rd, $tp
      
      If the assembler supports tls le relax, the following sequence is generated:
      
          lu12i.w $rd, %le_hi20_r(sym)
          add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
          addi.{w/d} $rd,$rd,%le_lo12_r(sym)
      
      gcc/ChangeLog:
      
      	* config.in: Regenerate.
      	* config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION): Define.
      	* config/loongarch/loongarch.cc (loongarch_legitimize_tls_address):
      	Added TLS Le Relax support.
      	(loongarch_print_operand_reloc): Add the output string of TLS Le Relax.
      	* config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New template.
      	* configure: Regenerate.
      	* configure.ac: Check if binutils supports TLS le relax.
      
      gcc/testsuite/ChangeLog:
      
	* lib/target-supports.exp: Add a function to check whether binutils
	supports TLS Le Relax.
      	* gcc.target/loongarch/tls-le-relax.c: New test.
• RISC-V: Add crypto machine descriptions · d3d6a96d
      Feng Wang authored
      Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
      Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
      gcc/ChangeLog:
      
      	* config/riscv/iterators.md: Add rotate insn name.
      	* config/riscv/riscv.md: Add new insns name for crypto vector.
      	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
      	* config/riscv/vector.md: Add the corresponding attr for crypto vector.
	* config/riscv/vector-crypto.md: New file.  The machine descriptions for crypto vector.
• RISC-V: Count pointer type SSA into RVV regs liveness for dynamic LMUL cost model · 9a29b003
      Juzhe-Zhong authored
This patch fixes the following case of choosing an unexpectedly big LMUL, which causes register spills.
      
      Before this patch, choosing LMUL = 4:
      
      	addi	sp,sp,-160
      	addiw	t1,a2,-1
      	li	a5,7
      	bleu	t1,a5,.L16
      	vsetivli	zero,8,e64,m4,ta,ma
      	vmv.v.x	v4,a0
      	vs4r.v	v4,0(sp)                        ---> spill to the stack.
      	vmv.v.x	v4,a1
      	addi	a5,sp,64
      	vs4r.v	v4,0(a5)                        ---> spill to the stack.
      
The root cause is the following code:
      
                        if (poly_int_tree_p (var)
                            || (is_gimple_val (var)
                               && !POINTER_TYPE_P (TREE_TYPE (var))))
      
We count the variable as consuming an RVV reg group when it is not POINTER_TYPE.
      
It is right for load/store STMTs, for example:
      
      _1 = (MEM)*addr -->  addr won't be allocated an RVV vector group.
      
However, we find it is not right for non-load/store STMTs:
      
      _3 = _1 == x_8(D);
      
_1 is pointer type too, but we do allocate an RVV register group for it.
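
A toy model of the corrected rule (standalone, hypothetical names; the
real code inspects gimple statements): a pointer-typed SSA name escapes
the count only when it is used purely as a load/store address.

    #include <cassert>

    static bool
    counts_toward_rvv_pressure (bool pointer_type_p,
                                bool used_only_as_address_p)
    {
      /* "addr" in "_1 = *addr" needs no vector group, but "_1" in
         "_3 = _1 == x_8(D)" does, even though _1 has pointer type.  */
      return !(pointer_type_p && used_only_as_address_p);
    }

    int main ()
    {
      assert (!counts_toward_rvv_pressure (true, true));
      assert (counts_toward_rvv_pressure (true, false));
      assert (counts_toward_rvv_pressure (false, false));
      return 0;
    }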
      
So after this patch, we choose the perfect LMUL for the testcase:
      
      	ble	a2,zero,.L17
      	addiw	a7,a2,-1
      	li	a5,3
      	bleu	a7,a5,.L15
      	srliw	a5,a7,2
      	slli	a6,a5,1
      	add	a6,a6,a5
      	lui	a5,%hi(replacements)
      	addi	t1,a5,%lo(replacements)
      	slli	a6,a6,5
      	lui	t4,%hi(.LANCHOR0)
      	lui	t3,%hi(.LANCHOR0+8)
      	lui	a3,%hi(.LANCHOR0+16)
      	lui	a4,%hi(.LC1)
      	vsetivli	zero,4,e16,mf2,ta,ma
      	addi	t4,t4,%lo(.LANCHOR0)
      	addi	t3,t3,%lo(.LANCHOR0+8)
      	addi	a3,a3,%lo(.LANCHOR0+16)
      	addi	a4,a4,%lo(.LC1)
      	add	a6,t1,a6
      	addi	a5,a5,%lo(replacements)
      	vle16.v	v18,0(t4)
      	vle16.v	v17,0(t3)
      	vle16.v	v16,0(a3)
      	vmsgeu.vi	v25,v18,4
      	vadd.vi	v24,v18,-4
      	vmsgeu.vi	v23,v17,4
      	vadd.vi	v22,v17,-4
      	vlm.v	v21,0(a4)
      	vmsgeu.vi	v20,v16,4
      	vadd.vi	v19,v16,-4
      	vsetvli	zero,zero,e64,m2,ta,mu
      	vmv.v.x	v12,a0
      	vmv.v.x	v14,a1
      .L4:
      	vlseg3e64.v	v6,(a5)
      	vmseq.vv	v2,v6,v12
      	vmseq.vv	v0,v8,v12
      	vmsne.vv	v1,v8,v12
      	vmand.mm	v1,v1,v2
      	vmerge.vvm	v2,v8,v14,v0
      	vmv1r.v	v0,v1
      	addi	a4,a5,24
      	vmerge.vvm	v6,v6,v14,v0
      	vmerge.vim	v2,v2,0,v0
      	vrgatherei16.vv	v4,v6,v18
      	vmv1r.v	v0,v25
      	vrgatherei16.vv	v4,v2,v24,v0.t
      	vs1r.v	v4,0(a5)
      	addi	a3,a5,48
      	vmv1r.v	v0,v21
      	vmv2r.v	v4,v2
      	vcompress.vm	v4,v6,v0
      	vs1r.v	v4,0(a4)
      	vmv1r.v	v0,v23
      	addi	a4,a5,72
      	vrgatherei16.vv	v4,v6,v17
      	vrgatherei16.vv	v4,v2,v22,v0.t
      	vs1r.v	v4,0(a3)
      	vmv1r.v	v0,v20
      	vrgatherei16.vv	v4,v6,v16
      	addi	a5,a5,96
      	vrgatherei16.vv	v4,v2,v19,v0.t
      	vs1r.v	v4,0(a4)
      	bne	a6,a5,.L4
      
No spills, no "sp" register used.
      
Tested on both RV32 and RV64, no regressions.

OK for trunk?
      
      	PR target/113112
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Fix
      	pointer type liveness count.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-4.c: New test.
• Daily bump. · c8170fe5
      GCC Administrator authored
  3. Jan 01, 2024
  4. Dec 31, 2023
• i386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c · 79e1b23b
      Roger Sayle authored
This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, which started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.
      
      The function:
      
      unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
        return x+y;
      }
      
      currently generates:
      
      foo:    movq    %rdx, %rcx
              movq    %rdi, %rax
              movq    %rsi, %rdx
              addq    %rcx, %rax
              adcq    $0, %rdx
              ret
      
      and with this patch, we now generate:
      
      foo:	movq    %rdi, %rax
              addq    %rdx, %rax
              movq    %rsi, %rdx
              adcq    $0, %rdx
      
      which is optimal.
      
      2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
      	    Roger Sayle  <roger@nextmovesoftware.com>
      
      gcc/ChangeLog
      	PR target/43644
      	* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
      	order of instructions after split, to minimize number of moves.
      
      gcc/testsuite/ChangeLog
      	PR target/43644
      	* gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
• libstdc++ testsuite/20_util/hash/quality.cc: Increase timeout 3x · 26fe2808
      Hans-Peter Nilsson authored
Testing for mmix (a 64-bit target using Knuth's simulator).  The test
is largely pruned for simulators, but still needs 5m57s on my laptop
from 3.5 years ago to run to successful completion.  Perhaps slow
hosted targets could also have problems, so increase the timeout
limit, not just for simulators but for everyone, and by more than a
factor of 2.
      
      	* testsuite/20_util/hash/quality.cc: Increase timeout by a factor 3.
• libstdc++: [_Hashtable] Extend the small size optimization · 505110bb
      François Dumont authored
A number of methods were still not using the small size optimization, which
is to prefer an O(N) linear search to a hash computation as long as N is small.
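
As a standalone sketch of the idea (my code, not the libstdc++
internals; the threshold is hypothetical): below a small element count,
scan the elements directly instead of hashing the key.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    const std::size_t small_threshold = 16;  /* hypothetical knob */

    static const int *
    small_find (const std::vector<int> &elems, int key)
    {
      if (elems.size () <= small_threshold)
        {
          /* O(N) scan: no hash computation, no bucket walk.  */
          auto it = std::find (elems.begin (), elems.end (), key);
          return it == elems.end () ? nullptr : &*it;
        }
      return nullptr;  /* a real hashtable falls back to hashed lookup */
    }

    int main ()
    {
      std::vector<int> v{1, 2, 3};
      std::printf ("%d\n", small_find (v, 2) ? 1 : 0);  /* prints 1 */
      return 0;
    }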
      
      libstdc++-v3/ChangeLog:
      
      	* include/bits/hashtable.h: Move comment about all equivalent values
      	being next to each other in the class documentation header.
      	(_M_reinsert_node, _M_merge_unique): Implement small size optimization.
      	(_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
• libstdc++: [_Hashtable] Enhance performance benches · 91b334d0
      François Dumont authored
Add benches on insert with hint and on the before-begin cache.
      
      libstdc++-v3/ChangeLog:
      
      	* testsuite/performance/23_containers/insert/54075.cc: Add lookup on unknown entries
      	w/o copy to see potential impact of memory fragmentation enhancements.
      	* testsuite/performance/23_containers/insert/unordered_multiset_hint.cc: Enhance hash
      	functor to make it perfect, exactly 1 entry per bucket. Also use hash functor tagged as
      	slow or not to bench w/o hash code cache.
      	* testsuite/performance/23_containers/insert/unordered_set_hint.cc: New test case. Like
      	previous one but using std::unordered_set.
      	* testsuite/performance/23_containers/insert/unordered_set_range_insert.cc: New test case.
      	Check performance of range-insertion compared to individual insertions.
      	* testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: Add same bench
      	but after a copy to demonstrate impact of enhancements regarding memory fragmentation.
• Daily bump. · 03fb8f27
      GCC Administrator authored
  5. Dec 30, 2023
• C: Fix type compatibility for structs with variable sized fields. · 38c33fd2
      Martin Uecker authored
This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88f,
which fails for -march=... because DECL_FIELD_BIT_OFFSET is set
inconsistently for types with and without a variable-sized field.  This
is fixed by testing for DECL_ALIGN instead.  The code is further
simplified by removing some unnecessary conditions, i.e. anon_field is
set unconditionally and all fields are assumed to be DECL_FIELDs.
      
      gcc/c:
      	* c-typeck.cc (tagged_types_tu_compatible_p): Revise.
      
      gcc/testsuite:
      	* gcc.dg/c23-tag-9.c: New test.
• MAINTAINERS: Update my email address · 77f30e22
      Joseph Myers authored
      There will be another update in January.
      
      	* MAINTAINERS: Update my email address.
• Daily bump. · ab7f6701
      GCC Administrator authored
  6. Dec 29, 2023
• Disable FMADD in chains for Zen4 and generic · 467cc398
      Jan Hubicka authored
This patch disables use of FMA in the matrix multiplication loop for generic
(for x86-64-v3) and Zen4.  I tested this on Zen4 and a Xeon Gold 6212U.
      
      For Intel this is neutral both on the matrix multiplication microbenchmark
      (attached) and spec2k17 where the difference was within noise for Core.
      
On Core the micro-benchmark runs as follows:
      
      With FMA:
      
       578,500,241      cycles:u          #    3.645 GHz            ( +-  0.12% )
       753,318,477      instructions:u    #    1.30  insn per cycle ( +-  0.00% )
       125,417,701      branches:u        #  790.227 M/sec          ( +-  0.00% )
          0.159146 +- 0.000363 seconds time elapsed  ( +-  0.23% )
      
      No FMA:
      
       577,573,960      cycles:u          #    3.514 GHz            ( +-  0.15% )
       878,318,479      instructions:u    #    1.52  insn per cycle ( +-  0.00% )
       125,417,702      branches:u        #  763.035 M/sec          ( +-  0.00% )
          0.164734 +- 0.000321 seconds time elapsed  ( +-  0.19% )
      
So the cycle count is unchanged and a discrete multiply+add takes the same
time as FMA.
      
      While on zen:
      
      With FMA:
         484875179      cycles:u          #    3.599 GHz             ( +-  0.05% )  (82.11%)
         752031517      instructions:u    #    1.55  insn per cycle
         125106525      branches:u        #  928.712 M/sec           ( +-  0.03% )  (85.09%)
            128356      branch-misses:u   #    0.10% of all branches ( +-  0.06% )  (83.58%)
      
      No FMA:
         375875209      cycles:u          #    3.592 GHz             ( +-  0.08% )  (80.74%)
         875725341      instructions:u    #    2.33  insn per cycle
         124903825      branches:u        #    1.194 G/sec           ( +-  0.04% )  (84.59%)
          0.105203 +- 0.000188 seconds time elapsed  ( +-  0.18% )
      
The difference is that Core understands that fmadd does not need
all three parameters to start computation, while Zen cores don't.
      
Since this seems a noticeable win on Zen and not a loss on Core, it seems like
a good default for generic.
      
/* Includes and the SIZE definition are added here for completeness;
   the original message assumed them (the SIZE value is an assumption).  */
#include <stdio.h>
#include <time.h>

#define SIZE 1000

float a[SIZE][SIZE];
      float b[SIZE][SIZE];
      float c[SIZE][SIZE];
      
      void init(void)
      {
         int i, j, k;
         for(i=0; i<SIZE; ++i)
         {
            for(j=0; j<SIZE; ++j)
            {
               a[i][j] = (float)i + j;
               b[i][j] = (float)i - j;
               c[i][j] = 0.0f;
            }
         }
      }
      
      void mult(void)
      {
         int i, j, k;
      
         for(i=0; i<SIZE; ++i)
         {
            for(j=0; j<SIZE; ++j)
            {
               for(k=0; k<SIZE; ++k)
               {
                  c[i][j] += a[i][k] * b[k][j];
               }
            }
         }
      }
      
      int main(void)
      {
         clock_t s, e;
      
         init();
         s=clock();
         mult();
         e=clock();
         printf("        mult took %10d clocks\n", (int)(e-s));
      
         return 0;
      
      }
      
      gcc/ChangeLog:
      
      	* config/i386/x86-tune.def (X86_TUNE_AVOID_128FMA_CHAINS,
      	X86_TUNE_AVOID_256FMA_CHAINS): Enable for znver4 and Core.
• AArch64: Update costing for vector conversions [PR110625] · 984bdeaa
      Tamar Christina authored
      In gimple the operation
      
      short _8;
      double _9;
      _9 = (double) _8;
      
      denotes two operations on AArch64.  First we have to widen from short to
      long and then convert this integer to a double.
      
      Currently however we only count the widen/truncate operations:
      
      (double) _5 6 times vec_promote_demote costs 12 in body
      (double) _5 12 times vec_promote_demote costs 24 in body
      
      but not the actual conversion operation, which needs an additional 12
      instructions in the attached testcase.   Without this the attached testcase ends
      up incorrectly thinking that it's beneficial to vectorize the loop at a very
      high VF = 8 (4x unrolled).
      
Because we can't change the mid-end to account for this, the costing code in the
backend now keeps track of whether the previous operation was a
promotion/demotion and adjusts the expected number of instructions to:
      
      1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are
         different, double it, since we need to convert and promote.
2. If the previous operation was a demotion/promotion then reduce the
   cost of the current operation by the amount we added extra in the last one.
      
      with the patch we get:
      
      (double) _5 6 times vec_promote_demote costs 24 in body
      (double) _5 12 times vec_promote_demote costs 36 in body
      
      which correctly accounts for 30 operations.
      
      This fixes the 16% regression in imagick in SPECCPU 2017 reported on Neoverse N2
      and using the new generic Armv9-a cost model.
      
      gcc/ChangeLog:
      
      	PR target/110625
      	* config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
      	Adjust throughput and latency calculations for vector conversions.
      	(class aarch64_vector_costs): Add m_num_last_promote_demote.
      
      gcc/testsuite/ChangeLog:
      
      	PR target/110625
      	* gcc.target/aarch64/pr110625_4.c: New test.
      	* gcc.target/aarch64/sve/unpack_fcvt_signed_1.c: Add
      	--param aarch64-sve-compare-costs=0.
	* gcc.target/aarch64/sve/unpack_fcvt_unsigned_1.c: Likewise.
• LoongArch: Fix the format of bstrins_<mode>_for_ior_mask condition (NFC) · 748a4e90
      Xi Ruoyao authored
      gcc/ChangeLog:
      
      	* config/loongarch/loongarch.md (bstrins_<mode>_for_ior_mask):
      	For the condition, remove unneeded trailing "\" and move "&&" to
      	follow GNU coding style.  NFC.
• LoongArch: Replace -mexplicit-relocs=auto simple-used address peephole2 with combine · 8b61d109
      Xi Ruoyao authored
      The problem with peephole2 is it uses a naive sliding-window algorithm
      and misses many cases.  For example:
      
          float a[10000];
          float t() { return a[0] + a[8000]; }
      
      is compiled to:
      
          la.local    $r13,a
          la.local    $r12,a+32768
          fld.s       $f1,$r13,0
          fld.s       $f0,$r12,-768
          fadd.s      $f0,$f1,$f0
      
      by trunk.  But as we've explained in r14-4851, the following would be
      better with -mexplicit-relocs=auto:
      
          pcalau12i   $r13,%pc_hi20(a)
          pcalau12i   $r12,%pc_hi20(a+32000)
          fld.s       $f1,$r13,%pc_lo12(a)
          fld.s       $f0,$r12,%pc_lo12(a+32000)
          fadd.s      $f0,$f1,$f0
      
However, the sliding-window algorithm just won't detect the pcalau12i/fld
pair to be optimized.  Using a define_insn_and_rewrite in the combine pass
works around the issue.
      
      gcc/ChangeLog:
      
      	* config/loongarch/predicates.md
      	(symbolic_pcrel_offset_operand): New define_predicate.
      	(mem_simple_ldst_operand): Likewise.
      	* config/loongarch/loongarch-protos.h
      	(loongarch_rewrite_mem_for_simple_ldst): Declare.
      	* config/loongarch/loongarch.cc
      	(loongarch_rewrite_mem_for_simple_ldst): Implement.
      	* config/loongarch/loongarch.md (simple_load<mode>): New
      	define_insn_and_rewrite.
      	(simple_load_<su>ext<SUBDI:mode><GPR:mode>): Likewise.
      	(simple_store<mode>): Likewise.
      	(define_peephole2): Remove la.local/[f]ld peepholes.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/loongarch/explicit-relocs-auto-single-load-store-2.c:
      	New test.
      	* gcc.target/loongarch/explicit-relocs-auto-single-load-store-3.c:
      	New test.
• i386: Fix TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter [PR113133] · 1e7f9abb
      Uros Bizjak authored
      The post-reload splitter currently allows xmm16+ registers with TARGET_EVEX512.
      The splitter changes SFmode of the output operand to V4SFmode, but the vector
      mode is currently unsupported in xmm16+ without TARGET_AVX512VL. lowpart_subreg
      returns NULL_RTX in this case and the compilation fails with invalid RTX.
      
      The patch removes support for x/ymm16+ registers with TARGET_EVEX512.  The
      support should be restored once ix86_hard_regno_mode_ok is fixed to allow
      16-byte modes in x/ymm16+ with TARGET_EVEX512.
      
      	PR target/113133
      
      gcc/ChangeLog:
      
      	* config/i386/i386.md
      	(TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter):
      	Do not handle xmm16+ with TARGET_EVEX512.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/i386/pr113133-1.c: New test.
      	* gcc.target/i386/pr113133-2.c: New test.
• Fix gen-vect-26.c testcase after loops with multiple exits [PR113167] · 200531d5
      Andrew Pinski authored
      
This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
`#pragma GCC novector` in front of the loop that is doing the checking
of the result.  We only want to test the first loop to see if it can be
vectorized.
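
An illustrative shape of the fix (a simplified, hypothetical testcase;
the pragma itself is the real GCC mechanism): the compute loop stays
vectorizable while the checking loop is forced to remain scalar.

    #define N 128
    int a[N], b[N];

    void compute (void)
    {
      for (int i = 0; i < N; i++)   /* the loop the test wants vectorized */
        a[i] = b[i] + 1;
    }

    void check (void)
    {
    #pragma GCC novector
      for (int i = 0; i < N; i++)   /* kept scalar; dump scans see one loop */
        if (a[i] != b[i] + 1)
          __builtin_abort ();
    }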
      
      Committed as obvious after testing on x86_64-linux-gnu with -m32.
      
      gcc/testsuite/ChangeLog:
      
      	PR testsuite/113167
      	* gcc.dg/tree-ssa/gen-vect-26.c: Mark the test/check loop
      	as novector.
      
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
• RISC-V: Robustify testcase pr113112-1.c · 7dc868cb
      Juzhe-Zhong authored
The redundant dump check is fragile, easily changed, and not necessary.

Tested on both RV32/RV64 with no regressions.

Removed it and committed.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Remove redundant checks.
• RISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in range [0, 31] · d1eacedc
      Juzhe-Zhong authored
Notice we have the following situation:
      
              vsetivli        zero,4,e32,m1,ta,ma
              vlseg4e32.v     v4,(a5)
              vlseg4e32.v     v12,(a3)
              vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
              vfadd.vf        v3,v13,fa0
              vfadd.vf        v1,v12,fa1
              vfmul.vv        v17,v3,v5
              vfmul.vv        v16,v1,v5
      
The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when len == NUNITS.
However, we don't need to transform all of them, since when len is in the range [0,31] we
don't need to consume scalar registers.
      
      After this patch:
      
      	vsetivli	zero,4,e32,m1,tu,ma
      	addi	a4,a5,400
      	vlseg4e32.v	v12,(a3)
      	vfadd.vf	v3,v13,fa0
      	vfadd.vf	v1,v12,fa1
      	vlseg4e32.v	v4,(a4)
      	vfadd.vf	v2,v14,fa1
      	vfmul.vv	v17,v3,v5
      	vfmul.vv	v16,v1,v5
      
Tested on both RV32 and RV64, no regressions.

OK for trunk?
      
      gcc/ChangeLog:
      
      	* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
	(expand_load_store): Disallow transformation into VLMAX when len is in the range [0,31].
      	(expand_cond_len_op): Ditto.
      	(expand_gather_scatter): Ditto.
      	(expand_lanes_load_store): Ditto.
      	(expand_fold_extract_last): Ditto.
      
      gcc/testsuite/ChangeLog:
      
      	* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
      	* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
• Daily bump. · 7de05ad4
      GCC Administrator authored
  7. Dec 28, 2023
• Fortran: Add Developer Options mini-section to documentation · 2cb93e66
      Rimvydas Jasinskas authored
      
      Separate out -fdump-* options to the new section.  Sort by option name.
      
      While there, document -save-temps intermediates.
      
      gcc/fortran/ChangeLog:
      
      	PR fortran/81615
      	* invoke.texi: Add Developer Options section.  Move '-fdump-*'
      	to it.  Add small examples about changed '-save-temps' behavior.
      
Signed-off-by: Rimvydas Jasinskas <rimvydas.jas@gmail.com>
• testsuite: XFAIL linkage testcases on AIX. · bf5c00d7
      David Edelsohn authored
      
      The template linkage2.C and linkage3.C testcases expect a
      decoration that does not match AIX assembler syntax.  Expect failure.
      
      gcc/testsuite/ChangeLog:
      	* g++.dg/template/linkage2.C: XFAIL on AIX.
      	* g++.dg/template/linkage3.C: Same.
      
Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
• i386: Cleanup ix86_expand_{unary|binary}_operator issues · d74cceb6
      Uros Bizjak authored
      Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
      prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.
      
      No functional changes.
      
      gcc/ChangeLog:
      
      	* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
      	* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
      	* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
      	and ix86_expand_{unary|binary}_operator prototypes.
      	* config/i386/i386.md: Cosmetic changes with the usage of
      	TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
      	and ix86_{unary|binary}_operator_ok function calls.