- Jan 04, 2024
-
-
YunQiang Su authored
When combining some instructions, the generic `rtx_cost` may overestimate the cost of the resulting RTL: the RTL may be quite complex, and `rtx_cost` has no information that it can be converted to simple hardware instruction(s). In this case, use `insn_count * perf_ratio` to estimate the cost if both of them are available; otherwise fall back to `pattern_cost`. When optimizing for size rather than speed, use the length as the cost.

gcc
        * config/mips/mips.cc (mips_insn_cost): New function.

gcc/testsuite
        * gcc.target/mips/data-sym-multi-pool.c: Skip for -Os or -O0.
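
A minimal standalone sketch of the cost selection described above (illustrative only; the struct and names here are invented, not the actual GCC hook, which operates on an rtx_insn):

        /* Illustrative model: speed costs prefer insn_count * perf_ratio
           when both are known, otherwise fall back to the generic pattern
           cost; size costs use the encoded length.  */
        struct insn_info
        {
          int length;      /* encoded length in bytes */
          int insn_count;  /* hardware instruction count, 0 if unknown */
          int perf_ratio;  /* per-insn performance weight, 0 if unset */
        };

        int
        insn_cost (const insn_info &insn, bool speed, int pattern_cost)
        {
          if (!speed)
            return insn.length;
          if (insn.insn_count > 0 && insn.perf_ratio > 0)
            return insn.insn_count * insn.perf_ratio;
          return pattern_cost;
        }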
-
YunQiang Su authored
The accurate cost of a pattern can be obtained as insn_count * perf_ratio. The default value is set to 0 instead of 1, since we need to distinguish the default from a value that was really set for a pattern. Since it is not set for most patterns yet, users of the attribute must make sure its value is greater than 0. This attribute will be used in `mips_insn_cost`.

gcc
        * config/mips/mips.md (perf_ratio): New attribute.
-
Juzhe-Zhong authored
As reported in PR113206 and PR113209, the bugs happen in the following situation:

        li a4,32
        ...
        vsetvli zero,a4,e8,m8,ta,ma
        ...
        slliw a4,a3,24
        sraiw a4,a4,24
        bge a3,a1,.L8
        sb a4,%lo(e)(a0)
        vsetvli zero,a4,e8,m8,ta,ma  --> a4 is a polluted value, not the expected "32".
        ...
.L7:
        j .L7  --> infinite loop.

The root cause is that the infinite loop confuses the earliest computation and lets earliest fusion happen in an unexpected place. Disable blocks that belong to infinite loops to fix this bug, since applying earliest LCM fusion to infinite loops seems quite complicated and we don't see any benefits. Note that disabling earliest fusion on infinite loops doesn't hurt vsetvli performance; instead, it improves codegen in some cases.

Tested on both RV32 and RV64, no regression.

        PR target/113206
        PR target/113209

gcc/ChangeLog:

        * config/riscv/riscv-vsetvl.cc (invalid_opt_bb_p): New function.
        (pre_vsetvl::compute_lcm_local_properties): Disable earliest fusion
        on blocks belonging to infinite loops.
        (pre_vsetvl::emit_vsetvl): Remove fake edges.
        * config/riscv/t-riscv: Add a new include file.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/vsetvl/avl_single-23.c: Adapt test.
        * gcc.target/riscv/rvv/vsetvl/vlmax_call-1.c: Robustify test.
        * gcc.target/riscv/rvv/vsetvl/vlmax_call-2.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_call-3.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_conflict-5.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-1.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-2.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-3.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-4.c: Ditto.
        * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-5.c: Ditto.
        * gcc.target/riscv/rvv/autovec/pr113206-1.c: New test.
        * gcc.target/riscv/rvv/autovec/pr113206-2.c: New test.
        * gcc.target/riscv/rvv/autovec/pr113209.c: New test.
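
A hedged sketch of the block test (a standalone model, not the actual invalid_opt_bb_p implementation): a block belongs to an infinite loop exactly when the exit block is unreachable from it, which a reverse walk from the exit can detect:

        #include <stack>
        #include <vector>

        /* Model: mark every block that can reach EXIT by walking predecessor
           edges backwards; anything left unmarked sits in an infinite loop
           and is skipped by the earliest-fusion computation.  */
        std::vector<bool>
        blocks_reaching_exit (const std::vector<std::vector<int>> &preds,
                              int exit_bb)
        {
          std::vector<bool> reaches (preds.size (), false);
          std::stack<int> work;
          reaches[exit_bb] = true;
          work.push (exit_bb);
          while (!work.empty ())
            {
              int bb = work.top ();
              work.pop ();
              for (int pred : preds[bb])
                if (!reaches[pred])
                  {
                    reaches[pred] = true;
                    work.push (pred);
                  }
            }
          return reaches;  /* false => inside an infinite loop */
        }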
-
Juzhe-Zhong authored
Fix the indentation of some code to make it 8-space aligned. Committed.

gcc/ChangeLog:

        * config/riscv/vector.md: Fix indent.
-
GCC Administrator authored
-
- Jan 03, 2024
-
-
Patrick Palka authored
When computing a direct reference binding via a conversion function yields a bad conversion, reference_binding incorrectly commits to that conversion instead of trying a conversion via a temporary. This causes us to reject the first testcase because the bad direct conversion to B&& via the && conversion operator prevents us from considering the good conversion via the & conversion operator and a temporary. (Similar story for the second testcase.)

This patch fixes this by making reference_binding not prematurely commit to such a bad direct conversion. We still fall back to it if using a temporary also fails (otherwise the diagnostic for cpp0x/explicit7.C regresses).

        PR c++/113064

gcc/cp/ChangeLog:

        * call.cc (reference_binding): Still try a conversion via a
        temporary if a direct conversion was bad.

gcc/testsuite/ChangeLog:

        * g++.dg/cpp0x/rv-conv4.C: New test.
        * g++.dg/cpp0x/rv-conv5.C: New test.
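
A hedged illustration of the shape of code affected (modeled on the description above, not the actual rv-conv4.C testcase): the ref-qualified && conversion operator is a bad match for an lvalue object, and the fix lets overload resolution go on to use the & operator plus a temporary:

        struct B { };

        struct A {
          B b;
          /* Bad direct binding when the object is an lvalue.  */
          operator B&&() && { return static_cast<B&&>(b); }
          /* Good: yields a prvalue; B&& binds to the temporary.  */
          operator B() & { return b; }
        };

        void sink (B&&) { }

        int main ()
        {
          A a;
          sink (a);  /* previously rejected; now converts via operator B() & */
        }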
-
Harald Anlauf authored
gcc/fortran/ChangeLog:

        * trans-types.cc (gfc_get_nodesc_array_type): Clear used gmp
        variables.
-
Kwok Cheung Yeung authored
Move OMP_CLAUSE_INDIRECT so that it is outside of the range checked by OMP_CLAUSE_SIZE and OMP_CLAUSE_DECL.

2024-01-03  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/c/
        * c-parser.cc (c_parser_omp_clause_name): Move handling of indirect
        clause to correspond to alphabetical order.

gcc/cp/
        * parser.cc (cp_parser_omp_clause_name): Move handling of indirect
        clause to correspond to alphabetical order.

gcc/
        * tree-core.h (enum omp_clause_code): Move OMP_CLAUSE_INDIRECT to
        before OMP_CLAUSE__SIMDUID_.
        * tree.cc (omp_clause_num_ops): Update position of entry for
        OMP_CLAUSE_INDIRECT to correspond with omp_clause_code.
        (omp_clause_code_name): Likewise.
-
Kwok Cheung Yeung authored
This restructures the code generating FUNC_MAP and IND_FUNC_MAP labels in the assembly code for mkoffload to consume, hopefully making it a bit clearer and easier to search for.

2024-01-03  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
        * config/nvptx/nvptx.cc (nvptx_record_offload_symbol): Restructure
        printing of FUNC_MAP/IND_FUNC_MAP labels.
-
Jakub Jelinek authored
-
Jakub Jelinek authored
update-copyright.py --this-year FAILs on two spots in the modula2 directories. One is gpl_v3_without_node.texi; I think that is similar to other license files which we already exclude from updates. The other is GmcOptions.cc, which has lines like

        mcPrintf_printf0 ((const char *) "Copyright ", 10);
        mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
        mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));

which update-copyright.py obviously can't grok. The file is generated and doesn't contain a normal copyright year that should be updated, so I think it is also OK to skip it.

2024-01-03  Jakub Jelinek  <jakub@redhat.com>

        * update-copyright.py (GenericFilter): Skip gpl_v3_without_node.texi.
        (GCCFilter): Skip GmcOptions.cc.
-
Jakub Jelinek authored
Manual part of copyright year updates.

2024-01-03  Jakub Jelinek  <jakub@redhat.com>

gcc/
        * gcc.cc (process_command): Update copyright notice dates.
        * gcov-dump.cc (print_version): Ditto.
        * gcov.cc (print_version): Ditto.
        * gcov-tool.cc (print_version): Ditto.
        * gengtype.cc (create_file): Ditto.
        * doc/cpp.texi: Bump @copying's copyright year.
        * doc/cppinternals.texi: Ditto.
        * doc/gcc.texi: Ditto.
        * doc/gccint.texi: Ditto.
        * doc/gcov.texi: Ditto.
        * doc/install.texi: Ditto.
        * doc/invoke.texi: Ditto.

gcc/ada/
        * gnat_ugn.texi: Bump @copying's copyright year.
        * gnat_rm.texi: Likewise.

gcc/d/
        * gdc.texi: Bump @copyrights-d year.

gcc/fortran/
        * gfortranspec.cc (lang_specific_driver): Update copyright notice
        dates.
        * gfc-internals.texi: Bump @copying's copyright year.
        * gfortran.texi: Ditto.
        * intrinsic.texi: Ditto.
        * invoke.texi: Ditto.

gcc/go/
        * gccgo.texi: Bump @copyrights-go year.

libgomp/
        * libgomp.texi: Bump @copying's copyright year.

libitm/
        * libitm.texi: Bump @copying's copyright year.

libquadmath/
        * libquadmath.texi: Bump @copying's copyright year.
-
Jakub Jelinek authored
2023 -> 2024
-
Jakub Jelinek authored
Rotate ChangeLog files for ChangeLogs with yearly cadence.
-
Xi Ruoyao authored
We already had smin/smax RTL patterns using the vfmin/vfmax instructions. But for smin/smax it is unspecified what happens if either operand is NaN, so we would not vectorize the loop with -fno-finite-math-only (the default for all optimization levels except -Ofast). However, the LoongArch vfmin/vfmax instructions are IEEE 754-2008 conformant, so we can use them here as well and vectorize the loop.

gcc/ChangeLog:

        * config/loongarch/simd.md (fmax<mode>3): New define_insn.
        (fmin<mode>3): Likewise.
        (reduc_fmax_scal_<mode>3): New define_expand.
        (reduc_fmin_scal_<mode>3): Likewise.

gcc/testsuite/ChangeLog:

        * gcc.target/loongarch/vfmax-vfmin.c: New test.
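
For illustration, a reduction of this shape (my example, not the new testcase) is the kind of loop this enables: fmax has well-defined IEEE 754-2008 NaN semantics matching vfmax, so it can now be vectorized without -ffinite-math-only:

        #include <cmath>

        /* fmax is IEEE 754-2008 maxNum: a quiet NaN operand is ignored,
           which matches LoongArch vfmax, so no fast-math relaxation is
           needed to vectorize this reduction.  */
        double
        max_reduce (const double *a, int n)
        {
          double m = a[0];
          for (int i = 1; i < n; i++)
            m = std::fmax (m, a[i]);
          return m;
        }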
-
Juzhe-Zhong authored
This patch fixes the following situation:

        vl4re16.v v12,0(a5)
        ...
        vl4re16.v v16,0(a3)
        vs4r.v v12,0(a5)
        ...
        vl4re16.v v4,0(a0)
        vs4r.v v16,0(a3)
        ...
        vsetvli a3,zero,e16,m4,ta,ma
        ...
        vmv.v.x v8,t6
        vmsgeu.vv v2,v16,v8
        vsub.vv v16,v16,v8
        vs4r.v v16,0(a5)
        ...
        vs4r.v v4,0(a0)
        vmsgeu.vv v1,v4,v8
        ...
        vsub.vv v4,v4,v8
        slli a6,a4,2
        vs4r.v v4,0(a5)
        ...
        vsub.vv v4,v12,v8
        vmsgeu.vv v3,v12,v8
        vs4r.v v4,0(a5)
        ...

There are many spills, namely the 'vs4r.v' instructions. The root cause is that we don't count vector register liveness with reference to the rgroup controls.

        _29 = _25->iatom[0];

is transformed into the following vect statements with 4 different loop_lens (loop_len_74, loop_len_75, loop_len_76, loop_len_77):

        vect__29.11_78 = .MASK_LEN_LOAD (vectp_sb.9_72, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_74, 0);
        vect__29.12_80 = .MASK_LEN_LOAD (vectp_sb.9_79, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_75, 0);
        vect__29.13_82 = .MASK_LEN_LOAD (vectp_sb.9_81, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_76, 0);
        vect__29.14_84 = .MASK_LEN_LOAD (vectp_sb.9_83, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_77, 0);

which matches the number of lens (LOOP_VINFO_LENS (loop_vinfo).length ()). Counting liveness according to LOOP_VINFO_LENS (loop_vinfo).length () computes liveness more accurately:

        vsetivli zero,8,e16,m1,ta,ma
        vmsgeu.vi v19,v14,8
        vadd.vi v18,v14,-8
        vmsgeu.vi v17,v1,8
        vadd.vi v16,v1,-8
        vlm.v v15,0(a5)
        ...

Tested, no regression. OK for trunk?

        PR target/113112

gcc/ChangeLog:

        * config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Add
        rgroup info.
        (max_number_of_live_regs): Ditto.
        (has_unexpected_spills_p): Ditto.

gcc/testsuite/ChangeLog:

        * gcc.dg/vect/costmodel/riscv/rvv/pr113112-5.c: New test.
-
Patrick Palka authored
The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2 inadvertently increased the execution time of this test by over 5x, due to making the two main loops actually run in the signed_p case instead of being dead code.

To compensate, this patch cuts the relevant loops' range [-1000,1000] by 10x as proposed in the PR. This shouldn't significantly weaken the test since the same important edge cases are still checked in the smaller range and/or elsewhere. On my machine this reduces the test's execution time by roughly 10x (and 1.6x relative to before r14-205).

        PR testsuite/113175

libstdc++-v3/ChangeLog:

        * testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
        'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
        (test03): Likewise.
-
GCC Administrator authored
-
- Jan 02, 2024
-
-
Jun Sha (Joshua) authored
This patch replaces csr_operand with vector_length_operand in the vsetvl patterns. This allows future changes in the vector code (i.e. in the vector_length_operand predicate) without affecting scalar patterns that use the csr_operand predicate.

gcc/ChangeLog:

        * config/riscv/vector.md: Use vector_length_operand for vsetvl
        patterns.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
-
Andreas Schwab authored
libsanitizer:
        * configure.tgt (riscv64-*-linux*): Enable LSan and TSan.
-
Szabolcs Nagy authored
With the new glibc, one more loop can be vectorized via SIMD exp in libmvec. Found by the Linaro TCWG CI.

gcc/testsuite/ChangeLog:

        * gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.
-
Juzhe-Zhong authored
In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d I optimized the COND_LEN_xxx patterns with dummy len and dummy mask using an overly simple solution, which causes a redundant vsetvli in the following case:

        vsetvli a5,a2,e8,m1,ta,ma
        vle32.v v8,0(a0)
        vsetivli zero,16,e32,m4,tu,mu  ----> We should apply VLMAX instead of a CONST_INT AVL
        slli a4,a5,2
        vand.vv v0,v8,v16
        vand.vv v4,v8,v12
        vmseq.vi v0,v0,0
        sub a2,a2,a5
        vneg.v v4,v8,v0.t
        vsetvli zero,a5,e32,m4,ta,ma

The root cause of the above is the following code:

        is_vlmax_len_p (...)
          return poly_int_rtx_p (len, &value)
                 && known_eq (value, GET_MODE_NUNITS (mode))
                 && !satisfies_constraint_K (len);  ---> incorrect check

Actually, we should not rule out the VLMAX situation whose AVL is in the range [0,31]. After removing the check above, we have the following issue:

        vsetivli zero,4,e32,m1,ta,ma
        vlseg4e32.v v4,(a5)
        vlseg4e32.v v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma  ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf v3,v13,fa0
        vfadd.vf v1,v12,fa1
        vfmul.vv v17,v3,v5
        vfmul.vv v16,v1,v5

Since all the following operations (vfadd.vf etc.) are COND_LEN_xxx with dummy len and dummy mask, we add simplification of the dummy len and dummy mask to the VLMAX TA and MA policy. So after this patch, both cases get optimal codegen:

case 1:
        vsetvli a5,a2,e32,m1,ta,mu
        vle32.v v2,0(a0)
        slli a4,a5,2
        vand.vv v1,v2,v3
        vand.vv v0,v2,v4
        sub a2,a2,a5
        vmseq.vi v0,v0,0
        vneg.v v1,v2,v0.t
        vse32.v v1,0(a1)

case 2:
        vsetivli zero,4,e32,m1,tu,ma
        addi a4,a5,400
        vlseg4e32.v v12,(a3)
        vfadd.vf v3,v13,fa0
        vfadd.vf v1,v12,fa1
        vlseg4e32.v v4,(a4)
        vfadd.vf v2,v14,fa1
        vfmul.vv v17,v3,v5
        vfmul.vv v16,v1,v5

This patch is just an additional fix to the previously approved patch. Tested on both RV32 and RV64 newlib, no regression. Committed.

gcc/ChangeLog:

        * config/riscv/riscv-v.cc (is_vlmax_len_p): Remove
        satisfies_constraint_K.
        (expand_cond_len_op): Add simplification of dummy len and dummy mask.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/base/vf_avl-3.c: New test.
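
A standalone model of the predicate change (illustrative; the real code works on rtx and poly_int values): the 5-bit immediate check must not veto VLMAX recognition:

        /* Constraint K models the vsetivli 5-bit immediate AVL range.  */
        static bool
        satisfies_constraint_K (long v)
        {
          return v >= 0 && v <= 31;
        }

        /* Before: a constant AVL in [0,31] equal to the mode's element
           count was wrongly rejected as VLMAX.  */
        static bool
        is_vlmax_len_before (long len, long nunits)
        {
          return len == nunits && !satisfies_constraint_K (len);
        }

        /* After: AVL == number of elements is VLMAX, immediate-sized or not.  */
        static bool
        is_vlmax_len_after (long len, long nunits)
        {
          return len == nunits;
        }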
-
Di Zhao authored
This patch adds a new tuning option 'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA' to consider fully pipelined FMAs in reassociation, and sets this option by default for Ampere CPUs.

gcc/ChangeLog:

        * config/aarch64/aarch64-tuning-flags.def (AARCH64_EXTRA_TUNING_OPTION):
        New tuning option AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
        * config/aarch64/aarch64.cc (aarch64_override_options_internal):
        Set param_fully_pipelined_fma according to tuning option.
        * config/aarch64/tuning_models/ampere1.h: Add
        AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
        * config/aarch64/tuning_models/ampere1a.h: Likewise.
        * config/aarch64/tuning_models/ampere1b.h: Likewise.
-
Feng Wang authored
gcc/ChangeLog:

        * config/riscv/vector-crypto.md: Modify copyright year.
-
Juzhe-Zhong authored
Committed.

gcc/ChangeLog:

        * config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...)
        to local.
-
Lulu Cheng authored
Check whether the assembler supports TLS LE relax. If it does, the TLS LE relax assembly instruction sequence is generated by default.

The original way to obtain the TLS LE symbol address:

        lu12i.w $rd, %le_hi20(sym)
        ori $rd, $rd, %le_lo12(sym)
        add.{w/d} $rd, $rd, $tp

If the assembler supports TLS LE relax, the following sequence is generated instead:

        lu12i.w $rd, %le_hi20_r(sym)
        add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
        addi.{w/d} $rd,$rd,%le_lo12_r(sym)

gcc/ChangeLog:

        * config.in: Regenerate.
        * config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION):
        Define.
        * config/loongarch/loongarch.cc (loongarch_legitimize_tls_address):
        Add TLS LE relax support.
        (loongarch_print_operand_reloc): Add the output string of TLS LE
        relax.
        * config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New
        template.
        * configure: Regenerate.
        * configure.ac: Check if binutils supports TLS LE relax.

gcc/testsuite/ChangeLog:

        * lib/target-supports.exp: Add a function to check whether binutils
        supports TLS LE relax.
        * gcc.target/loongarch/tls-le-relax.c: New test.
-
Feng Wang authored
Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-authored-by: Ciyan Pan <panciyan@eswincomputing.com>

gcc/ChangeLog:

        * config/riscv/iterators.md: Add rotate insn names.
        * config/riscv/riscv.md: Add new insn names for the crypto vector
        extension.
        * config/riscv/vector-iterators.md: Add new iterators for the
        crypto vector extension.
        * config/riscv/vector.md: Add the corresponding attributes for the
        crypto vector extension.
        * config/riscv/vector-crypto.md: New file. The machine descriptions
        for the crypto vector extension.
-
Juzhe-Zhong authored
This patch fixes a case where we chose an unexpectedly big LMUL, causing register spills. Before this patch, choosing LMUL = 4:

        addi sp,sp,-160
        addiw t1,a2,-1
        li a5,7
        bleu t1,a5,.L16
        vsetivli zero,8,e64,m4,ta,ma
        vmv.v.x v4,a0
        vs4r.v v4,0(sp)   ---> spill to the stack
        vmv.v.x v4,a1
        addi a5,sp,64
        vs4r.v v4,0(a5)   ---> spill to the stack

The root cause is the following code:

        if (poly_int_tree_p (var)
            || (is_gimple_val (var)
                && !POINTER_TYPE_P (TREE_TYPE (var))))

We count the variable as consuming an RVV register group when it is not POINTER_TYPE. That is right for a load/store STMT, for example:

        _1 = (MEM)*addr  --> addr won't be allocated an RVV vector group.

However, we find it is not right for a non-load/store STMT:

        _3 = _1 == x_8(D);

_1 is pointer type too, but we do allocate an RVV register group for it. So after this patch, we choose the perfect LMUL for the testcase in this patch:

        ble a2,zero,.L17
        addiw a7,a2,-1
        li a5,3
        bleu a7,a5,.L15
        srliw a5,a7,2
        slli a6,a5,1
        add a6,a6,a5
        lui a5,%hi(replacements)
        addi t1,a5,%lo(replacements)
        slli a6,a6,5
        lui t4,%hi(.LANCHOR0)
        lui t3,%hi(.LANCHOR0+8)
        lui a3,%hi(.LANCHOR0+16)
        lui a4,%hi(.LC1)
        vsetivli zero,4,e16,mf2,ta,ma
        addi t4,t4,%lo(.LANCHOR0)
        addi t3,t3,%lo(.LANCHOR0+8)
        addi a3,a3,%lo(.LANCHOR0+16)
        addi a4,a4,%lo(.LC1)
        add a6,t1,a6
        addi a5,a5,%lo(replacements)
        vle16.v v18,0(t4)
        vle16.v v17,0(t3)
        vle16.v v16,0(a3)
        vmsgeu.vi v25,v18,4
        vadd.vi v24,v18,-4
        vmsgeu.vi v23,v17,4
        vadd.vi v22,v17,-4
        vlm.v v21,0(a4)
        vmsgeu.vi v20,v16,4
        vadd.vi v19,v16,-4
        vsetvli zero,zero,e64,m2,ta,mu
        vmv.v.x v12,a0
        vmv.v.x v14,a1
.L4:
        vlseg3e64.v v6,(a5)
        vmseq.vv v2,v6,v12
        vmseq.vv v0,v8,v12
        vmsne.vv v1,v8,v12
        vmand.mm v1,v1,v2
        vmerge.vvm v2,v8,v14,v0
        vmv1r.v v0,v1
        addi a4,a5,24
        vmerge.vvm v6,v6,v14,v0
        vmerge.vim v2,v2,0,v0
        vrgatherei16.vv v4,v6,v18
        vmv1r.v v0,v25
        vrgatherei16.vv v4,v2,v24,v0.t
        vs1r.v v4,0(a5)
        addi a3,a5,48
        vmv1r.v v0,v21
        vmv2r.v v4,v2
        vcompress.vm v4,v6,v0
        vs1r.v v4,0(a4)
        vmv1r.v v0,v23
        addi a4,a5,72
        vrgatherei16.vv v4,v6,v17
        vrgatherei16.vv v4,v2,v22,v0.t
        vs1r.v v4,0(a3)
        vmv1r.v v0,v20
        vrgatherei16.vv v4,v6,v16
        addi a5,a5,96
        vrgatherei16.vv v4,v2,v19,v0.t
        vs1r.v v4,0(a4)
        bne a6,a5,.L4

No spills, no "sp" register used. Tested on both RV32 and RV64, no regression. OK for trunk?

        PR target/113112

gcc/ChangeLog:

        * config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Fix
        pointer type liveness count.

gcc/testsuite/ChangeLog:

        * gcc.dg/vect/costmodel/riscv/rvv/pr113112-4.c: New test.
-
GCC Administrator authored
-
- Jan 01, 2024
-
-
GCC Administrator authored
-
- Dec 31, 2023
-
-
Roger Sayle authored
This patch resolves the failure of pr43644-2.c in the testsuite, a code quality test I added back in July, that started failing as the code GCC generates for 128-bit values (and their parameter passing) has been in flux. The function:

        unsigned __int128 foo(unsigned __int128 x, unsigned long long y)
        {
          return x+y;
        }

currently generates:

        foo:    movq %rdx, %rcx
                movq %rdi, %rax
                movq %rsi, %rdx
                addq %rcx, %rax
                adcq $0, %rdx
                ret

and with this patch, we now generate:

        foo:    movq %rdi, %rax
                addq %rdx, %rax
                movq %rsi, %rdx
                adcq $0, %rdx

which is optimal.

2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
            Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
        PR target/43644
        * config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
        order of instructions after split, to minimize number of moves.

gcc/testsuite/ChangeLog
        PR target/43644
        * gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
-
Hans-Peter Nilsson authored
Testing for mmix (a 64-bit target using Knuth's simulator). The test is largely pruned for simulators, but still needs 5m57s on my laptop from 3.5 years ago to run to successful completion. Perhaps slow hosted targets could also have problems, so increase the timeout limit, not just for simulators but for everyone, and by more than a factor of 2.

        * testsuite/20_util/hash/quality.cc: Increase timeout by a factor 3.
-
François Dumont authored
A number of methods were still not using the small size optimization, which is to prefer an O(N) linear search over a hash computation as long as N is small.

libstdc++-v3/ChangeLog:

        * include/bits/hashtable.h: Move comment about all equivalent
        values being next to each other into the class documentation
        header.
        (_M_reinsert_node, _M_merge_unique): Implement small size
        optimization.
        (_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
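
A hedged sketch of the idea (not the libstdc++ code; the node type and threshold here are invented for illustration): below a small size, walk the elements and compare keys directly instead of hashing first:

        #include <cstddef>

        /* Minimal singly linked node, standing in for the hashtable's nodes.  */
        template <typename Key>
        struct node
        {
          Key key;
          node *next;
        };

        /* For small containers an O(N) scan over the linked list of elements
           avoids computing the hash code entirely; past the threshold the
           caller falls back to the usual hash + bucket lookup (not shown).  */
        template <typename Key, typename Eq>
        node<Key> *
        find_small (node<Key> *first, std::size_t size, const Key &k, Eq eq,
                    std::size_t threshold = 20)
        {
          if (size > threshold)
            return nullptr;  /* use the normal bucket search instead */
          for (node<Key> *n = first; n; n = n->next)
            if (eq (n->key, k))
              return n;
          return nullptr;
        }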
-
François Dumont authored
Add benchmarks for insertion with hint and for the before-begin cache.

libstdc++-v3/ChangeLog:

        * testsuite/performance/23_containers/insert/54075.cc: Add lookup
        of unknown entries w/o copy to see the potential impact of memory
        fragmentation enhancements.
        * testsuite/performance/23_containers/insert/unordered_multiset_hint.cc:
        Enhance hash functor to make it perfect, exactly 1 entry per
        bucket. Also use hash functor tagged as slow or not to bench w/o
        hash code cache.
        * testsuite/performance/23_containers/insert/unordered_set_hint.cc:
        New test case. Like the previous one but using std::unordered_set.
        * testsuite/performance/23_containers/insert/unordered_set_range_insert.cc:
        New test case. Check performance of range-insertion compared to
        individual insertions.
        * testsuite/performance/23_containers/insert_erase/unordered_small_size.cc:
        Add the same bench but after a copy to demonstrate the impact of
        enhancements regarding memory fragmentation.
-
GCC Administrator authored
-
- Dec 30, 2023
-
-
Martin Uecker authored
This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88f, which fails for -march=... because DECL_FIELD_BIT_OFFSET is set inconsistently for types with and without a variable-sized field. This is fixed by testing DECL_ALIGN instead. The code is further simplified by removing some unnecessary conditions, i.e. anon_field is set unconditionally and all fields are assumed to be FIELD_DECLs.

gcc/c:
        * c-typeck.cc (tagged_types_tu_compatible_p): Revise.

gcc/testsuite:
        * gcc.dg/c23-tag-9.c: New test.
-
Joseph Myers authored
There will be another update in January.

        * MAINTAINERS: Update my email address.
-
GCC Administrator authored
-
- Dec 29, 2023
-
-
Jan Hubicka authored
This patch disables the use of FMA in the matrix multiplication loop for generic (x86-64-v3) and zen4 tuning. I tested this on Zen 4 and on a Xeon Gold 6212U.

For Intel this is neutral both on the matrix multiplication microbenchmark (attached) and on SPEC2k17, where the difference was within noise for Core. On Core the microbenchmark runs as follows.

With FMA:

        578,500,241  cycles:u        # 3.645 GHz             ( +- 0.12% )
        753,318,477  instructions:u  # 1.30 insn per cycle   ( +- 0.00% )
        125,417,701  branches:u      # 790.227 M/sec         ( +- 0.00% )
        0.159146 +- 0.000363 seconds time elapsed ( +- 0.23% )

No FMA:

        577,573,960  cycles:u        # 3.514 GHz             ( +- 0.15% )
        878,318,479  instructions:u  # 1.52 insn per cycle   ( +- 0.00% )
        125,417,702  branches:u      # 763.035 M/sec         ( +- 0.00% )
        0.164734 +- 0.000321 seconds time elapsed ( +- 0.19% )

So the cycle count is unchanged, and a discrete multiply+add takes the same time as FMA.

While on Zen:

With FMA:

        484875179  cycles:u        # 3.599 GHz             ( +- 0.05% ) (82.11%)
        752031517  instructions:u  # 1.55 insn per cycle
        125106525  branches:u      # 928.712 M/sec          ( +- 0.03% ) (85.09%)
        128356     branch-misses:u # 0.10% of all branches  ( +- 0.06% ) (83.58%)

No FMA:

        375875209  cycles:u        # 3.592 GHz             ( +- 0.08% ) (80.74%)
        875725341  instructions:u  # 2.33 insn per cycle
        124903825  branches:u      # 1.194 G/sec            ( +- 0.04% ) (84.59%)
        0.105203 +- 0.000188 seconds time elapsed ( +- 0.18% )

The difference is that Core understands the fact that fmadd does not need all three parameters to start computation, while Zen cores don't. Since this seems a noticeable win on Zen and no loss on Core, it seems like a good default for generic.

The microbenchmark (includes added here to make it build standalone; SIZE is not given in the original message, so the value below is an assumption):

        #include <stdio.h>
        #include <time.h>

        #ifndef SIZE
        #define SIZE 1000  /* assumed; the original message does not give the value */
        #endif

        float a[SIZE][SIZE];
        float b[SIZE][SIZE];
        float c[SIZE][SIZE];

        void init(void)
        {
          int i, j, k;
          for(i=0; i<SIZE; ++i)
          {
            for(j=0; j<SIZE; ++j)
            {
              a[i][j] = (float)i + j;
              b[i][j] = (float)i - j;
              c[i][j] = 0.0f;
            }
          }
        }

        void mult(void)
        {
          int i, j, k;
          for(i=0; i<SIZE; ++i)
          {
            for(j=0; j<SIZE; ++j)
            {
              for(k=0; k<SIZE; ++k)
              {
                c[i][j] += a[i][k] * b[k][j];
              }
            }
          }
        }

        int main(void)
        {
          clock_t s, e;
          init();
          s=clock();
          mult();
          e=clock();
          printf(" mult took %10d clocks\n", (int)(e-s));
          return 0;
        }

gcc/ChangeLog:

        * config/i386/x86-tune.def (X86_TUNE_AVOID_128FMA_CHAINS,
        X86_TUNE_AVOID_256FMA_CHAINS): Enable for znver4 and Core.
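
The dependency-chain point can be made concrete with a hedged sketch (my reading of the explanation above, not code from the patch): with FMA the accumulator carries the full fma latency each iteration; split into mul + add, the multiply runs off the critical path on cores that cannot start the FMA before the addend is ready:

        /* FMA form: acc -> fma -> acc, so every iteration serializes on the
           whole fma latency on cores that wait for all three inputs.  */
        double
        dot_fma (const double *a, const double *b, int n)
        {
          double acc = 0.0;
          for (int i = 0; i < n; i++)
            acc = __builtin_fma (a[i], b[i], acc);
          return acc;
        }

        /* Split form: the multiply depends only on a[i] and b[i] and can
           issue early; only the add sits on the accumulator's critical path.  */
        double
        dot_split (const double *a, const double *b, int n)
        {
          double acc = 0.0;
          for (int i = 0; i < n; i++)
            acc += a[i] * b[i];
          return acc;
        }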
-
Tamar Christina authored
In gimple the operation

        short _8;
        double _9;
        _9 = (double) _8;

denotes two operations on AArch64: first we have to widen from short to long, and then convert this integer to a double. Currently however we only count the widen/truncate operations:

        (double) _5 6 times vec_promote_demote costs 12 in body
        (double) _5 12 times vec_promote_demote costs 24 in body

but not the actual conversion operation, which needs an additional 12 instructions in the attached testcase. Without this, the attached testcase ends up incorrectly thinking that it's beneficial to vectorize the loop at a very high VF = 8 (4x unrolled).

Because we can't change the mid-end to account for this, the costing code in the backend now keeps track of whether the previous operation was a promotion/demotion and adjusts the expected number of instructions as follows:

1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are different, double it, since we need to convert and promote.
2. If the previous operation was a demotion/promotion, reduce the cost of the current operation by the amount we added extra in the last one.

With the patch we get:

        (double) _5 6 times vec_promote_demote costs 24 in body
        (double) _5 12 times vec_promote_demote costs 36 in body

which correctly accounts for 30 operations. This fixes the 16% regression in imagick in SPEC CPU 2017 reported on Neoverse N2 when using the new generic Armv9-a cost model.

gcc/ChangeLog:

        PR target/110625
        * config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
        Adjust throughput and latency calculations for vector conversions.
        (class aarch64_vector_costs): Add m_num_last_promote_demote.

gcc/testsuite/ChangeLog:

        PR target/110625
        * gcc.target/aarch64/pr110625_4.c: New test.
        * gcc.target/aarch64/sve/unpack_fcvt_signed_1.c: Add --param
        aarch64-sve-compare-costs=0.
        * gcc.target/aarch64/sve/unpack_fcvt_unsigned_1.c: Likewise.
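
A hedged example of a loop with this shape (assumed to be similar to the attached testcase, which is not shown here): each short element must first be widened and then converted, so the vectorizer should count both operations:

        /* short -> double: on AArch64 this is a widen plus an int-to-float
           convert per vector, not just the widen, so the vector body costs
           more than the promote/demote count alone suggests.  */
        void
        f (double *__restrict out, const short *__restrict in, int n)
        {
          for (int i = 0; i < n; i++)
            out[i] = in[i];
        }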
-