- Jan 04, 2024
-
David Malcolm authored
gcc/analyzer/ChangeLog:
    PR analyzer/113222
    * access-diagram.cc (valid_region_spatial_item::add_boundaries):
    Handle TYPE_DOMAIN being null.
    (valid_region_spatial_item::add_array_elements_to_table): Likewise.

gcc/testsuite/ChangeLog:
    PR analyzer/113222
    * gcc.dg/analyzer/out-of-bounds-diagram-pr113222.c: New test.

Signed-off-by: David Malcolm <dmalcolm@redhat.com>
-
Kuan-Lin Chen authored
According to the spec, fmv.h checks whether the input operands are
correctly NaN-boxed. If not, the input value is treated as an n-bit
canonical NaN. This patch fixes the issue that operands returned by the
soft-fp16 libgcc routines (i.e., __truncdfhf2) were not correctly
NaN-boxed.

gcc/ChangeLog:
    * config/riscv/riscv.cc (riscv_legitimize_move): Expand movhf
    with a NaN-boxed value.
    * config/riscv/riscv.md (*movhf_softfloat_unspec): New pattern.

gcc/testsuite/ChangeLog:
    * gcc.target/riscv/_Float16-nanboxing.c: New test.

Co-authored-by: Patrick Lin <patrick@andestech.com>
Co-authored-by: Rufus Chen <rufus@andestech.com>
Co-authored-by: Monk Chiang <monk.chiang@sifive.com>
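A minimal sketch of the failure mode (a hypothetical reduction, not
necessarily the contents of the new _Float16-nanboxing.c test):

    /* In a configuration where the DF -> HF truncation is performed by
       libgcc's __truncdfhf2 (soft FP16) while HFmode values are moved
       with fmv.h, the 16-bit result must first be NaN-boxed (upper bits
       of the FP register set to all ones); otherwise the hardware
       treats it as a canonical NaN.  */
    _Float16 g;

    void
    store_f16 (double d)
    {
      g = (_Float16) d;   /* __truncdfhf2 result; must be NaN-boxed on the move */
    }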
-
Roger Sayle authored
This patch fixes PR rtl-optimization/104914 by tweaking/improving the
way that fields are written into a pseudo register that needs to be
kept sign extended. The motivating example from the bugzilla PR is:

    extern void ext(int);
    void foo(const unsigned char *buf)
    {
      int val;
      ((unsigned char*)&val)[0] = *buf++;
      ((unsigned char*)&val)[1] = *buf++;
      ((unsigned char*)&val)[2] = *buf++;
      ((unsigned char*)&val)[3] = *buf++;
      if (val > 0)
        ext(1);
      else
        ext(0);
    }

which at the end of the tree optimization passes looks like:

    void foo (const unsigned char * buf)
    {
      int val;
      unsigned char _1;
      unsigned char _2;
      unsigned char _3;
      unsigned char _4;
      int val.5_5;

      <bb 2> [local count: 1073741824]:
      _1 = *buf_7(D);
      MEM[(unsigned char *)&val] = _1;
      _2 = MEM[(const unsigned char *)buf_7(D) + 1B];
      MEM[(unsigned char *)&val + 1B] = _2;
      _3 = MEM[(const unsigned char *)buf_7(D) + 2B];
      MEM[(unsigned char *)&val + 2B] = _3;
      _4 = MEM[(const unsigned char *)buf_7(D) + 3B];
      MEM[(unsigned char *)&val + 3B] = _4;
      val.5_5 = val;
      if (val.5_5 > 0)
        goto <bb 3>; [59.00%]
      else
        goto <bb 4>; [41.00%]

      <bb 3> [local count: 633507681]:
      ext (1);
      goto <bb 5>; [100.00%]

      <bb 4> [local count: 440234144]:
      ext (0);

      <bb 5> [local count: 1073741824]:
      val ={v} {CLOBBER(eol)};
      return;
    }

Here four bytes are sequentially written into the SImode value val. On
some platforms, such as MIPS64, this SImode value is kept in a 64-bit
register, suitably sign-extended. The function expand_assignment
contains logic to handle this via SUBREG_PROMOTED_VAR_P (around line
6264 in expr.cc), which outputs an explicit extension operation after
each store_field (typically insv) to such promoted/extended pseudos.

The first observation is that there's no need to perform sign extension
after each byte in the example above; the extension is only required
after changes to the most significant byte (i.e. to a field that
overlaps the most significant bit).

The bug fix is actually a bit more subtle: at this point during code
expansion it's not safe to use a SUBREG when sign-extending this field.
Currently, GCC generates (sign_extend:DI (subreg:SI (reg:DI) 0)), but
combine (and other RTL optimizers) later realize that because SImode
values are always sign-extended in their 64-bit hard registers, this is
a no-op and eliminate it. The trouble is that it's unsafe to refer to
the SImode lowpart of a 64-bit register using SUBREG at those critical
points when the value temporarily isn't correctly sign-extended and the
usual backend invariants don't hold. At these critical points, the
middle-end needs to use an explicit TRUNCATE rtx (as this isn't a
TRULY_NOOP_TRUNCATION), so that the explicit sign-extension looks like
(sign_extend:DI (truncate:SI (reg:DI))), which avoids the problem.

2024-01-04  Roger Sayle  <roger@nextmovesoftware.com>
	    Jeff Law  <jlaw@ventanamicro.com>

gcc/ChangeLog:
    PR rtl-optimization/104914
    * expr.cc (expand_assignment): When target is
    SUBREG_PROMOTED_VAR_P, a sign or zero extension is only required
    if the modified field overlaps the SUBREG's most significant bit.
    On MODE_REP_EXTENDED targets, don't refer to the temporarily
    incorrectly extended value using a SUBREG, but instead generate an
    explicit TRUNCATE rtx.
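The shape of the fix, compressed into a few lines (an illustrative
sketch; bitpos, bitsize, and promoted_reg are hypothetical names, not
the verbatim expr.cc code):

    /* After store_field into a SUBREG_PROMOTED_VAR_P target, re-extend
       only when the written field overlaps the narrow mode's sign bit.  */
    if (bitpos + bitsize >= 32)   /* field reaches bit 31 of the SImode lowpart */
      {
        /* The backend invariant is temporarily broken, so take the
           lowpart with an explicit TRUNCATE rather than a SUBREG.  */
        rtx low = gen_rtx_TRUNCATE (SImode, promoted_reg);
        emit_move_insn (promoted_reg, gen_rtx_SIGN_EXTEND (DImode, low));
      }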
-
Juzhe-Zhong authored
Consider the following case:

    void
    f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
    {
      for (int i = 0; i < n; i++)
        {
          int tmp = b[i] + 15;
          int tmp2 = tmp + b[i];
          c[i] = tmp2 + b[i];
          d[i] = tmp + tmp2 + b[i];
        }
    }

The current dynamic LMUL cost model chooses LMUL = 4 because it counts
the "15" as consuming 1 vector register group, which is not accurate.
We teach the dynamic LMUL cost model to be aware of the potential vi
variant instruction transformation, so that we can choose LMUL = 8
according to a more accurate cost model.

After this patch:

    f:
        ble     a4,zero,.L5
    .L3:
        vsetvli a5,a4,e32,m8,ta,ma
        slli    a0,a5,2
        vle32.v v16,0(a1)
        vadd.vi v24,v16,15
        vadd.vv v8,v24,v16
        vadd.vv v0,v8,v16
        vse32.v v0,0(a2)
        vadd.vv v8,v8,v24
        vadd.vv v8,v8,v16
        vse32.v v8,0(a3)
        add     a1,a1,a0
        add     a2,a2,a0
        add     a3,a3,a0
        sub     a4,a4,a5
        bne     a4,zero,.L3
    .L5:
        ret

Tested on both RV32 and RV64, no regression. Ok for trunk?

gcc/ChangeLog:
    * config/riscv/riscv-vector-costs.cc (variable_vectorized_p):
    Teach vi variant.

gcc/testsuite/ChangeLog:
    * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-13.c: New test.
-
Kito Cheng authored
An `interrupt` function will back up the fcsr register, but the save
slot is fixed to SImode. That is not a big issue in itself, since fcsr
only uses 8 bits so far; however, the offset should still be stepped by
UNITS_PER_WORD to prevent the stack offset from becoming non-8-byte
aligned, which causes problems on RV64.

gcc/ChangeLog:
    * config/riscv/riscv.cc (riscv_for_each_saved_reg): Adjust the
    offset of fcsr.

gcc/testsuite/ChangeLog:
    * gcc.target/riscv/interrupt-misaligned.c: New.
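A sketch of the adjustment (hypothetical names; the actual save/restore
walk in riscv.cc differs in detail):

    /* fcsr is saved in SImode, but step the running offset by a full
       word so every following save slot stays UNITS_PER_WORD aligned
       (8 bytes on RV64).  */
    offset -= UNITS_PER_WORD;            /* not GET_MODE_SIZE (SImode) */
    riscv_save_restore_reg (SImode, regno, offset, fn);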
-
chenxiaolong authored
On LoongArch, GCC supports the vectorization exercised by vect/slp-26.c,
but loongarch is not among the targets checked in the dg-final
directives. Add loongarch to the appropriate dg-finals.

gcc/testsuite/ChangeLog:
    * gcc.dg/vect/slp-26.c: Add loongarch.
-
Juzhe-Zhong authored
Notice a case has "Maximum lmul = 16", which is incorrect. Correct the
LMUL estimation for MASK_LEN_LOAD/MASK_LEN_STORE.

Committed.

gcc/ChangeLog:
    * config/riscv/riscv-vector-costs.cc (variable_vectorized_p):
    New function.
    (compute_nregs_for_mode): Refine LMUL.
    (max_number_of_live_regs): Ditto.
    (compute_estimated_lmul): Ditto.
    (has_unexpected_spills_p): Ditto.

gcc/testsuite/ChangeLog:
    * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-11.c: New test.
-
chenxiaolong authored
After the cost model was implemented for the LoongArch architecture,
GCC enables it by default, which causes the lasx-xvstelm.c test to
fail. Analysis shows that this test case only generates the vector
instructions the test looks for after the cost model is disabled with
the "-fno-vect-cost-model" compile option.

gcc/testsuite/ChangeLog:
    * gcc.target/loongarch/vector/lasx/lasx-xvstelm.c: Add compile
    option "-fno-vect-cost-model" to dg-options.
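The resulting directive presumably takes a shape like this (the other
flags shown are assumptions, not the test's actual options):

    /* { dg-options "-O3 -mlasx -fno-vect-cost-model" } */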
-
Li Wei authored
There are currently two versions of the implementation of constant
vector permutation: loongarch_expand_vec_perm_const_1 and
loongarch_expand_vec_perm_const_2. The implementations of the two
versions are different. Currently, only the implementation of
loongarch_expand_vec_perm_const_1 is used for 256-bit vectors. We hope
to streamline the code as much as possible while retaining the
better-performing implementation of the two. By repeatedly testing
spec2006 and spec2017, we arrived at the merged version.

Compared with the pre-merge version, the number of lines of code in
loongarch.cc has been reduced by 888 lines. At the same time, the
performance of SPECint2006 under -Ofast has improved by 0.97%, and the
performance of SPEC2017 fprate has improved by 0.27%.

gcc/ChangeLog:
    * config/loongarch/loongarch.cc (loongarch_is_odd_extraction):
    Remove useless forward declaration.
    (loongarch_is_even_extraction): Remove useless forward
    declaration.
    (loongarch_try_expand_lsx_vshuf_const): Removed.
    (loongarch_expand_vec_perm_const_1): Merged.
    (loongarch_is_double_duplicate): Removed.
    (loongarch_is_center_extraction): Ditto.
    (loongarch_is_reversing_permutation): Ditto.
    (loongarch_is_di_misalign_extract): Ditto.
    (loongarch_is_si_misalign_extract): Ditto.
    (loongarch_is_lasx_lowpart_extract): Ditto.
    (loongarch_is_op_reverse_perm): Ditto.
    (loongarch_is_single_op_perm): Ditto.
    (loongarch_is_divisible_perm): Ditto.
    (loongarch_is_triple_stride_extract): Ditto.
    (loongarch_expand_vec_perm_const_2): Merged.
    (loongarch_expand_vec_perm_const): New.
    (loongarch_vectorize_vec_perm_const): Adjust.
-
Sandra Loosemore authored
gcc/ChangeLog:
    * omp-general.cc: Fix comment typos and misplaced/confusing
    comments. Delete redundant include of omp-general.h.
-
YunQiang Su authored
gcc/testsuite:
    * gcc.c-torture/compile/mipscop-1.c: Include stdio.h.
    * gcc.c-torture/compile/mipscop-2.c: Ditto.
    * gcc.c-torture/compile/mipscop-3.c: Ditto.
    * gcc.c-torture/compile/mipscop-4.c: Ditto.
-
YunQiang Su authored
This match pattern allows combining (zero_extract:DI 8, 24, QI) with a
sign-extend into a 32-bit INS instruction on TARGET_64BIT. For SImode,
if the sign bit is modified by bitops, we will need a sign-extend
operation. The 32-bit INS instruction guarantees that its result is
sign-extended, and the QImode src register is safe for INS, too.

    (insn 19 18 20 2 (set (zero_extract:DI (reg/v:DI 200 [ val ])
                    (const_int 8 [0x8])
                    (const_int 24 [0x18]))
            (subreg:DI (reg:QI 205) 0)) "../xx.c":7:29 -1
         (nil))
    (insn 20 19 23 2 (set (reg/v:DI 200 [ val ])
            (sign_extend:DI (subreg:SI (reg/v:DI 200 [ val ]) 0))) "../xx.c":7:29 -1
         (nil))

Combine tries to merge them into:

    (insn 20 19 23 2 (set (reg/v:DI 200 [ val ])
            (sign_extend:DI (ior:SI (and:SI (subreg:SI (reg/v:DI 200 [ val ]) 0)
                        (const_int 16777215 [0xffffff]))
                    (ashift:SI (subreg:SI (reg:QI 205 [ MEM[(const unsigned char *)buf_8(D) + 3B] ]) 0)
                        (const_int 24 [0x18]))))) "../xx.c":7:29 18 {*insv_extended}
         (expr_list:REG_DEAD (reg:QI 205 [ MEM[(const unsigned char *)buf_8(D) + 3B] ])
            (nil)))

And do similarly for the 16/16 pair:

    (insn 13 12 14 2 (set (zero_extract:DI (reg/v:DI 198 [ val ])
                    (const_int 16 [0x10])
                    (const_int 16 [0x10]))
            (subreg:DI (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ]) 0)) "xx.c":5:30 286 {*insvdi}
         (expr_list:REG_DEAD (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ])
            (nil)))
    (insn 14 13 17 2 (set (reg/v:DI 198 [ val ])
            (sign_extend:DI (subreg:SI (reg/v:DI 198 [ val ]) 0))) "xx.c":5:30 241 {extendsidi2}
         (nil))

    ------------>

    (insn 14 13 17 2 (set (reg/v:DI 198 [ val ])
            (sign_extend:DI (ior:SI (ashift:SI (subreg:SI (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ]) 0)
                        (const_int 16 [0x10]))
                    (zero_extend:SI (subreg:HI (reg/v:DI 198 [ val ]) 0))))) "xx.c":5:30 284 {*inshisi_extended}
         (expr_list:REG_DEAD (reg:HI 201 [ MEM[(const short unsigned int *)buf_6(D) + 2B] ])
            (nil)))

Let's accept these patterns, and set the cost to 1 instruction.

gcc:
    PR rtl-optimization/104914
    * config/mips/mips.md (insqisi_extended): New patterns.
    (inshisi_extended): Ditto.

gcc/testsuite:
    * gcc.target/mips/pr104914.c: New test.
-
YunQiang Su authored
When combining some instructions, the generic `rtx_cost` may
overestimate the cost of the resulting RTL, because the RTL may be
quite complex and `rtx_cost` has no information that this RTL can be
converted to one or more simple hardware instructions. In this case,
let's use `insn_count * perf_ratio` to estimate the cost if both of
them are available; otherwise fall back to pattern_cost. When
optimizing for size rather than speed, use the instruction length as
the cost.

gcc:
    * config/mips/mips.cc (mips_insn_cost): New function.

gcc/testsuite:
    * gcc.target/mips/data-sym-multi-pool.c: Skip -Os or -O0.
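A simplified reconstruction of that costing rule (not the verbatim
mips.cc code; the get_attr_* accessors are assumed to be generated from
the "insn_count" and "perf_ratio" insn attributes):

    static int
    mips_insn_cost (rtx_insn *insn, bool speed)
    {
      /* When optimizing for size, the byte length is the cost.  */
      if (!speed)
        return get_attr_length (insn);

      /* perf_ratio defaults to 0, so a positive value means the pattern
         carries an explicit estimate: insn_count * perf_ratio.  */
      int count = get_attr_insn_count (insn);
      int ratio = get_attr_perf_ratio (insn);
      if (count > 0 && ratio > 0)
        return COSTS_N_INSNS (count * ratio);

      /* Otherwise fall back to the generic pattern cost.  */
      return pattern_cost (PATTERN (insn), speed);
    }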
-
YunQiang Su authored
The accurate cost of a pattern can be obtained as
insn_count * perf_ratio. The default value is set to 0 instead of 1,
since we need to distinguish the default value from a value that has
really been set for a pattern. Since it is not yet set for most
patterns, users must check that its value is greater than 0 before
using it. This attr will be used in `mips_insn_cost`.

gcc:
    * config/mips/mips.md (perf_ratio): New attribute.
-
Juzhe-Zhong authored
As in PR113206 and PR113209, the bug happens in the following
situation:

        li      a4,32
        ...
        vsetvli zero,a4,e8,m8,ta,ma
        ...
        slliw   a4,a3,24
        sraiw   a4,a4,24
        bge     a3,a1,.L8
        sb      a4,%lo(e)(a0)
        vsetvli zero,a4,e8,m8,ta,ma  --> a4 is a polluted value, not the expected "32".
        ...
    .L7:
        j       .L7                  --> infinite loop.

The root cause is that the infinite loop confuses the earliest
computation and lets earliest fusion happen in an unexpected place.

Disable blocks that belong to an infinite loop to fix this bug, since
applying earliest LCM fusion to an infinite loop seems quite
complicated and we don't see any benefit.

Note that disabling earliest fusion on infinite loops doesn't hurt
vsetvli performance; instead, it improves codegen in some cases.

Tested on both RV32 and RV64, no regression.

    PR target/113206
    PR target/113209

gcc/ChangeLog:
    * config/riscv/riscv-vsetvl.cc (invalid_opt_bb_p): New function.
    (pre_vsetvl::compute_lcm_local_properties): Disable earliest
    fusion on blocks belonging to an infinite loop.
    (pre_vsetvl::emit_vsetvl): Remove fake edges.
    * config/riscv/t-riscv: Add a new include file.

gcc/testsuite/ChangeLog:
    * gcc.target/riscv/rvv/vsetvl/avl_single-23.c: Adapt test.
    * gcc.target/riscv/rvv/vsetvl/vlmax_call-1.c: Robustify test.
    * gcc.target/riscv/rvv/vsetvl/vlmax_call-2.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_call-3.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_conflict-5.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-1.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-2.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-3.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-4.c: Ditto.
    * gcc.target/riscv/rvv/vsetvl/vlmax_single_vtype-5.c: Ditto.
    * gcc.target/riscv/rvv/autovec/pr113206-1.c: New test.
    * gcc.target/riscv/rvv/autovec/pr113206-2.c: New test.
    * gcc.target/riscv/rvv/autovec/pr113209.c: New test.
-
Juzhe-Zhong authored
Fix the indentation of some code to make it 8-space aligned.

Committed.

gcc/ChangeLog:
    * config/riscv/vector.md: Fix indent.
-
GCC Administrator authored
-
- Jan 03, 2024
-
Patrick Palka authored
When computing a direct reference binding via a conversion function
yields a bad conversion, reference_binding incorrectly commits to that
conversion instead of trying a conversion via a temporary. This causes
us to reject the first testcase because the bad direct conversion to
B&& via the && conversion operator prevents us from considering the
good conversion via the & conversion operator and a temporary.
(Similar story for the second testcase.)

This patch fixes this by making reference_binding not prematurely
commit to such a bad direct conversion. We still fall back to it if
using a temporary also fails (otherwise the diagnostic for
cpp0x/explicit7.C regresses).

    PR c++/113064

gcc/cp/ChangeLog:
    * call.cc (reference_binding): Still try a conversion via a
    temporary if a direct conversion was bad.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp0x/rv-conv4.C: New test.
    * g++.dg/cpp0x/rv-conv5.C: New test.
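A hypothetical reduction in the spirit of the new tests (the actual
rv-conv4.C/rv-conv5.C contents may differ):

    struct B { };

    struct A {
      operator B&&() &&;  // for an lvalue source, this direct binding is only a *bad* conversion
      operator B() &;     // good: convert to a B temporary, then bind B&& to it
    };

    void f ()
    {
      A a;
      B&& r = a;  // was rejected; accepted once the bad direct binding no longer wins
    }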
-
Harald Anlauf authored
gcc/fortran/ChangeLog:
    * trans-types.cc (gfc_get_nodesc_array_type): Clear used gmp
    variables.
-
Kwok Cheung Yeung authored
Move OMP_CLAUSE_INDIRECT so that it is outside of the range checked by
OMP_CLAUSE_SIZE and OMP_CLAUSE_DECL.

2024-01-03  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/c/
    * c-parser.cc (c_parser_omp_clause_name): Move handling of
    indirect clause to correspond to alphabetical order.

gcc/cp/
    * parser.cc (cp_parser_omp_clause_name): Move handling of
    indirect clause to correspond to alphabetical order.

gcc/
    * tree-core.h (enum omp_clause_code): Move OMP_CLAUSE_INDIRECT
    to before OMP_CLAUSE__SIMDUID_.
    * tree.cc (omp_clause_num_ops): Update position of entry for
    OMP_CLAUSE_INDIRECT to correspond with omp_clause_code.
    (omp_clause_code_name): Likewise.
-
Kwok Cheung Yeung authored
This restructures the code generating FUNC_MAP and IND_FUNC_MAP labels
in the assembly code for mkoffload to consume, hopefully making it a
bit clearer and easier to search for.

2024-01-03  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
    * config/nvptx/nvptx.cc (nvptx_record_offload_symbol):
    Restructure printing of FUNC_MAP/IND_FUNC_MAP labels.
-
Jakub Jelinek authored
-
Jakub Jelinek authored
update-copyright.py --this-year FAILs on two spots in the modula2
directories.

One is gpl_v3_without_node.texi; I think that is similar to other
license files which we already exclude from updates. And the other is
GmcOptions.cc, which has lines like

    mcPrintf_printf0 ((const char *) "Copyright ", 10);
    mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
    mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));

which update-copyright.py obviously can't grok. The file is generated
and doesn't contain a normal copyright year that should be updated, so
I think it is also OK to skip it.

2024-01-03  Jakub Jelinek  <jakub@redhat.com>

    * update-copyright.py (GenericFilter): Skip
    gpl_v3_without_node.texi.
    (GCCFilter): Skip GmcOptions.cc.
-
Jakub Jelinek authored
Manual part of copyright year updates.

2024-01-03  Jakub Jelinek  <jakub@redhat.com>

gcc/
    * gcc.cc (process_command): Update copyright notice dates.
    * gcov-dump.cc (print_version): Ditto.
    * gcov.cc (print_version): Ditto.
    * gcov-tool.cc (print_version): Ditto.
    * gengtype.cc (create_file): Ditto.
    * doc/cpp.texi: Bump @copying's copyright year.
    * doc/cppinternals.texi: Ditto.
    * doc/gcc.texi: Ditto.
    * doc/gccint.texi: Ditto.
    * doc/gcov.texi: Ditto.
    * doc/install.texi: Ditto.
    * doc/invoke.texi: Ditto.

gcc/ada/
    * gnat_ugn.texi: Bump @copying's copyright year.
    * gnat_rm.texi: Likewise.

gcc/d/
    * gdc.texi: Bump @copyrights-d year.

gcc/fortran/
    * gfortranspec.cc (lang_specific_driver): Update copyright notice
    dates.
    * gfc-internals.texi: Bump @copying's copyright year.
    * gfortran.texi: Ditto.
    * intrinsic.texi: Ditto.
    * invoke.texi: Ditto.

gcc/go/
    * gccgo.texi: Bump @copyrights-go year.

libgomp/
    * libgomp.texi: Bump @copying's copyright year.

libitm/
    * libitm.texi: Bump @copying's copyright year.

libquadmath/
    * libquadmath.texi: Bump @copying's copyright year.
-
Jakub Jelinek authored
2023 -> 2024
-
Jakub Jelinek authored
Rotate ChangeLog files for ChangeLogs with yearly cadence.
-
Xi Ruoyao authored
We already had smin/smax RTL patterns using the vfmin/vfmax
instructions. But for smin/smax it's unspecified what happens if either
operand is a NaN, so we would not vectorize the loop with
-fno-finite-math-only (the default for all optimization levels except
-Ofast). However, the LoongArch vfmin/vfmax instructions are
IEEE-754-2008 conformant, so we can also use them for fmin/fmax and
vectorize the loop.

gcc/ChangeLog:
    * config/loongarch/simd.md (fmax<mode>3): New define_insn.
    (fmin<mode>3): Likewise.
    (reduc_fmax_scal_<mode>3): New define_expand.
    (reduc_fmin_scal_<mode>3): Likewise.

gcc/testsuite/ChangeLog:
    * gcc.target/loongarch/vfmax-vfmin.c: New test.
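For instance, a loop of this shape (a sketch, not necessarily the new
vfmax-vfmin.c test) can now be vectorized without -ffinite-math-only,
because __builtin_fmax has the IEEE NaN semantics that the fmax<mode>3
pattern provides:

    void
    vfmax_loop (double *restrict r, const double *restrict a,
                const double *restrict b, int n)
    {
      for (int i = 0; i < n; i++)
        r[i] = __builtin_fmax (a[i], b[i]);  /* NaN-aware max, maps to vfmax */
    }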
-
Juzhe-Zhong authored
This patch fixes the following situation:

    vl4re16.v   v12,0(a5)
    ...
    vl4re16.v   v16,0(a3)
    vs4r.v      v12,0(a5)
    ...
    vl4re16.v   v4,0(a0)
    vs4r.v      v16,0(a3)
    ...
    vsetvli     a3,zero,e16,m4,ta,ma
    ...
    vmv.v.x     v8,t6
    vmsgeu.vv   v2,v16,v8
    vsub.vv     v16,v16,v8
    vs4r.v      v16,0(a5)
    ...
    vs4r.v      v4,0(a0)
    vmsgeu.vv   v1,v4,v8
    ...
    vsub.vv     v4,v4,v8
    slli        a6,a4,2
    vs4r.v      v4,0(a5)
    ...
    vsub.vv     v4,v12,v8
    vmsgeu.vv   v3,v12,v8
    vs4r.v      v4,0(a5)
    ...

There are many spills ('vs4r.v'). The root cause is that we don't count
vector REG liveness referencing the rgroup controls.

_29 = _25->iatom[0]; is transformed into the following vect statements
with 4 different loop_lens (loop_len_74, loop_len_75, loop_len_76,
loop_len_77):

    vect__29.11_78 = .MASK_LEN_LOAD (vectp_sb.9_72, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_74, 0);
    vect__29.12_80 = .MASK_LEN_LOAD (vectp_sb.9_79, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_75, 0);
    vect__29.13_82 = .MASK_LEN_LOAD (vectp_sb.9_81, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_76, 0);
    vect__29.14_84 = .MASK_LEN_LOAD (vectp_sb.9_83, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_77, 0);

where 4 is the number of lens (LOOP_VINFO_LENS (loop_vinfo).length ()).
Count liveness according to LOOP_VINFO_LENS (loop_vinfo).length () to
compute liveness more accurately:

    vsetivli    zero,8,e16,m1,ta,ma
    vmsgeu.vi   v19,v14,8
    vadd.vi     v18,v14,-8
    vmsgeu.vi   v17,v1,8
    vadd.vi     v16,v1,-8
    vlm.v       v15,0(a5)
    ...

Tested, no regression. Ok for trunk?

    PR target/113112

gcc/ChangeLog:
    * config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode):
    Add rgroup info.
    (max_number_of_live_regs): Ditto.
    (has_unexpected_spills_p): Ditto.

gcc/testsuite/ChangeLog:
    * gcc.dg/vect/costmodel/riscv/rvv/pr113112-5.c: New test.
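In essence the estimate scales with the number of rgroup controls; a
simplified sketch with hypothetical variable names (the actual
bookkeeping in riscv-vector-costs.cc is more involved):

    /* Each length-controlled rgroup keeps its own live copy of the
       vectorized value, so scale the per-mode register count by the
       number of lens.  */
    unsigned int nlens = LOOP_VINFO_LENS (loop_vinfo).length ();
    unsigned int nregs = nregs_for_mode * MAX (1U, nlens);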
-
Patrick Palka authored
The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
inadvertently increased the execution time of this test by over 5x due
to making the two main loops actually run in the signed_p case instead
of being dead code.

To compensate, this patch cuts the relevant loops' range [-1000,1000]
by 10x as proposed in the PR. This shouldn't significantly weaken the
test since the same important edge cases are still checked in the
smaller range and/or elsewhere. On my machine this reduces the test's
execution time by roughly 10x (and 1.6x relative to before r14-205).

    PR testsuite/113175

libstdc++-v3/ChangeLog:
    * testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
    'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
    (test03): Likewise.
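Per the ChangeLog, the adjustment presumably has this shape (a sketch,
not the verbatim test source):

    // Cut the tested range by 10x; shrink the log2 bound to match.
    constexpr int limit = 100;     // was 1000
    constexpr int log2_limit = 7;  // was 10, since 2^7 = 128 >= 100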
-
GCC Administrator authored
-
- Jan 02, 2024
-
Jun Sha (Joshua) authored
This patch replaces csr_operand with vector_length_operand in the
vsetvl patterns. This allows future changes in the vector code (i.e. in
the vector_length_operand predicate) without affecting scalar patterns
that use the csr_operand predicate.

gcc/ChangeLog:
    * config/riscv/vector.md: Use vector_length_operand for vsetvl
    patterns.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
-
Andreas Schwab authored
libsanitizer:
    * configure.tgt (riscv64-*-linux*): Enable LSan and TSan.
-
Szabolcs Nagy authored
With the new glibc, one more loop can be vectorized via simd exp in
libmvec. Found by the Linaro TCWG CI.

gcc/testsuite/ChangeLog:
    * gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.
-
Juzhe-Zhong authored
In
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx patterns with dummy len and dummy mask
using an overly simple solution, which causes redundant vsetvli in the
following case:

    vsetvli     a5,a2,e8,m1,ta,ma
    vle32.v     v8,0(a0)
    vsetivli    zero,16,e32,m4,tu,mu  ----> We should apply VLMAX instead of a CONST_INT AVL
    slli        a4,a5,2
    vand.vv     v0,v8,v16
    vand.vv     v4,v8,v12
    vmseq.vi    v0,v0,0
    sub         a2,a2,a5
    vneg.v      v4,v8,v0.t
    vsetvli     zero,a5,e32,m4,ta,ma

The root cause is the following code:

    is_vlmax_len_p (...)
      return poly_int_rtx_p (len, &value)
             && known_eq (value, GET_MODE_NUNITS (mode))
             && !satisfies_constraint_K (len); ---> incorrect check.

Actually, we should not elide the VLMAX situation whose AVL is in the
range [0,31]. After removing the check above, we have the following
issue:

    vsetivli    zero,4,e32,m1,ta,ma
    vlseg4e32.v v4,(a5)
    vlseg4e32.v v12,(a3)
    vsetvli     a5,zero,e32,m1,tu,ma ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
    vfadd.vf    v3,v13,fa0
    vfadd.vf    v1,v12,fa1
    vfmul.vv    v17,v3,v5
    vfmul.vv    v16,v1,v5

Since all the following operations (vfadd.vf, etc.) are COND_LEN_xxx
with dummy len and dummy mask, we add the simplification of dummy len
and dummy mask into the VLMAX TA and MA policy.

So, after this patch, both cases have optimal codegen now:

case 1:
    vsetvli     a5,a2,e32,m1,ta,mu
    vle32.v     v2,0(a0)
    slli        a4,a5,2
    vand.vv     v1,v2,v3
    vand.vv     v0,v2,v4
    sub         a2,a2,a5
    vmseq.vi    v0,v0,0
    vneg.v      v1,v2,v0.t
    vse32.v     v1,0(a1)

case 2:
    vsetivli    zero,4,e32,m1,tu,ma
    addi        a4,a5,400
    vlseg4e32.v v12,(a3)
    vfadd.vf    v3,v13,fa0
    vfadd.vf    v1,v12,fa1
    vlseg4e32.v v4,(a4)
    vfadd.vf    v2,v14,fa1
    vfmul.vv    v17,v3,v5
    vfmul.vv    v16,v1,v5

This patch is just an additional fix to a previously approved patch.
Tested on both RV32 and RV64 with newlib, no regression. Committed.

gcc/ChangeLog:
    * config/riscv/riscv-v.cc (is_vlmax_len_p): Remove
    satisfies_constraint_K.
    (expand_cond_len_op): Add simplification of dummy len and dummy
    mask.

gcc/testsuite/ChangeLog:
    * gcc.target/riscv/rvv/base/vf_avl-3.c: New test.
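The corrected predicate then reduces to something like this (a sketch
consistent with the description above; the exact riscv-v.cc signature
is an assumption):

    /* LEN is a VLMAX AVL whenever it statically equals the number of
       units in MODE, even when that value also fits the 'K' constraint
       (a small constant in [0, 31]).  */
    static bool
    is_vlmax_len_p (machine_mode mode, rtx len)
    {
      poly_int64 value;
      return poly_int_rtx_p (len, &value)
             && known_eq (value, GET_MODE_NUNITS (mode));
    }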
-
Di Zhao authored
This patch adds a new tuning option,
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully pipelined
FMAs in reassociation. Also, set this option by default for Ampere
CPUs.

gcc/ChangeLog:
    * config/aarch64/aarch64-tuning-flags.def
    (AARCH64_EXTRA_TUNING_OPTION): New tuning option
    AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
    * config/aarch64/aarch64.cc (aarch64_override_options_internal):
    Set param_fully_pipelined_fma according to tuning option.
    * config/aarch64/tuning_models/ampere1.h: Add
    AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
    * config/aarch64/tuning_models/ampere1a.h: Likewise.
    * config/aarch64/tuning_models/ampere1b.h: Likewise.
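The override plausibly looks like this (a sketch assuming the generic
param is wired up in the usual SET_OPTION_IF_UNSET style, not the
verbatim aarch64.cc change):

    /* When the tuning model says FMAs are fully pipelined, tell the
       reassociation pass via the generic param, unless the user
       already set it explicitly.  */
    if (aarch64_tune_params.extra_tuning_flags
        & AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA)
      SET_OPTION_IF_UNSET (opts, &global_options_set,
                           param_fully_pipelined_fma, 1);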
-
Feng Wang authored
gcc/ChangeLog:
    * config/riscv/vector-crypto.md: Modify copyright year.
-
Juzhe-Zhong authored
Committed.

gcc/ChangeLog:
    * config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...)
    to local.
-
Lulu Cheng authored
Check whether the assembler supports TLS LE relax. If it does, the
assembly instruction sequence of TLS LE relax is generated by default.

The original way to obtain the TLS LE symbol address:

    lu12i.w $rd, %le_hi20(sym)
    ori $rd, $rd, %le_lo12(sym)
    add.{w/d} $rd, $rd, $tp

If the assembler supports TLS LE relax, the following sequence is
generated instead:

    lu12i.w $rd, %le_hi20_r(sym)
    add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
    addi.{w/d} $rd,$rd,%le_lo12_r(sym)

gcc/ChangeLog:
    * config.in: Regenerate.
    * config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION):
    Define.
    * config/loongarch/loongarch.cc
    (loongarch_legitimize_tls_address): Add TLS LE Relax support.
    (loongarch_print_operand_reloc): Add the output string of TLS LE
    Relax.
    * config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New
    template.
    * configure: Regenerate.
    * configure.ac: Check if binutils supports TLS LE relax.

gcc/testsuite/ChangeLog:
    * lib/target-supports.exp: Add a function to check whether
    binutils supports TLS LE relax.
    * gcc.target/loongarch/tls-le-relax.c: New test.
-
Feng Wang authored
Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-authored-by: Ciyan Pan <panciyan@eswincomputing.com>

gcc/ChangeLog:
    * config/riscv/iterators.md: Add rotate insn name.
    * config/riscv/riscv.md: Add new insn names for crypto vector.
    * config/riscv/vector-iterators.md: Add new iterators for crypto
    vector.
    * config/riscv/vector.md: Add the corresponding attr for crypto
    vector.
    * config/riscv/vector-crypto.md: New file. The machine
    descriptions for crypto vector.
-