- Jan 08, 2024
-
-
Jonathan Wakely authored
This change ensures that char and wchar_t arguments are formatted consistently when using integer presentation types. This avoids non-portable std::format output that depends on whether char and wchar_t happen to be signed or unsigned on the target. Formatting '\xff' as an integer will now always format 255 and not sometimes -1. This was approved in Kona 2023 as a DR for C++20, so the change is implemented unconditionally. Also make character formatters check for _Pres_c explicitly and call _M_format_character directly. This avoids the overhead of calling format and _S_to_character and then calling _M_format_character anyway. libstdc++-v3/ChangeLog: * include/bits/version.def (format_uchar): Define. * include/bits/version.h: Regenerate. * include/std/format (formatter<C, C>::format): Check for _Pres_c and call _M_format_character directly. Cast C to its unsigned equivalent for formatting as an integer. (formatter<char, wchar_t>::format): Likewise. (basic_format_arg(T&)): Store char arguments as unsigned char for formatting to a wide string. * testsuite/std/format/functions/format.cc: Adjust test. Check formatting of
-
Feng Wang authored
This patch fixes the rtl-checking error for the crypto vector. The root cause is that the avl-type index of the zvbc instructions is wrong: it should be operand[8], not operand[5]. gcc/ChangeLog: * config/riscv/vector.md: Modify avl_type operand index of zvbc ins.
-
GCC Administrator authored
-
- Jan 07, 2024
-
-
Georg-Johann Lay authored
gcc/testsuite/ * gcc.target/avr/lra-cpymem_qi.c: Remove duplicate -mmcu=. * gcc.target/avr/lra-elim.c: Same. * gcc.target/avr/pr112830.c: Skip for Reduced Tiny. * gcc.target/avr/pr46779-1.c: Same. * gcc.target/avr/pr46779-2.c: Same. * gcc.target/avr/pr86869.c: Skip for Reduced Tiny and add -std=gnu99 for GNU-C due to address spaces. * gcc.target/avr/pr89270.c: Same. * gcc.target/avr/torture/builtins-2-flash.c: Only test address space __flash1 if we have it. * gcc.target/avr/torture/addr-space-1-1.c: Same. * gcc.target/avr/torture/addr-space-2-1.c: Same.
-
Jerry DeLisle authored
PR libgfortran/113223 libgfortran/ChangeLog: * io/write.c (namelist_write): If internal_unit precede with space. gcc/testsuite/ChangeLog: * gfortran.dg/dtio_25.f90: Update. * gfortran.dg/namelist_57.f90: Update. * gfortran.dg/namelist_65.f90: Update.
-
Roger Sayle authored
This patch improves the cost/gain calculation used during the i386 backend's SImode/DImode scalar-to-vector (STV) conversion pass. The current code handles loads and stores, but doesn't consider that converting other scalar operations with a memory destination requires an explicit load before and an explicit store after the vector equivalent. To ease the review, the significant change looks like: /* For operations on memory operands, include the overhead of explicit load and store instructions. */ if (MEM_P (dst)) igain += optimize_insn_for_size_p () ? -COSTS_N_BYTES (8) : (m * (ix86_cost->int_load[2] + ix86_cost->int_store[2]) - (ix86_cost->sse_load[sse_cost_idx] + ix86_cost->sse_store[sse_cost_idx])); however the patch itself is complicated by a change in indentation which leads to a number of lines with only whitespace changes. For architectures where integer load/store costs are the same as vector load/store costs, there should be no change without -Os/-Oz. 2024-01-07 Roger Sayle <roger@nextmovesoftware.com> Uros Bizjak <ubizjak@gmail.com> gcc/ChangeLog PR target/113231 * config/i386/i386-features.cc (compute_convert_gain): Include the overhead of explicit load and store (movd) instructions when converting non-store scalar operations with memory destinations. Various indentation whitespace fixes. gcc/testsuite/ChangeLog PR target/113231 * gcc.target/i386/pr113231.c: New test case.
-
Tamar Christina authored
This adds an implementation of the conditional branch optab for AArch32. For example: void f1 () { for (int i = 0; i < N; i++) { b[i] += a[i]; if (a[i] > 0) break; } } For 128-bit vectors we generate: vcgt.s32 q8, q9, #0 vpmax.u32 d7, d16, d17 vpmax.u32 d7, d7, d7 vmov r3, s14 @ int cmp r3, #0 and for 64-bit vectors we can omit one vpmax as we still need to compress to 32 bits. gcc/ChangeLog: * config/arm/neon.md (cbranch<mode>4): New. gcc/testsuite/ChangeLog: * gcc.dg/vect/vect-early-break_2.c: Skip Arm. * gcc.dg/vect/vect-early-break_7.c: Likewise. * gcc.dg/vect/vect-early-break_75.c: Likewise. * gcc.dg/vect/vect-early-break_77.c: Likewise. * gcc.dg/vect/vect-early-break_82.c: Likewise. * gcc.dg/vect/vect-early-break_88.c: Likewise. * lib/target-supports.exp (add_options_for_vect_early_break, check_effective_target_vect_early_break_hw, check_effective_target_vect_early_break): Support AArch32. * gcc.target/arm/vect-early-break-cbranch.c: New test.
-
Jeff Law authored
gcc/testsuite * gcc.dg/tree-ssa/phi-opt-25b.c: Remove extraneous "short".
-
Georg-Johann Lay authored
gcc/testsuite/ PR testsuite/52641 * gcc.dg/torture/pr110838.c: Use proper shift offset to get MSB or int. * gcc.dg/torture/pr112282.c: Use at least 32 bits for :20 bit-fields. * gcc.dg/tree-ssa/bitcmp-5.c: Use integral type with 32 bits or more. * gcc.dg/tree-ssa/bitcmp-6.c: Same. * gcc.dg/tree-ssa/cltz-complement-max.c: Same. * gcc.dg/tree-ssa/cltz-max.c: Same. * gcc.dg/tree-ssa/if-to-switch-8.c: Use literals that fit int. * gcc.dg/tree-ssa/if-to-switch-9.c [avr]: Set case-values-threshold=3. * gcc.dg/tree-ssa/negneg-3.c: Discriminate [not] large_double. * gcc.dg/tree-ssa/phi-opt-25b.c: Use types of correct widths for __builtin_bswapN. * gcc.dg/tree-ssa/pr55177-1.c: Same. * gcc.dg/tree-ssa/popcount-max.c: Use int32_t where required. * gcc.dg/tree-ssa/pr111583-1.c: Use intptr_t as needed. * gcc.dg/tree-ssa/pr111583-2.c: Same.
-
Georg-Johann Lay authored
gcc/testsuite/ PR testsuite/52641 * gcc.dg/memchr-3.c [avr]: Anticipate -Wbuiltin-declaration-mismatch. * gcc.dg/pr103207.c: Use __INT32_TYPE__ instead of int. * gcc.dg/pr103451.c [void* != long]: Anticipate -Wpointer-to-int-cast. * gcc.dg/pr110496.c [void* != long]: Anticipate -Wint-to-pointer-cast. * gcc.dg/pr109977.c: Use __SIZEOF_DOUBLE__ instead of 8. * gcc.dg/pr110506-2.c: Use __UINT32_TYPE__ for uint32_t. * gcc.dg/pr110582.c: Require int32plus. * gcc.dg/pr111039.c: [sizeof(int) < 4]: Use __INT32_TYPE__. * gcc.dg/pr111599.c: Same. * gcc.dg/builtin-dynamic-object-size-0.c: Require size20plus. * gcc.dg/builtin-object-size-1.c [avr]: Skip tests with strndup. * gcc.dg/builtin-object-size-2.c: Same. * gcc.dg/builtin-object-size-3.c: Same. * gcc.dg/builtin-object-size-4.c: Same. * gcc.dg/pr111070.c: Use __UINTPTR_TYPE__ instead of unsigned long. * gcc.dg/debug/btf/btf-pr106773.c: Same. * gcc.dg/debug/btf/btf-bitfields-2.c: [sizeof(int) < 4]: Use __UINT32_TYPE__.
-
Georg-Johann Lay authored
gcc/testsuite/ PR testsuite/52641 * gcc.c-torture/compile/attr-complex-method-2.c [target=avr]: Check for "divsc3" as double = float per default. * gcc.c-torture/compile/pr106537-1.c: Use __INTPTR_TYPE__ instead of hard-coded "long". * gcc.c-torture/compile/pr106537-2.c: Same. * gcc.c-torture/compile/pr106537-3.c: Same. * gcc.c-torture/execute/20230630-3.c: Use __INT32_TYPE__ for bit-field wider than 16 bits. * gcc.c-torture/execute/20230630-4.c: Same. * gcc.c-torture/execute/pr109938.c: Require int32plus. * gcc.c-torture/execute/pr109986.c: Same. * gcc.dg/fold-ior-4.c: Same. * gcc.dg/fold-ior-5.c: Same * gcc.dg/fold-parity-5.c: Same. * gcc.dg/fold-popcount-5.c: Same. * gcc.dg/builtin-bswap-13.c [sizeof(int) < 4]: Use __INT32_TYPE__ instead of int. * gcc.dg/builtin-bswap-14.c: Use __INT32_TYPE__ instead of int where required by code. * gcc.dg/c23-constexpr-9.c: Require large_double. * gcc.dg/c23-nullptr-1.c [target=avr]: xfail. * gcc.dg/loop-unswitch-10.c: Require size32plus. * gcc.dg/loop-unswitch-14.c: Same. * gcc.dg/loop-unswitch-11.c: Require int32. * gcc.dg/pr101836.c: Use __SIZEOF_INT instead of hard-coded 4. * gcc.dg/pr101836_1.c: Same. * gcc.dg/pr101836_2.c: Same. * gcc.dg/pr101836_3.c: Same.
-
Nathaniel Shead authored
The attached testcase Patrick found in PR c++/112899 ICEs because it is attempting to write a variable initializer that is no longer in the static_aggregates map. The issue is that, for non-header modules, the loop in c_parse_final_cleanups prunes the static_aggregates list, which means that by the time we get to emitting module information those initialisers have been lost. However, we don't actually need to write non-trivial initialisers for non-header modules, because they've already been emitted as part of the module TU itself. Instead let's just only write the initializers from header modules (which skipped writing them in c_parse_final_cleanups). gcc/cp/ChangeLog: * module.cc (trees_out::write_var_def): Only write initializers in header modules. gcc/testsuite/ChangeLog: * g++.dg/modules/init-5_a.C: New test. * g++.dg/modules/init-5_b.C: New test. Signed-off-by:
Nathaniel Shead <nathanieloshead@gmail.com>
-
Nathaniel Shead authored
This patch stops 'add_binding_entity' from ignoring all names in the global module fragment, since they should still be exported if named in an exported using-declaration. PR c++/109679 gcc/cp/ChangeLog: * module.cc (depset::hash::add_binding_entity): Don't skip names in the GMF if they've been exported with a using declaration. gcc/testsuite/ChangeLog: * g++.dg/modules/using-11.h: New test. * g++.dg/modules/using-11_a.C: New test. * g++.dg/modules/using-11_b.C: New test. Signed-off-by:
Nathaniel Shead <nathanieloshead@gmail.com>
-
Nathaniel Shead authored
This patch cleans up the parsing of module-declarations and import-declarations to more closely follow the grammar defined by the standard. For instance, currently we allow declarations like 'import A:B', even from an unrelated source file (not part of module A), which causes errors in merging declarations. However, the syntax in [module.import] doesn't even allow this form of import, so this patch prevents this from parsing at all and avoids the error that way. Additionally, we sometimes allow statements like 'import :X' or 'module :X' even when not in a named module, and this causes segfaults, so we disallow this too. PR c++/110808 gcc/cp/ChangeLog: * parser.cc (cp_parser_module_name): Rewrite to handle module-names and module-partitions independently. (cp_parser_module_partition): New function. (cp_parser_module_declaration): Parse module partitions explicitly. Don't change state if parsing module decl failed. (cp_parser_import_declaration): Handle different kinds of import-declarations locally. gcc/testsuite/ChangeLog: * g++.dg/modules/part-hdr-1_c.C: Fix syntax. * g++.dg/modules/part-mac-1_c.C: Likewise. * g++.dg/modules/mod-invalid-1.C: New test. * g++.dg/modules/part-8_a.C: New test. * g++.dg/modules/part-8_b.C: New test. * g++.dg/modules/part-8_c.C: New test. Signed-off-by:
Nathaniel Shead <nathanieloshead@gmail.com>
-
Juzhe-Zhong authored
Obvious fix, committed. gcc/ChangeLog: * config/riscv/riscv-vsetvl.cc: Replace std::max by MAX.
-
Jonathan Wakely authored
r14-1527-g2415024e0f81f8 changed the parameter of the __cxa_call_terminate definition, but there's also a declaration in unwind-cxx.h which should have been changed too. libstdc++-v3/ChangeLog: PR libstdc++/112997 * libsupc++/unwind-cxx.h (__cxa_call_terminate): Change first parameter to void*.
-
Jonathan Wakely authored
As the comment notes, the increased timeout was needed because of PR 102780, but that was fixed long ago. libstdc++-v3/ChangeLog: * testsuite/20_util/variant/87619.cc: Remove dg-timeout-factor.
-
Jonathan Wakely authored
This reduces the overhead of using std::is_trivially_destructible_v and as a result fixes some recent regressions seen with a non-default GLIBCXX_TESTSUITE_STDS env var: FAIL: 20_util/variant/87619.cc -std=gnu++20 (test for excess errors) FAIL: 20_util/variant/87619.cc -std=gnu++23 (test for excess errors) FAIL: 20_util/variant/87619.cc -std=gnu++26 (test for excess errors) libstdc++-v3/ChangeLog: * include/std/type_traits (is_trivially_destructible_v): Use built-in directly when concepts are supported. * testsuite/20_util/is_trivially_destructible/value_v.cc: New test.
-
GCC Administrator authored
-
- Jan 06, 2024
-
-
Gwenole Beauchesne authored
Fix testsuite when compiling with -Wformat. Use nonnull arguments so that -Wformat does not cause extraneous output to be reported as an error. FAIL: tr1/8_c_compatibility/cinttypes/functions.cc (test for excess errors) libstdc++-v3/ChangeLog: * testsuite/tr1/8_c_compatibility/cinttypes/functions.cc: Use nonnull arguments to strtoimax() and wcstoimax() functions. Signed-off-by:
Gwenole Beauchesne <gb.devel@gmail.com>
-
Harald Anlauf authored
gcc/fortran/ChangeLog: PR fortran/96724 * iresolve.cc (gfc_resolve_repeat): Force conversion to gfc_charlen_int_kind before call to gfc_multiply. gcc/testsuite/ChangeLog: PR fortran/96724 * gfortran.dg/repeat_8.f90: New test. Co-authored-by:
José Rui Faustino de Sousa <jrfsousa@gmail.com>
-
Tobias Burnus authored
Additionally, it fixes a typo and changes the OpenMP 5.2 section references (18.8.x) to OpenMP 5.1 ones (3.8.x) matching the mentioned OpenMP number. libgomp/ChangeLog: * libgomp.texi (OpenMP Technical Report 12): Fix a typo. (Device Memory Routines): Fix OpenMP 5.1 spec refs; add omp_target_is_accessible. (Environment Display Routine): Uncomment and add omp_display_env description. (OMP_DISPLAY_ENV): Update wording, add 'see also'.
-
Jiahao Xu authored
For the instruction xvpermi.q, the unused bits in operands[3] need to be set to 0 to avoid causing undefined behavior on LA464. gcc/ChangeLog: * config/loongarch/lasx.md: Set the unused bits in operand[3] to 0. gcc/testsuite/ChangeLog: * gcc.target/loongarch/vector/lasx/lasx-xvpremi.c: Removed. * gcc.target/loongarch/vector/lasx/lasx-xvpermi_q.c: New test.
-
Juzhe-Zhong authored
This patch fixes a bug in the VSETVL pass in the following situation: Ignore curr info since prev info available with it: prev_info: VALID (insn 8, bb 2) Demand fields: demand_ratio_and_ge_sew demand_avl SEW=16, VLMUL=mf4, RATIO=64, MAX_SEW=64 TAIL_POLICY=agnostic, MASK_POLICY=agnostic AVL=(const_int 1 [0x1]) VL=(nil) curr_info: VALID (insn 12, bb 2) Demand fields: demand_ge_sew demand_non_zero_avl SEW=16, VLMUL=m1, RATIO=16, MAX_SEW=32 TAIL_POLICY=agnostic, MASK_POLICY=agnostic AVL=(const_int 1 [0x1]) VL=(nil) We should update prev_info MAX_SEW from 64 into 32. Before this patch: foo: vsetivli zero,1,e64,m1,ta,ma vle64.v v1,0(a1) vmv.s.x v3,a0 vfmv.s.f v2,fa0 vadd.vv v1,v1,v1 ret After this patch: foo: vsetivli zero,1,e16,mf4,ta,ma vle64.v v1,0(a1) vmv.s.x v3,a0 vfmv.s.f v2,fa0 vsetvli zero,zero,e64,m1,ta,ma vadd.vv v1,v1,v1 ret Tested on both RV32 and RV64, no regression. Committed. PR target/113248 gcc/ChangeLog: * config/riscv/riscv-vsetvl.cc (pre_vsetvl::fuse_local_vsetvl_info): Update the MAX_SEW. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/vsetvl/pr113248.c: New test.
-
Juzhe-Zhong authored
1) We have vashl_optab, vashr_optab and vlshr_optab, which vectorize shifts with a vector shift amount, that is, vectorization of 'a[i] >> x[i]', where the shift amount is loop-variant. 2) We also have ashl_optab, ashr_optab and lshr_optab, which vectorize shifts with a scalar shift amount, that is, vectorization of 'a[i] >> x', where the shift amount is loop-invariant. For case 2), we don't need to allocate a vector register group for the shift amount. So consider this following case: void f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int x, int n) { for (int i = 0; i < n; i++) { int tmp = b[i] >> x; int tmp2 = tmp * b[i]; c[i] = tmp2 * b[i]; d[i] = tmp * tmp2 * b[i] >> x; } } Before this patch, we choose LMUL = 4; now after this patch, we can choose LMUL = 8: f: ble a5,zero,.L5 .L3: vsetvli a0,a5,e32,m8,ta,ma slli a6,a0,2 vle32.v v16,0(a1) vsra.vx v24,v16,a4 vmul.vv v8,v24,v16 vmul.vv v0,v8,v16 vse32.v v0,0(a2) vmul.vv v8,v8,v24 vmul.vv v8,v8,v16 vsra.vx v8,v8,a4 vse32.v v8,0(a3) add a1,a1,a6 add a2,a2,a6 add a3,a3,a6 sub a5,a5,a0 bne a5,zero,.L3 .L5: ret Tested on both RV32/RV64, no regression. Ok for trunk? Note that we will apply the same heuristic for vadd.vx, ... etc when the late-combine pass from Richard Sandiford is committed (since we need the late combine pass to do the vv->vx transformation for vadd). gcc/ChangeLog: * config/riscv/riscv-vector-costs.cc (loop_invariant_op_p): New function. (variable_vectorized_p): Teach loop invariant. (has_unexpected_spills_p): Ditto. gcc/testsuite/ChangeLog: * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-12.c: New test. * gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-14.c: New test.
-
Juzhe-Zhong authored
V2: Address comments from Robin. While working on fixing a bug, I noticed that the following code has a redundant move: #include "riscv_vector.h" void f (float x, float y, void *out) { float f[4] = { x, x, x, y }; vfloat32m1_t v = __riscv_vle32_v_f32m1 (f, 4); __riscv_vse32_v_f32m1 (out, v, 4); } Before this patch: f: vsetivli zero,4,e32,m1,ta,ma addi sp,sp,-16 vfmv.v.f v1,fa0 vfslide1down.vf v1,v1,fa1 vmv.v.v v1,v1 ----> redundant move. vse32.v v1,0(a0) addi sp,sp,16 jr ra The root cause is that the complicated vmv.v.v pattern is not simplified into a simple (set (reg) (reg)) reg-to-reg move pattern. Currently, we support such simplification for VLMAX. However, the case I found is non-VLMAX but with LEN = NUNITS, which should be considered equivalent to VLMAX. Add a simple fix for this situation. Tested on both RV32/RV64, no regressions. gcc/ChangeLog: * config/riscv/riscv-protos.h (whole_reg_to_reg_move_p): New function. * config/riscv/riscv-v.cc (whole_reg_to_reg_move_p): Ditto. * config/riscv/vector.md: Allow non-vlmax with len = NUNITS simplification. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/base/vf_avl-4.c: New test.
-
Mark Wielaard authored
commit a945c346 updated libgomp/plugin/configfrag.ac but didn't regenerate/update libgomp/configure, which includes that configfrag. libgomp/ChangeLog: * configure: Regenerate.
-
GCC Administrator authored
-
- Jan 05, 2024
-
-
Gaius Mulley authored
A missing bool/int conversion was detected for the global variable 'initialized'. The majority of ints were replaced by bool but this one was missed. libgm2/ChangeLog: * libm2iso/RTco.cc (initialized): Use bool instead of int. Signed-off-by:
Gaius Mulley <gaiusmod2@gmail.com>
-
Richard Sandiford authored
When SVE is enabled, we try vectorising with multiple different SVE and Advanced SIMD approaches and use the cost model to pick the best one. Until now, we've not done that for Advanced SIMD, since "the first mode that works should always be the best". The testcase is a counterexample. Each iteration of the scalar loop vectorises naturally with 64-bit input vectors and 128-bit output vectors. We do try that for SVE, and choose it as the best approach. But the first approach we try is instead to use: - a vectorisation factor of 2 - 1 128-bit vector for the inputs - 2 128-bit vectors for the outputs But since the stride is variable, the cost of marshalling the input vector from two iterations outweighs the benefit of doing two iterations at once. This patch therefore generalises aarch64-sve-compare-costs to aarch64-vect-compare-costs and applies it to non-SVE compilations. gcc/ PR target/113104 * doc/invoke.texi (aarch64-sve-compare-costs): Replace with... (aarch64-vect-compare-costs): ...this. * config/aarch64/aarch64.opt (-param=aarch64-sve-compare-costs=): Replace with... (-param=aarch64-vect-compare-costs=): ...this new param. * config/aarch64/aarch64.cc (aarch64_override_options_internal): Don't disable it when vectorizing for Advanced SIMD only. (aarch64_autovectorize_vector_modes): Apply VECT_COMPARE_COSTS whenever aarch64_vect_compare_costs is true. gcc/testsuite/ PR target/113104 * gcc.target/aarch64/pr113104.c: New test. * gcc.target/aarch64/sve/cond_arith_1.c: Update for new parameter names. * gcc.target/aarch64/sve/cond_arith_1_run.c: Likewise. * gcc.target/aarch64/sve/cond_arith_3.c: Likewise. * gcc.target/aarch64/sve/cond_arith_3_run.c: Likewise. * gcc.target/aarch64/sve/gather_load_6.c: Likewise. * gcc.target/aarch64/sve/gather_load_7.c: Likewise. * gcc.target/aarch64/sve/load_const_offset_2.c: Likewise. * gcc.target/aarch64/sve/load_const_offset_3.c: Likewise. * gcc.target/aarch64/sve/mask_gather_load_6.c: Likewise. 
* gcc.target/aarch64/sve/mask_gather_load_7.c: Likewise. * gcc.target/aarch64/sve/mask_load_slp_1.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_1.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_2.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_3.c: Likewise. * gcc.target/aarch64/sve/mask_struct_load_4.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_1.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_1_run.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_2.c: Likewise. * gcc.target/aarch64/sve/mask_struct_store_2_run.c: Likewise. * gcc.target/aarch64/sve/pack_1.c: Likewise. * gcc.target/aarch64/sve/reduc_4.c: Likewise. * gcc.target/aarch64/sve/scatter_store_6.c: Likewise. * gcc.target/aarch64/sve/scatter_store_7.c: Likewise. * gcc.target/aarch64/sve/strided_load_3.c: Likewise. * gcc.target/aarch64/sve/strided_store_3.c: Likewise. * gcc.target/aarch64/sve/unpack_fcvt_signed_1.c: Likewise. * gcc.target/aarch64/sve/unpack_fcvt_unsigned_1.c: Likewise. * gcc.target/aarch64/sve/unpack_signed_1.c: Likewise. * gcc.target/aarch64/sve/unpack_unsigned_1.c: Likewise. * gcc.target/aarch64/sve/unpack_unsigned_1_run.c: Likewise. * gcc.target/aarch64/sve/vcond_11.c: Likewise. * gcc.target/aarch64/sve/vcond_11_run.c: Likewise.
-
Jonathan Wakely authored
This prevents a std::filesystem::path from exceeding INT_MAX/4 components (which is unlikely to ever be a problem except on 16-bit targets). That limit ensures that the capacity*1.5 calculation doesn't overflow. We should also check that we don't exceed SIZE_MAX when calculating how many bytes to allocate. That only needs to be checked when int is at least as large as size_t, because otherwise we know that the product INT_MAX/4 * sizeof(value_type) will fit in SIZE_MAX. For targets where size_t is twice as wide as int this obviously holds. For msp430-elf we have 16-bit int and 20-bit size_t, so the condition holds as long as sizeof(value_type) fits in 7 bits, which it does. We can also remove some floating-point arithmetic in operator/= which ensures exponential growth of the buffer. That's redundant because path::_List::reserve does that anyway (and does so more efficiently since the commit immediately before this one). libstdc++-v3/ChangeLog: * src/c++17/fs_path.cc (path::_List::reserve): Limit maximum size and check for overflows in arithmetic. (path::operator/=(const path&)): Remove redundant exponential growth calculation.
-
Martin Küttler authored
libstdc++-v3/ChangeLog: * src/c++17/fs_path.cc (path::_List::reserve): Avoid floating-point arithmetic.
-
Jonathan Wakely authored
The new __is_convertible built-in should only be used after checking that it's supported. libstdc++-v3/ChangeLog: PR libstdc++/113241 * include/std/type_traits (is_convertible_v): Guard use of built-in with preprocessor check.
-
Jonathan Wakely authored
These Python scripts have "*/" at the end of the license header comment blocks, presumably copy&pasted from C files. contrib/ChangeLog: * analyze_brprob.py: Remove stray text at end of comment. * analyze_brprob_spec.py: Likewise. * check-params-in-docs.py: Likewise. * check_GNU_style.py: Likewise. * check_GNU_style_lib.py: Likewise. * filter-clang-warnings.py: Likewise. * gcc-changelog/git_check_commit.py: Likewise. * gcc-changelog/git_commit.py: Likewise. * gcc-changelog/git_email.py: Likewise. * gcc-changelog/git_repository.py: Likewise. * gcc-changelog/git_update_version.py: Likewise. * gcc-changelog/test_email.py: Likewise. * gen_autofdo_event.py: Likewise. * mark_spam.py: Likewise. * unicode/gen-box-drawing-chars.py: Likewise. * unicode/gen-combining-chars.py: Likewise. * unicode/gen-printable-chars.py: Likewise. * unicode/gen_wcwidth.py: Likewise.
-
Jonathan Wakely authored
contrib/ChangeLog: * unicode/gen_wcwidth.py: Add sys.argv[0] to usage error.
-
Lulu Cheng authored
LoongArch: Fixed the incorrect judgment of the immediate field of the [x]vld/[x]vst instruction. The [x]vld/[x]vst instruction is defined as follows: [x]vld/[x]vst {x/v}d, rj, si12 Previously, the immediate field of [x]vld/[x]vst was treated as being between 10 and 14 bits depending on the type. However, in loongarch_valid_offset_p, the immediate field is restricted first, so there is no error; in some cases, though, redundant instructions are generated, see the test cases. Now modify it according to the description in the instruction manual. gcc/ChangeLog: * config/loongarch/lasx.md (lasx_mxld_<lasxfmt_f>): Modify the method of determining the memory offset of [x]vld/[x]vst. (lasx_mxst_<lasxfmt_f>): Likewise. * config/loongarch/loongarch.cc (loongarch_valid_offset_p): Delete. (loongarch_address_insns): Likewise. * config/loongarch/lsx.md (lsx_ld_<lsxfmt_f>): Likewise. (lsx_st_<lsxfmt_f>): Likewise. * config/loongarch/predicates.md (aq10b_operand): Likewise. (aq10h_operand): Likewise. (aq10w_operand): Likewise. (aq10d_operand): Likewise. gcc/testsuite/ChangeLog: * gcc.target/loongarch/vect-ld-st-imm12.c: New test.
-
chenxiaolong authored
On the LoongArch architecture, the above four test cases need to be skipped during testing. There are two situations: 1. The fma-{3,6}.c tests compute the value of c - a*b, but on the LoongArch architecture the existing fnmsub instruction computes the value of -(a*b - c); 2. The fma-{4,7}.c tests compute the value of -(a*b) - c, but on the LoongArch architecture the existing fnmadd instruction computes the value of -(a*b + c). In both of these cases the results can differ in the sign of zero. gcc/testsuite/ChangeLog * gcc.dg/fma-3.c: The intermediate file corresponding to the function does not produce the corresponding FNMA symbol, so the test rules should be skipped when testing. * gcc.dg/fma-4.c: The intermediate file corresponding to the function does not produce the corresponding FNMS symbol, so skip the test rules when testing. * gcc.dg/fma-6.c: The cause is the same as fma-3.c. * gcc.dg/fma-7.c: The cause is the same as fma-4.c
-
chenxiaolong authored
On the LoongArch architecture, the 128-bit vector-width-*hi* instruction templates are not added to the GCC back end because they cause a performance loss, so we can only add the "-mlasx" compilation option to use the 256-bit vectorization functions in these test files. gcc/testsuite/ChangeLog: * gcc.dg/vect/bb-slp-pattern-1.c: If you are testing on the LoongArch architecture, you need to add the "-mlasx" compilation option to generate vectorized code. * gcc.dg/vect/slp-widen-mult-half.c: Dito. * gcc.dg/vect/vect-widen-mult-const-s16.c: Dito. * gcc.dg/vect/vect-widen-mult-const-u16.c: Dito. * gcc.dg/vect/vect-widen-mult-half-u8.c: Dito. * gcc.dg/vect/vect-widen-mult-half.c: Dito. * gcc.dg/vect/vect-widen-mult-u16.c: Dito. * gcc.dg/vect/vect-widen-mult-u8-s16-s32.c: Dito. * gcc.dg/vect/vect-widen-mult-u8-u32.c: Dito. * gcc.dg/vect/vect-widen-mult-u8.c: Dito.
-
chenxiaolong authored
When binutils does not support the vector instruction sets, the test program fails because the vectorized code is not recognized at the assembly stage. Therefore, the default run behavior of the program is deleted, so that the behavior of the program depends on whether the software supports vectorization. gcc/testsuite/ChangeLog: * gfortran.dg/vect/pr60510.f: Delete the default behavior of the program.
-
chenxiaolong authored
On the LoongArch architecture, the bind_c_array_params_2.f90 test failed because the regular expression checking for the function-call assembly, such as bl %plt(myBindC), did not match the code actually generated. gcc/testsuite/ChangeLog: * gfortran.dg/bind_c_array_params_2.f90: Add code test rules to support testing of the LoongArch architecture.
-