- Aug 28, 2024
-
-
Richard Sandiford authored
generic_vector_cost is not currently used by any SVE target by default; it has to be specifically selected by -mtune=generic. Its SVE costing has historically been somewhat idealised, since it predated any actual SVE cores. This seems like a useful tradition to continue, at least for testing purposes. The ideal case is that gathers and scatters do not induce a specific one-off overhead. This patch therefore sets the gather/scatter init costs to zero. This patch is necessary to switch -mtune=generic over to the "new" vector costs. gcc/ * config/aarch64/tuning_models/generic.h (generic_sve_vector_cost): Set gather_load_x32_init_cost and gather_load_x64_init_cost to 0.
-
Richard Sandiford authored
The SVE gather and scatter costs are classified based on whether they do 4 loads per 128 bits (x32) or 2 loads per 128 bits (x64). The number after the "x" refers to the number of bits in each "container". However, the test for which to use was based on the element size rather than the container size. This meant that we'd use the overly conservative x32 costs for VNx2SI gathers. VNx2SI gathers are really .D gathers in which the upper half of each extension result is ignored. This patch is necessary to switch -mtune=generic over to the "new" vector costs. gcc/ * config/aarch64/aarch64.cc (aarch64_detect_vector_stmt_subtype) (aarch64_vector_costs::add_stmt_cost): Use the x64 cost rather than x32 cost for all VNx2 modes.
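As an illustration (not taken from the patch; function and variable names are hypothetical), a loop that gathers 32-bit elements through 64-bit indices is vectorized with VNx2SI gathers, i.e. .D gathers that use 64-bit containers and should therefore get the cheaper x64 costing:

/* Hypothetical example: the 64-bit indices force 64-bit containers,
   so each 128 bits of vector needs only two loads (x64), even though
   the loaded elements are 32 bits wide.  */
void
gather_s32 (int *restrict dst, const int *restrict base,
            const long *restrict idx, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = base[idx[i]];
}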
-
Richard Sandiford authored
g:8d6c6fbc improved the code generated for functions like: void test_s8 (int8x8x2_t *ptr) { *ptr = (int8x8x2_t) {}; } Previously we would load zero from the constant pool, whereas now we just use "stp xzr, xzr". This patch adds a test for this improvement. gcc/testsuite/ * gcc.target/aarch64/struct_zero.c: New test.
-
Richard Sandiford authored
The documentation of ASM_INPUT_P implied that the flag has no effect on ASM_EXPRs that have operands (and which therefore must be extended asms). In fact we require ASM_INPUT_P to be false for all extended asms. gcc/ * tree.h (ASM_INPUT_P): Fix documentation.
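For reference, a minimal sketch (not from the patch) of the two asm forms the flag distinguishes:

void
example (int x)
{
  asm ("nop");           /* basic asm, no operands: ASM_INPUT_P is true */
  asm ("" : "+r" (x));   /* extended asm, has operands: ASM_INPUT_P must be false */
}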
-
Francois-Xavier Coudert authored
libquadmath/ChangeLog: * libquadmath.texi (M_LOG2Eq, M_LOG10Eq, M_1_PIq, M_2_PIq, M_2_SQRTPIq, M_SQRT1_2q): Adjust description of these constants.
-
Filip Kastl authored
The gen_pow2p function generates (a & -a) == a as a fallback for POPCOUNT (a) == 1. Not only is the bitmagic not equivalent to POPCOUNT (a) == 1, it also introduces UB (consider signed a = INT_MIN). This patch rewrites gen_pow2p to always use __builtin_popcount instead. This means that the final GIMPLE code is decided by already existing machinery in a later pass, which is a cleaner solution. This existing machinery also uses a ^ (a - 1) > a - 1, which is the correct bitmagic. While rewriting gen_pow2p I had to add logic for converting the operand's type to a type that __builtin_popcount accepts, and I naturally also added this logic to gen_log2. Thanks to this, the exponential index transform gains the capability to handle all operand types with precision at most that of long long int. gcc/ChangeLog: PR tree-optimization/116355 * tree-switch-conversion.cc (can_log2): Add capability to suggest converting the operand to a different type. (gen_log2): Add capability to generate a conversion in case the operand is of a type incompatible with the logarithm operation. (can_pow2p): New function. (gen_pow2p): Rewrite to use __builtin_popcount instead of manually inserting an internal fn call or bitmagic. Also add capability to generate a conversion. (switch_conversion::is_exp_index_transform_viable): Call can_pow2p. Store types suggested by can_log2 and gen_log2. (switch_conversion::exp_index_transform): Params of gen_pow2p and gen_log2 changed so update their calls. * tree-switch-conversion.h: Add m_exp_index_transform_log2_type and m_exp_index_transform_pow2p_type to switch_conversion class to track type conversions needed to generate the "is power of 2" and logarithm operations. gcc/testsuite/ChangeLog: PR tree-optimization/116355 * gcc.target/i386/switch-exp-transform-1.c: Don't test for presence of POPCOUNT internal fn after switch conversion. Test for it after __builtin_popcount has had a chance to get expanded. * gcc.target/i386/switch-exp-transform-3.c: Also test char and short. Signed-off-by:
Filip Kastl <fkastl@suse.cz>
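A minimal sketch of the difference (not part of the patch):

int
pow2p_bitmagic (int a)
{
  /* Wrong for a == 0 (returns true even though 0 is not a power of 2)
     and undefined for a == INT_MIN, where -a overflows.  */
  return (a & -a) == a;
}

int
pow2p_popcount (unsigned int a)
{
  /* What gen_pow2p now emits conceptually; a later pass may expand this
     back to the correct bitmagic a ^ (a - 1) > a - 1.  */
  return __builtin_popcount (a) == 1;
}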
-
Jason Merrill authored
-Wsign-compare complained about these comparisons between (unsigned) size_t and (signed) streamsize, or between (unsigned) native_handle_type and (signed) -1. Fixed by adding casts to unify the types. libstdc++-v3/ChangeLog: * include/std/istream: Add cast to avoid -Wsign-compare. * include/std/stacktrace: Likewise.
-
Alex Coplan authored
This extends the scan-ltrans-tree* helpers to create RTL variants. This is needed to check the behaviour of an RTL pass under LTO. gcc/ChangeLog: PR libstdc++/116140 * doc/sourcebuild.texi: Document ltrans-rtl value of kind for scan-<kind>-dump*. gcc/testsuite/ChangeLog: PR libstdc++/116140 * lib/scanltranstree.exp (scan-ltrans-rtl-dump): New. (scan-ltrans-rtl-dump-not): New. (scan-ltrans-rtl-dump-dem): New. (scan-ltrans-rtl-dump-dem-not): New. (scan-ltrans-rtl-dump-times): New.
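A sketch of how the new directive might appear in an LTO test; the dump name and pattern are placeholders, and this assumes the RTL variants take the same arguments as the existing scan-ltrans-tree-dump helpers:

/* { dg-final { scan-ltrans-rtl-dump "call_insn" "final" } } */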
-
Richard Biener authored
I found it helpful to be able to print a whole SLP instance from gdb. * tree-vect-slp.cc (debug): Add overload for slp_instance.
-
Richard Biener authored
The following fixes a leak of the discovered single-lane store SLP nodes from which we only use their children. This uncovers a latent reference counting issue in the interleaving build where we fail to increment their reference count. * tree-vect-slp.cc (vect_build_slp_store_interleaving): Fix reference counting. (vect_build_slp_instance): Release rhs_nodes.
-
Richard Biener authored
This splits out SLP store interleaving into a separate function. * tree-vect-slp.cc (vect_build_slp_store_interleaving): Split out from ... (vect_build_slp_instance): Here.
-
Jason Merrill authored
The pedwarns for each of these features should be silenced by the appropriate -Wno-c++??-extensions. The handle_pragma_diagnostic_impl change is necessary so that we handle -Wc++23-extensions early so it's available to interpret_float while lexing. gcc/c-family/ChangeLog: * c-pragma.cc (handle_pragma_diagnostic_impl): Also handle -Wc++23-extensions early. * c-lex.cc (interpret_float): Use -Wc++23-extensions for extended floating point literal pedwarn. gcc/cp/ChangeLog: * parser.cc (cp_parser_simple_type_specifier): Use -Wc++20-extensions for auto parameter pedwarn. * pt.cc (do_decl_instantiation, do_type_instantiation): Use -Wc++11-extensions for 'extern template'. gcc/testsuite/ChangeLog: * g++.dg/cpp0x/extern_template-7.C: New test. * g++.dg/cpp23/ext-floating19.C: New test. * g++.dg/cpp2a/abbrev-fn1.C: New test.
-
Tobias Burnus authored
This commit adds OpenMP 5.1+'s interop enumeration, type and routine declarations to the C/C++ header file and, new in OpenMP TR13, also to the Fortran module and omp_lib.h header file. A stub implementation is provided; it only becomes really useful once the libgomp GPU plugins gain foreign-runtime support and the 'interop' directive is supported. libgomp/ChangeLog: * fortran.c (omp_get_interop_str_, omp_get_interop_name_, omp_get_interop_type_desc_, omp_get_interop_rc_desc_): Add. * libgomp.map (GOMP_5.1.3): New; add interop routines. * omp.h.in: Add interop typedefs, enum and prototypes. (__GOMP_DEFAULT_NULL): Define. (omp_target_memcpy_async, omp_target_memcpy_rect_async): Use it for the optional depend argument. * omp_lib.f90.in: Add parameters and interfaces for interop. * omp_lib.h.in: Likewise; move F90 '&' to column 81 for -ffree-length-80. * target.c (omp_get_num_interop_properties, omp_get_interop_int, omp_get_interop_ptr, omp_get_interop_str, omp_get_interop_name, omp_get_interop_type_desc, omp_get_interop_rc_desc): Add. * config/gcn/target.c (omp_get_num_interop_properties, omp_get_interop_int, omp_get_interop_ptr, omp_get_interop_str, omp_get_interop_name, omp_get_interop_type_desc, omp_get_interop_rc_desc): Add. * config/nvptx/target.c (omp_get_num_interop_properties, omp_get_interop_int, omp_get_interop_ptr, omp_get_interop_str, omp_get_interop_name, omp_get_interop_type_desc, omp_get_interop_rc_desc): Add. * testsuite/libgomp.c-c++-common/interop-routines-1.c: New test. * testsuite/libgomp.c-c++-common/interop-routines-2.c: New test. * testsuite/libgomp.fortran/interop-routines-1.F90: New test. * testsuite/libgomp.fortran/interop-routines-2.F90: New test. * testsuite/libgomp.fortran/interop-routines-3.F: New test. * testsuite/libgomp.fortran/interop-routines-4.F: New test. * testsuite/libgomp.fortran/interop-routines-5.F: New test. * testsuite/libgomp.fortran/interop-routines-6.F: New test. * testsuite/libgomp.fortran/interop-routines-7.F90: New test.
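A minimal sketch of calling one of the new routines (not taken from the testsuite; it assumes the OpenMP 5.1 declarations in omp.h and does not attach a foreign runtime):

#include <omp.h>
#include <stdio.h>

int
main (void)
{
  /* Without the interop directive and foreign-runtime support there is
     nothing to inspect, so use the omp_interop_none object; the number
     of properties reported for it is implementation defined.  */
  omp_interop_t obj = omp_interop_none;
  printf ("interop properties: %d\n", omp_get_num_interop_properties (obj));
  return 0;
}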
-
Jason Merrill authored
The unescaped * broke the match. libstdc++-v3/ChangeLog: * testsuite/20_util/default_delete/void_neg.cc: Fix regexp quoting.
-
Jason Merrill authored
libstdc++-v3/ChangeLog: * include/std/coroutine (coroutine_handle): Use nullptr instead of 0 as initializer for _M_fr_ptr.
-
Jason Merrill authored
The return seems to have been lost in the r15-1858 RAII overhaul. libstdc++-v3/ChangeLog: * include/bits/stl_uninitialized.h (__uninitialized_move_copy): Add missing return.
-
Jason Merrill authored
The semicolons after each macro invocation here end up following the closing brace of a function, leading to -Wextra-semi pedwarns. libstdc++-v3/ChangeLog: * include/decimal/decimal.h (_DEFINE_DECIMAL_BINARY_OP_WITH_INT): Remove redundant semicolons.
-
Pan Li authored
Move the run test of pr116278 to dg/torture and leave the asm check under the risc-v part. PR target/116278 gcc/testsuite/ChangeLog: * gcc.target/riscv/pr116278-run-1.c: Take compile instead of run. * gcc.target/riscv/pr116278-run-2.c: Ditto. * gcc.dg/torture/pr116278-run-1.c: New test. * gcc.dg/torture/pr116278-run-2.c: New test. Signed-off-by:
Pan Li <pan2.li@intel.com>
-
Pan Li authored
The .SAT_ADD has 2 operands, and one of the operands may be an INTEGER_CST. For example, _1 = .SAT_ADD (_2, 9) comes from the sample code below. Form 3: #define DEF_VEC_SAT_U_ADD_IMM_FMT_3(T, IMM) \ T __attribute__((noinline)) \ vec_sat_u_add_imm##IMM##_##T##_fmt_3 (T *out, T *in, unsigned limit) \ { \ unsigned i; \ T ret; \ for (i = 0; i < limit; i++) \ { \ out[i] = __builtin_add_overflow (in[i], IMM, &ret) ? -1 : ret; \ } \ } DEF_VEC_SAT_U_ADD_IMM_FMT_3(uint64_t, 9) It will fail to vectorize as vectorizable_call will check that the operands are type-compatible, but the imm will be (const_int 9) with SImode, which is different from _2 (DImode). Aka: uint64_t _1; uint64_t _2; _1 = .SAT_ADD (_2, 9); This patch would like to reconcile the imm operand to the operand type mode of _2 by fold_convert to make vectorizable_call happy. The below test suites are passed for this patch: 1. The rv64gcv full regression tests. 2. The x86 bootstrap tests. 3. The x86 full regression tests. gcc/ChangeLog: * tree-vect-patterns.cc (vect_recog_sat_add_pattern): Add fold convert for const_int to the type of operand 0. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/autovec/vec_sat_arith.h: Add test helper macros. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-1.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-10.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-11.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-12.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-13.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-14.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-15.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-2.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-3.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-4.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-5.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-6.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-7.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-8.c: New test. * gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm_reconcile-9.c: New test. Signed-off-by:
Pan Li <pan2.li@intel.com>
-
Kito Cheng authored
We added patterns for vector rotate, but it seems we forgot to add mode_idx, which is used in AVL propagation (riscv-avlprop.cc). gcc/ChangeLog: * config/riscv/vector.md (mode_idx): Add vrol and vror. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/autovec/rotr.c: New.
-
Pan Li authored
This patch would like to support the form 1 of the scalar signed integer .SAT_ADD. Aka below example: Form 1: #define DEF_SAT_S_ADD_FMT_1(T, UT, MIN, MAX) \ T __attribute__((noinline)) \ sat_s_add_##T##_fmt_1 (T x, T y) \ { \ T sum = (UT)x + (UT)y; \ return (x ^ y) < 0 \ ? sum \ : (sum ^ x) >= 0 \ ? sum \ : x < 0 ? MIN : MAX; \ } DEF_SAT_S_ADD_FMT_1(int64_t, uint64_t, INT64_MIN, INT64_MAX) We can tell the difference before and after this patch if backend implemented the ssadd<m>3 pattern similar as below. Before this patch: 4 │ __attribute__((noinline)) 5 │ int64_t sat_s_add_int64_t_fmt_1 (int64_t x, int64_t y) 6 │ { 7 │ int64_t sum; 8 │ long unsigned int x.0_1; 9 │ long unsigned int y.1_2; 10 │ long unsigned int _3; 11 │ long int _4; 12 │ long int _5; 13 │ int64_t _6; 14 │ _Bool _11; 15 │ long int _12; 16 │ long int _13; 17 │ long int _14; 18 │ long int _16; 19 │ long int _17; 20 │ 21 │ ;; basic block 2, loop depth 0 22 │ ;; pred: ENTRY 23 │ x.0_1 = (long unsigned int) x_7(D); 24 │ y.1_2 = (long unsigned int) y_8(D); 25 │ _3 = x.0_1 + y.1_2; 26 │ sum_9 = (int64_t) _3; 27 │ _4 = x_7(D) ^ y_8(D); 28 │ _5 = x_7(D) ^ sum_9; 29 │ _17 = ~_4; 30 │ _16 = _5 & _17; 31 │ if (_16 < 0) 32 │ goto <bb 3>; [41.00%] 33 │ else 34 │ goto <bb 4>; [59.00%] 35 │ ;; succ: 3 36 │ ;; 4 37 │ 38 │ ;; basic block 3, loop depth 0 39 │ ;; pred: 2 40 │ _11 = x_7(D) < 0; 41 │ _12 = (long int) _11; 42 │ _13 = -_12; 43 │ _14 = _13 ^ 9223372036854775807; 44 │ ;; succ: 4 45 │ 46 │ ;; basic block 4, loop depth 0 47 │ ;; pred: 2 48 │ ;; 3 49 │ # _6 = PHI <sum_9(2), _14(3)> 50 │ return _6; 51 │ ;; succ: EXIT 52 │ 53 │ } After this patch: 4 │ __attribute__((noinline)) 5 │ int64_t sat_s_add_int64_t_fmt_1 (int64_t x, int64_t y) 6 │ { 7 │ int64_t _4; 8 │ 9 │ ;; basic block 2, loop depth 0 10 │ ;; pred: ENTRY 11 │ _4 = .SAT_ADD (x_5(D), y_6(D)); [tail call] 12 │ return _4; 13 │ ;; succ: EXIT 14 │ 15 │ } The below test suites are passed for this patch. * The rv64gcv fully regression test. * The x86 bootstrap test. * The x86 fully regression test. gcc/ChangeLog: * match.pd: Add the matching for signed .SAT_ADD. * tree-ssa-math-opts.cc (gimple_signed_integer_sat_add): Add new matching func decl. (match_unsigned_saturation_add): Try signed .SAT_ADD and rename to ... (match_saturation_add): ... here. (math_opts_dom_walker::after_dom_children): Update the above renamed func from caller. Signed-off-by:
Pan Li <pan2.li@intel.com>
-
Joern Rennecke authored
gcc/testsuite: PR testsuite/116271 * gcc.dg/vect/tsvc/vect-tsvc-s176.c [TRUNCATE_TEST]: Make sure that m stays the same as the loop bound of the middle loop. * gcc.dg/vect/tsvc/tsvc.h (get_expected_result) <s176> [TRUNCATE_TEST]: Adjust expected value.
-
Pan Li authored
This patch would like to add test cases for the unsigned scalar .SAT_SUB IMM form 4. Aka: Form 4: #define DEF_SAT_U_SUB_IMM_FMT_4(T, IMM) \ T __attribute__((noinline)) \ sat_u_sub_imm##IMM##_##T##_fmt_4 (T x) \ { \ return x > (T)IMM ? x - (T)IMM : 0; \ } DEF_SAT_U_SUB_IMM_FMT_4(uint64_t, 23) The below test is passed for this patch. * The rv64gcv regression test. gcc/testsuite/ChangeLog: * gcc.target/riscv/sat_arith.h: Add test helper macros. * gcc.target/riscv/sat_u_sub_imm-13.c: New test. * gcc.target/riscv/sat_u_sub_imm-13_1.c: New test. * gcc.target/riscv/sat_u_sub_imm-13_2.c: New test. * gcc.target/riscv/sat_u_sub_imm-14.c: New test. * gcc.target/riscv/sat_u_sub_imm-14_1.c: New test. * gcc.target/riscv/sat_u_sub_imm-14_2.c: New test. * gcc.target/riscv/sat_u_sub_imm-15.c: New test. * gcc.target/riscv/sat_u_sub_imm-15_1.c: New test. * gcc.target/riscv/sat_u_sub_imm-15_2.c: New test. * gcc.target/riscv/sat_u_sub_imm-16.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-13.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-14.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-15.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-16.c: New test. Signed-off-by:
Pan Li <pan2.li@intel.com>
-
Pan Li authored
This patch would like to add test cases for the unsigned scalar .SAT_SUB IMM form 3. Aka: Form 3: #define DEF_SAT_U_SUB_IMM_FMT_3(T, IMM) \ T __attribute__((noinline)) \ sat_u_sub_imm##IMM##_##T##_fmt_3 (T y) \ { \ return (T)IMM > y ? (T)IMM - y : 0; \ } DEF_SAT_U_SUB_IMM_FMT_3(uint64_t, 23) The below test is passed for this patch. * The rv64gcv regression test. gcc/testsuite/ChangeLog: * gcc.target/riscv/sat_arith.h: Add test helper macros. * gcc.target/riscv/sat_u_sub_imm-10.c: New test. * gcc.target/riscv/sat_u_sub_imm-10_1.c: New test. * gcc.target/riscv/sat_u_sub_imm-10_2.c: New test. * gcc.target/riscv/sat_u_sub_imm-11.c: New test. * gcc.target/riscv/sat_u_sub_imm-11_1.c: New test. * gcc.target/riscv/sat_u_sub_imm-11_2.c: New test. * gcc.target/riscv/sat_u_sub_imm-12.c: New test. * gcc.target/riscv/sat_u_sub_imm-9.c: New test. * gcc.target/riscv/sat_u_sub_imm-9_1.c: New test. * gcc.target/riscv/sat_u_sub_imm-9_2.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-10.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-11.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-12.c: New test. * gcc.target/riscv/sat_u_sub_imm-run-9.c: New test. Signed-off-by:
Pan Li <pan2.li@intel.com>
-
GCC Administrator authored
-
- Aug 27, 2024
-
-
Andi Kleen authored
SPARC does not support vectorizing conditions, which this test relies on. Use vect_condition as effective target. Committed as obvious. PR testsuite/116500 gcc/testsuite/ChangeLog: * gcc.dg/vect/vect-switch-ifcvt-1.c: Use vect_condition to check if vectorizing conditions is supported for the target.
-
Joseph Myers authored
* zh_CN.po: Update.
-
Arsen Arsenović authored
Previously, we were building and inserting case_labels manually, which led to them not being added into the currently running switch via c_add_case_label. This led to false diagnostics that the user could not act on. PR c++/109867 gcc/cp/ChangeLog: * coroutines.cc (expand_one_await_expression): Replace uses of build_case_label with finish_case_label. (build_actor_fn): Ditto. (create_anon_label_with_ctx): Remove now-unused function. gcc/testsuite/ChangeLog: * g++.dg/coroutines/torture/pr109867.C: New test. Reviewed-by:
Iain Sandoe <iain@sandoe.co.uk>
-
Andreas Schwab authored
When LRA pulls an address operand out of a MEM it canonicalizes a containing MULT into ASHIFT. Adjust the address decomposer to recognize this form. PR target/116413 * config/m68k/m68k.cc (m68k_decompose_index): Accept ASHIFT like MULT. (m68k_rtx_costs) [PLUS]: Likewise. (m68k_legitimize_address): Likewise.
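A sketch of the kind of address involved (illustrative only, not taken from the PR):

/* Inside a MEM the scaled index is written as a MULT, e.g.
     (mem (plus (reg base) (mult (reg idx) (const_int 4))))
   but once LRA pulls the address out of the MEM it is canonicalized to
     (plus (reg base) (ashift (reg idx) (const_int 2)))
   which m68k_decompose_index previously did not recognize.  */
int
load_scaled (const int *base, long idx)
{
  return base[idx];
}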
-
Simon Martin authored
We mention 'X::__ct' instead of 'X::X' in the "names the constructor, not the type" error for this invalid code: === cut here === struct X {}; void g () { X::X x; } === cut here === The problem is that we use %<%T::%D%> to build the error message, while %qE does exactly what we need since we have DECL_CONSTRUCTOR_P. This is what this patch does. It also skips until the end of the statement and returns error_mark_node for this and the preceding if block, to avoid emitting extra (useless) errors. PR c++/105483 gcc/cp/ChangeLog: * parser.cc (cp_parser_expression_statement): Use %qE instead of incorrect %<%T::%D%>. Skip to end of statement and return error_mark_node in case of error. gcc/testsuite/ChangeLog: * g++.dg/parse/error36.C: Adjust test expectation. * g++.dg/tc1/dr147.C: Likewise. * g++.old-deja/g++.other/typename1.C: Likewise. * g++.dg/diagnostic/pr105483.C: New test.
-
Patrick O'Neill authored
These subroutines will be used in expand_const_vector in a future patch. Relocate so expand_const_vector can use them. gcc/ChangeLog: * config/riscv/riscv-v.cc (expand_vector_init_insert_elems): Relocate. (expand_vector_init_trailing_same_elem): Ditto. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Patrick O'Neill authored
Currently we assert when encountering a non-duplicate boolean vector. This patch allows non-duplicate vectors to fall through to the gcc_unreachable and assert there. This will be useful when adding a catch-all pattern to emit costs and handle arbitrary vectors. gcc/ChangeLog: * config/riscv/riscv-v.cc (expand_const_vector): Allow non-duplicate to fall through other patterns before asserting. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Patrick O'Neill authored
The comment previously here stated that the Wc0/Wc1 cases are handled by the vi constraint but that is not true for the 0.0 Wc0 case. gcc/ChangeLog: * config/riscv/riscv-v.h (valid_vec_immediate_p): Add new helper. * config/riscv/riscv-v.cc (valid_vec_immediate_p): Ditto. (expand_const_vector): Use new helper. * config/riscv/riscv.cc (riscv_const_insns): Handle 0.0 floating-point case. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Patrick O'Neill authored
These cases are handled in the expander (riscv-v.cc:expand_const_vector). We need the vector builder to detect these cases, so extract it into a new riscv-v.h header file. gcc/ChangeLog: * config/riscv/riscv-v.cc (class rvv_builder): Move to riscv-v.h. * config/riscv/riscv.cc (riscv_const_insns): Emit placeholder costs for bool/stepped const vectors. * config/riscv/riscv-v.h: New file. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Patrick O'Neill authored
This manifests in RTL that is optimized away, which causes runtime failures in the testsuite. Update all patterns to use a temp result register if required. gcc/ChangeLog: * config/riscv/riscv-v.cc (expand_const_vector): Use tmp register if needed. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Patrick O'Neill authored
The corresponding expander (riscv-v.cc:expand_const_vector) matches const_vec_duplicate_p before const_vec_series_p. Reorder to match this behavior when calculating costs. gcc/ChangeLog: * config/riscv/riscv.cc (riscv_const_insns): Relocate. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Patrick O'Neill authored
Prior to this patch the expander would emit vectors like: { 0, 0, 5, 5, 10, 10, ...} as: { 0, 0, 2, 2, 4, 4, ...} This patch sets the step size to the requested value. gcc/ChangeLog: * config/riscv/riscv-v.cc (expand_const_vector): Fix STEP size in expander. Signed-off-by:
Patrick O'Neill <patrick@rivosinc.com>
-
Christophe Lyon authored
With MVE, vmov.f64 is always supported (no need for +fp.dp extension). This patch updates two patterns: - in movdi_vfp, we incorrectly checked TARGET_VFP_SINGLE || TARGET_HAVE_MVE instead of TARGET_VFP_SINGLE && !TARGET_HAVE_MVE, and didn't take into account these two possibilities when computing the length attribute. - in thumb2_movdf_vfp, we checked only TARGET_VFP_SINGLE. No need to update movdf_vfp, since it is enabled only for TARGET_ARM (which is not the case when MVE is enabled). The patch also updates gcc.target/arm/armv8_1m-fp64-move-1.c, to accept only vmov.f64 instead of vmov.f32. Tested on arm-none-eabi with: qemu/-mthumb/-mtune=cortex-m55/-mfloat-abi=hard/-mfpu=auto qemu/-mthumb/-mtune=cortex-m55/-mfloat-abi=hard/-mfpu=auto/-march=armv8.1-m.main+mve qemu/-mthumb/-mtune=cortex-m55/-mfloat-abi=hard/-mfpu=auto/-march=armv8.1-m.main+mve.fp qemu/-mthumb/-mtune=cortex-m55/-mfloat-abi=hard/-mfpu=auto/-march=armv8.1-m.main+mve.fp+fp.dp 2024-08-21 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/vfp.md (movdi_vfp, thumb2_movdf_vfp): Handle MVE case. gcc/testsuite/ * gcc.target/arm/armv8_1m-fp64-move-1.c: Update expected code.
-
H.J. Lu authored
* gcc.target/i386/pr116174.c: Add the missing */. Signed-off-by:
H.J. Lu <hjl.tools@gmail.com>
-
H.J. Lu authored
As PR target/116174 shows, we may need to verify labels and the directive order. Extend check-function-bodies to support matched output lines so that labels and directives can be checked. gcc/ * doc/sourcebuild.texi (check-function-bodies): Add an optional argument for matched output lines. gcc/testsuite/ * gcc.target/i386/pr116174.c: Use check-function-bodies. * lib/scanasm.exp (parse_function_bodies): Append the line if $up_config(matched) matches the line. (check-function-bodies): Add an argument for matched. Set up_config(matched) to $matched. Append the expected line without $config(line_prefix) to function_regexp if it starts with ".L". Signed-off-by:
H.J. Lu <hjl.tools@gmail.com>
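For reference, a minimal sketch of existing check-function-bodies usage; the expected assembly lines are placeholders, and the new optional argument for matched output lines added by this patch is not shown:

/* { dg-final { check-function-bodies "**" "" } } */

/*
** foo:
** ...
** ret
*/
int
foo (void)
{
  return 0;
}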
-