- Oct 19, 2024
-
-
GCC Administrator authored
-
- Oct 18, 2024
-
-
Alejandro Colomar authored
There were two identical definitions, and neither of them is available where it is needed for implementing a number-of-elements-of operator. Merge them, and provide the single definition in gcc/tree.{h,cc}, where it's available for that operator, which will be added in a following commit. gcc/ChangeLog: * tree.h (array_type_nelts_top) * tree.cc (array_type_nelts_top): Define function (moved from gcc/cp/). gcc/cp/ChangeLog: * cp-tree.h (array_type_nelts_top) * tree.cc (array_type_nelts_top): Remove function (move to gcc/). gcc/rust/ChangeLog: * backend/rust-tree.h (array_type_nelts_top) * backend/rust-tree.cc (array_type_nelts_top): Remove function. Signed-off-by:
Alejandro Colomar <alx@kernel.org>
-
Alejandro Colomar authored
The old name was misleading. While at it, also rename some temporary variables that are used with this function, for consistency. Link: <https://inbox.sourceware.org/gcc-patches/9fffd80-dca-2c7e-14b-6c9b509a7215@redhat.com/T/#m2f661c67c8f7b2c405c8c7fc3152dd85dc729120 > gcc/ChangeLog: * tree.cc (array_type_nelts, array_type_nelts_minus_one) * tree.h (array_type_nelts, array_type_nelts_minus_one) * expr.cc (count_type_elements) * config/aarch64/aarch64.cc (pure_scalable_type_info::analyze_array) * config/i386/i386.cc (ix86_canonical_va_list_type): Rename array_type_nelts => array_type_nelts_minus_one The old name was misleading. gcc/c/ChangeLog: * c-decl.cc (one_element_array_type_p, get_parm_array_spec) * c-fold.cc (c_fold_array_ref): Rename array_type_nelts => array_type_nelts_minus_one gcc/cp/ChangeLog: * decl.cc (reshape_init_array) * init.cc (build_zero_init_1) (build_value_init_noctor) (build_vec_init) (build_delete) * lambda.cc (add_capture) * tree.cc (array_type_nelts_top): Rename array_type_nelts => array_type_nelts_minus_one gcc/fortran/ChangeLog: * trans-array.cc (structure_alloc_comps) * trans-openmp.cc (gfc_walk_alloc_comps) (gfc_omp_clause_linear_ctor): Rename array_type_nelts => array_type_nelts_minus_one gcc/rust/ChangeLog: * backend/rust-tree.cc (array_type_nelts_top): Rename array_type_nelts => array_type_nelts_minus_one Suggested-by:
Richard Biener <richard.guenther@gmail.com> Signed-off-by:
Alejandro Colomar <alx@kernel.org>
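To make the naming point concrete, here is a small stand-alone C++ sketch (not GCC-internal code; the analogy to the tree-level helpers is mine): the helper formerly called array_type_nelts returns one less than the element count, which is what the new name array_type_nelts_minus_one says, while array_type_nelts_top corresponds to the count itself.
```
#include <type_traits>

// For int[5] the element count is 5.  The old array_type_nelts returned the
// upper bound of the index domain, i.e. 4, hence the rename to
// array_type_nelts_minus_one; array_type_nelts_top corresponds to the count.
static_assert (std::extent_v<int[5]> == 5, "array_type_nelts_top analogue");
static_assert (std::extent_v<int[5]> - 1 == 4, "array_type_nelts_minus_one analogue");

int main () { return 0; }
```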
-
Ian Lance Taylor authored
Fixes https://github.com/ianlancetaylor/libbacktrace/issues/137. * dwarf.c (resolve_unit_addrs_overlap_walk): New static function. (resolve_unit_addrs_overlap): New static function. (build_dwarf_data): Call resolve_unit_addrs_overlap.
-
John David Anglin authored
2024-10-18 John David Anglin <danglin@gcc.gnu.org> gcc/ChangeLog: * config/pa/pa.opt.urls: Fix for -mlra.
-
Thomas Koenig authored
gcc/fortran/ChangeLog: * error.cc (notify_std_msg): Handle GFC_STD_UNSIGNED. gcc/testsuite/ChangeLog: * gfortran.dg/unsigned_37.f90: New test.
-
John David Anglin authored
LRA is not enabled by default since there are some new test failures remaining to be resolved. 2024-10-18 John David Anglin <danglin@gcc.gnu.org> gcc/ChangeLog: PR target/113933 * config/pa/pa.cc (pa_use_lra_p): Declare. (TARGET_LRA_P): Change define to pa_use_lra_p. (pa_use_lra_p): New function. (legitimize_pic_address): Also check lra_in_progress. (pa_emit_move_sequence): Likewise. (pa_legitimate_constant_p): Likewise. (pa_legitimate_address_p): Likewise. (pa_secondary_reload): For floating-point loads and stores, return NO_REGS for REG and SUBREG operands. Return GENERAL_REGS for some shift register spills. * config/pa/pa.opt: Add mlra option. * config/pa/predicates.md (integer_store_memory_operand): Also check lra_in_progress. (floating_point_store_memory_operand): Likewise. (reg_before_reload_operand): Likewise.
-
Craig Blackmore authored
If riscv_vector::expand_block_move is generating a straight-line memcpy using a predicated store, it tries to use a smaller LMUL to reduce register pressure if it still allows an entire transfer. This happens in the inner loop of riscv_vector::expand_block_move; however, the vmode chosen by this loop gets overwritten later in the function, so I have added the missing break from the outer loop. I have also addressed a couple of issues with the conditions of the if statement within the inner loop. The first condition did not make sense to me: ``` TARGET_MIN_VLEN * lmul <= nunits * BITS_PER_UNIT ``` I think this was supposed to be checking that the length fits within the given LMUL, so I have changed it to do that. The second condition: ``` /* Avoid loosing the option of using vsetivli . */ && (nunits <= 31 * lmul || nunits > 31 * 8) ``` seems to imply that lmul affects the range of AVL immediate that vsetivli can take, but I don't think that is correct. Anyway, I don't think this condition is necessary, because if we find a suitable mode we should stick with it, regardless of whether it allowed vsetivli, rather than continuing to try a larger lmul (which would increase register pressure) or a smaller potential_ew (which would increase AVL). I have removed this condition. gcc/ChangeLog: * config/riscv/riscv-string.cc (expand_block_move): Fix condition for using smaller LMUL. Break outer loop if a suitable vmode has been found. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/vsetvl/pr112929-1.c: Expect smaller lmul. * gcc.target/riscv/rvv/vsetvl/pr112988-1.c: Likewise. * gcc.target/riscv/rvv/base/cpymem-3.c: New test.
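A control-flow sketch of the fix described above (illustrative only, with hypothetical helpers; the real logic lives in riscv_vector::expand_block_move):
```
#include <cstdio>

// Hypothetical stand-in for "the length fits within this element width/LMUL".
static bool length_fits (unsigned ew, unsigned lmul) { return ew * lmul >= 16; }

int main ()
{
  unsigned chosen_ew = 0, chosen_lmul = 0;
  bool found = false;
  // Outer loop over potential element widths, inner loop over LMULs: once a
  // suitable combination is found we must leave *both* loops, otherwise a
  // later outer iteration overwrites the choice (the previously missing break).
  for (unsigned ew = 8; ew != 0 && !found; ew >>= 1)
    for (unsigned lmul = 1; lmul <= 8; lmul <<= 1)
      if (length_fits (ew, lmul))
        {
          chosen_ew = ew;
          chosen_lmul = lmul;
          found = true;
          break;   // inner break; the outer loop condition tests 'found'
        }
  std::printf ("ew=%u lmul=%u\n", chosen_ew, chosen_lmul);
  return 0;
}
```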
-
Craig Blackmore authored
gcc/ChangeLog: * config/riscv/riscv-string.cc (expand_block_move): Replace `end` with `length_rtx` in gen_rtx_NE.
-
Craig Blackmore authored
gcc/ChangeLog: * config/riscv/riscv-string.cc (expand_block_move): Fix indentation.
-
Uros Bizjak authored
Fix the order of operands in andn<MMXMODEI:mode>3 expander to comply with the specification, where bitwise-complement applies to operand 2. PR target/117192 gcc/ChangeLog: * config/i386/mmx.md (andn<MMXMODEI:mode>3): Swap operand indexes 1 and 2 to comply with andn specification. gcc/testsuite/ChangeLog: * gcc.target/i386/pr117192.c: New test.
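For reference, the operand convention being restored puts the complement on the second input; in GNU C vector terms (a hedged illustration, not the new pr117192.c testcase):
```
typedef char v8qi __attribute__ ((vector_size (8)));

/* andn<mode>3 computes op0 = op1 & ~op2, i.e. the bitwise complement applies
   to operand 2; the expander previously had operands 1 and 2 swapped.  */
v8qi
andn (v8qi a, v8qi b)
{
  return a & ~b;
}
```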
-
Jonathan Wakely authored
Use std::__assign_one instead of ranges::__assign_one. Adjust the uses, because std::__assign_one has the arguments in the opposite order (the same order as an assignment expression). libstdc++-v3/ChangeLog: * include/bits/ranges_algobase.h (ranges::__assign_one): Remove. (__copy_or_move, __copy_or_move_backward): Use std::__assign_one instead of ranges::__assign_one. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
-
Jonathan Wakely authored
We implement std::copy, std::fill etc. as a series of calls to other overloads which incrementally peel off layers of iterator wrappers. This adds a high abstraction penalty for -O0 and potentially even -O1. Add the always_inline attribute to several functions that are just a single return statement (and maybe a static_assert, or some concept-checking assertions which are disabled by default). libstdc++-v3/ChangeLog: * include/bits/stl_algobase.h (__copy_move_a1, __copy_move_a) (__copy_move_backward_a1, __copy_move_backward_a, move_backward) (__fill_a1, __fill_a, fill, __fill_n_a, fill_n, __equal_aux): Add always_inline attribute to one-line forwarding functions. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
-
Jonathan Wakely authored
I missed this one out in r14-9478-gdf483ebd24689a but I don't think that was intentional. I see no reason std::find shouldn't be [[nodiscard]]. libstdc++-v3/ChangeLog: * include/bits/stl_algo.h (find): Add nodiscard. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
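With the attribute in place, silently discarding the result is diagnosed, e.g.:
```
#include <algorithm>
#include <vector>

void
f (const std::vector<int>& v)
{
  // Result not used: with [[nodiscard]] on std::find this now warns
  // (-Wunused-result), consistent with the algorithms changed in r14-9478.
  std::find (v.begin (), v.end (), 42);
}
```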
-
Jonathan Wakely authored
This removes all the __copy_move class template specializations that decide how to optimize std::copy and std::copy_n. We can inline those optimizations into the algorithms, using if-constexpr (and macros for C++98 compatibility) and remove the code dispatching to the various class template specializations. Doing this means we implement the optimization directly for std::copy_n instead of deferring to std::copy. That avoids the unwanted consequence of advancing the iterator in copy_n only to take the difference later to get back to the length that we already had in copy_n originally (as described in PR 115444). With the new flattened implementations, we can also lower contiguous iterators to pointers in std::copy/std::copy_n/std::copy_backward, so that they benefit from the same memmove optimizations as pointers. There's a subtlety though: contiguous iterators can potentially throw exceptions to exit the algorithm early. So we can only transform the loop to memmove if dereferencing the iterator is noexcept. We don't check that incrementing the iterator is noexcept because we advance the contiguous iterators before using memmove, so that if incrementing would throw, that happens first. I am writing a proposal (P3349R0) which would make this unnecessary, so I hope we can drop the nothrow requirements later. This change also solves PR 114817 by checking is_trivially_assignable before optimizing copy/copy_n etc. to memmove. It's not enough to check that the types are trivially copyable (a precondition for using memmove at all); we also need to check that the specific assignment that would be performed by the algorithm is also trivial. Replacing a non-trivial assignment with memmove would be observable, so not allowed. libstdc++-v3/ChangeLog: PR libstdc++/115444 PR libstdc++/114817 * include/bits/stl_algo.h (__copy_n): Remove generic overload and overload for random access iterators. (copy_n): Inline generic version of __copy_n here. Do not defer to std::copy for random access iterators. * include/bits/stl_algobase.h (__copy_move): Remove. (__nothrow_contiguous_iterator, __memcpyable_iterators): New concepts. (__assign_one, _GLIBCXX_TO_ADDR, _GLIBCXX_ADVANCE): New helpers. (__copy_move_a2): Inline __copy_move logic and conditional memmove optimization into the most generic overload. (__copy_n_a): Likewise. (__copy_move_backward): Remove. (__copy_move_backward_a2): Inline __copy_move_backward logic and memmove optimization into the most generic overload. * testsuite/20_util/specialized_algorithms/uninitialized_copy/114817.cc: New test. * testsuite/20_util/specialized_algorithms/uninitialized_copy_n/114817.cc: New test. * testsuite/25_algorithms/copy/114817.cc: New test. * testsuite/25_algorithms/copy/115444.cc: New test. * testsuite/25_algorithms/copy_n/114817.cc: New test. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
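A hedged illustration of the PR 114817 point (hypothetical type, not one of the new tests): a type can be trivially copyable while the particular assignment the algorithm performs is non-trivial, so that assignment must not be replaced by memmove.
```
#include <algorithm>
#include <cassert>
#include <type_traits>

struct Counted
{
  int value = 0;
  static inline int assignments = 0;
  // Assignment from int is user-provided and observable ...
  Counted& operator= (int i) { value = i; ++assignments; return *this; }
  // ... but copy assignment stays implicitly defaulted, so the type itself
  // is still trivially copyable.
};

static_assert (std::is_trivially_copyable_v<Counted>);
static_assert (!std::is_trivially_assignable_v<Counted&, const int&>);

int main ()
{
  int src[3] = { 1, 2, 3 };
  Counted dst[3];
  // std::copy must call operator=(int) three times; turning this loop into
  // memmove would skip the observable side effect.
  std::copy (src, src + 3, dst);
  assert (Counted::assignments == 3);
}
```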
-
Jonathan Wakely authored
The __gnu_cxx::__normal_iterator type we use for std::vector::iterator is not specified by the standard; it's an implementation detail. This means it's not constrained by the rule that forbids strengthening constexpr. We can make it meet the constexpr iterator requirements for older standards, not only when it's required to be for C++20. The non-const member functions can't be constexpr in C++11, so use _GLIBCXX14_CONSTEXPR for those. For all constructors, const members and non-member operator overloads, use _GLIBCXX_CONSTEXPR or just constexpr. We can also liberally add [[nodiscard]] and [[gnu::always_inline]] attributes to those functions. Also change some internal helpers for std::move_iterator which can be unconditionally constexpr and marked nodiscard. libstdc++-v3/ChangeLog: * include/bits/stl_iterator.h (__normal_iterator): Make all members and overloaded operators constexpr before C++20, and add always_inline attribute. (__to_address): Add nodiscard and always_inline attributes. (__make_move_if_noexcept_iterator): Add nodiscard and make unconditionally constexpr. (__niter_base(__normal_iterator), __niter_base(Iter)): Add nodiscard and always_inline attributes. (__niter_base(reverse_iterator), __niter_base(move_iterator)) (__miter_base): Add inline. (__niter_wrap(From, To)): Add nodiscard attribute. (__niter_wrap(const Iter&, Iter)): Add nodiscard and always_inline attributes. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
-
Jonathan Wakely authored
This refactors the std::uninitialized_copy, std::uninitialized_fill and std::uninitialized_fill_n algorithms to directly perform memcpy/memset optimizations instead of dispatching to std::copy/std::fill/std::fill_n. The reasons for this are: - Use 'if constexpr' to simplify and optimize compilation throughput, so dispatching to specialized class templates is only needed for C++98 mode. - Use memcpy instead of memmove, because the conditions on non-overlapping ranges are stronger for std::uninitialized_copy than for std::copy. Using memcpy might be a minor optimization. - No special case for creating a range of one element, which std::copy needs to deal with (see PR libstdc++/108846). The uninitialized algos create new objects, which reuses storage and is allowed to clobber tail padding. - Relax the conditions for using memcpy/memset, because the C++20 rules on implicit-lifetime types mean that we can rely on memcpy to begin lifetimes of trivially copyable types. We don't need to require trivially default constructible, so don't need to limit the optimization to trivial types. See PR 68350 for more details. - Remove the dependency on std::copy and std::fill. This should mean that stl_uninitialized.h no longer needs to include all of stl_algobase.h. This isn't quite true yet, because we still use std::fill in __uninitialized_default and still use std::fill_n in __uninitialized_default_n. That will be fixed later. Several tests need changes to the diagnostics matched by dg-error because we no longer use the __constructible() function that had a static assert in it. Now we just get straightforward errors for attempting to use a deleted constructor. Two tests needed more significant changes to the actual expected results of executing the tests, because they were checking for old behaviour which was incorrect according to the standard. 20_util/specialized_algorithms/uninitialized_copy/64476.cc was expecting std::copy to be used for a call to std::uninitialized_copy involving two trivially copyable types. That was incorrect behaviour, because a non-trivial constructor should have been used, but using std::copy used trivial default initialization followed by assignment. 20_util/specialized_algorithms/uninitialized_fill_n/sizes.cc was testing the behaviour with a non-integral Size passed to uninitialized_fill_n, but I wrote the test looking at the requirements of uninitialized_copy_n which are not the same as uninitialized_fill_n. The former uses --n and tests n > 0, but the latter just tests n-- (which will never be false for a floating-point value with a fractional part). libstdc++-v3/ChangeLog: PR libstdc++/68350 PR libstdc++/93059 * include/bits/stl_uninitialized.h (__check_constructible) (_GLIBCXX_USE_ASSIGN_FOR_INIT): Remove. [C++98] (__unwrappable_niter): New trait. (__uninitialized_copy<true>): Replace use of std::copy. (uninitialized_copy): Fix Doxygen comments. Open-code memcpy optimization for C++11 and later. (__uninitialized_fill<true>): Replace use of std::fill. (uninitialized_fill): Fix Doxygen comments. Open-code memset optimization for C++11 and later. (__uninitialized_fill_n<true>): Replace use of std::fill_n. (uninitialized_fill_n): Fix Doxygen comments. Open-code memset optimization for C++11 and later. * testsuite/20_util/specialized_algorithms/uninitialized_copy/64476.cc: Adjust expected behaviour to match what the standard specifies. * testsuite/20_util/specialized_algorithms/uninitialized_fill_n/sizes.cc: Likewise.
* testsuite/20_util/specialized_algorithms/uninitialized_copy/1.cc: Adjust dg-error directives. * testsuite/20_util/specialized_algorithms/uninitialized_copy/89164.cc: Likewise. * testsuite/20_util/specialized_algorithms/uninitialized_copy_n/89164.cc: Likewise. * testsuite/20_util/specialized_algorithms/uninitialized_fill/89164.cc: Likewise. * testsuite/20_util/specialized_algorithms/uninitialized_fill_n/89164.cc: Likewise. * testsuite/23_containers/vector/cons/89164.cc: Likewise. * testsuite/23_containers/vector/cons/89164_c++17.cc: Likewise. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
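To illustrate the behavioural point behind the 64476.cc adjustment, here is a hedged stand-alone sketch (not the testsuite file itself): std::uninitialized_copy must create objects in raw storage with a constructor, which is why dispatching it to assignment-based std::copy was wrong.
```
#include <memory>
#include <new>

struct Widget
{
  int id;
  explicit Widget (int i) : id (i) {}   // no default constructor
};

int main ()
{
  Widget src[2] = { Widget (1), Widget (2) };
  alignas (Widget) unsigned char raw[sizeof (Widget) * 2];
  Widget* dst = reinterpret_cast<Widget*> (raw);
  // Copy-constructs Widget objects into the uninitialized buffer; forwarding
  // to std::copy would instead assign to objects whose lifetime never began.
  std::uninitialized_copy (src, src + 2, dst);
  std::destroy (dst, dst + 2);
  return 0;
}
```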
-
Jonathan Wakely authored
Move the functions for unwrapping and rewrapping __normal_iterator objects to the same file as the definition of __normal_iterator itself. This will allow a later commit to make use of std::__niter_base in other headers without having to include all of <bits/stl_algobase.h>. libstdc++-v3/ChangeLog: * include/bits/stl_algobase.h (__niter_base, __niter_wrap): Move to ... * include/bits/stl_iterator.h: ... here. (__niter_base, __miter_base): Move all overloads to the end of the header. * testsuite/24_iterators/normal_iterator/wrapping.cc: New test. Reviewed-by:
Patrick Palka <ppalka@redhat.com>
-
Jennifer Schmitz authored
As suggested in https://gcc.gnu.org/pipermail/gcc-patches/2024-September/663275.html , this patch adds the method gimple_folder::fold_active_lanes_to (tree X). This method folds active lanes to X and sets inactive lanes according to the predication, returning a new gimple statement. That makes folding of SVE intrinsics easier and reduces code duplication in the svxxx_impl::fold implementations. Using this new method, svdiv_impl::fold and svmul_impl::fold were refactored. Additionally, the method was used for two optimizations: 1) Fold svdiv to the dividend, if the divisor is all ones, and 2) for svmul, if one of the operands is all ones, fold to the other operand. Both optimizations were previously applied to _x and _m predication on the RTL level, but not for _z, where svdiv/svmul were still being used. For both optimizations, codegen was improved by this patch, for example by skipping sel instructions with all-same operands and replacing sel instructions by mov instructions. The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression. OK for mainline? Signed-off-by:
Jennifer Schmitz <jschmitz@nvidia.com> gcc/ * config/aarch64/aarch64-sve-builtins-base.cc (svdiv_impl::fold): Refactor using fold_active_lanes_to and fold to the dividend, if the divisor is all ones. (svmul_impl::fold): Refactor using fold_active_lanes_to and fold to the other operand, if one of the operands is all ones. * config/aarch64/aarch64-sve-builtins.h: Declare gimple_folder::fold_active_lanes_to (tree). * config/aarch64/aarch64-sve-builtins.cc (gimple_folder::fold_active_lanes_to): Add new method to fold active lanes to the given argument, setting inactive lanes according to the predication. gcc/testsuite/ * gcc.target/aarch64/sve/acle/asm/div_s32.c: Adjust expected outcome. * gcc.target/aarch64/sve/acle/asm/div_s64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/div_u32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/div_u64.c: Likewise. * gcc.target/aarch64/sve/fold_div_zero.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_s16.c: New test. * gcc.target/aarch64/sve/acle/asm/mul_s32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_s64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_s8.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_u16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_u32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_u64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/mul_u8.c: Likewise. * gcc.target/aarch64/sve/mul_const_run.c: Likewise.
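As a concrete, hedged example of the new _z-predication folds, written with standard ACLE intrinsics (the precise codegen is whatever the adjusted tests now expect):
```
#include <arm_sve.h>

/* Division by an all-ones vector: active lanes fold to the dividend and
   inactive lanes are zeroed per the _z predication, so no division is
   needed.  */
svint32_t
div_by_one (svbool_t pg, svint32_t x)
{
  return svdiv_z (pg, x, svdup_s32 (1));
}

/* Multiplication by all ones likewise folds to the other operand.  */
svint32_t
mul_by_one (svbool_t pg, svint32_t x)
{
  return svmul_z (pg, x, svdup_s32 (1));
}
```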
-
Richard Biener authored
The following makes -ftrapv explicit. * gcc.dg/vect/vect.exp: Remove special-casing of tests named trapv-* * gcc.dg/vect/trapv-vect-reduc-4.c: Add dg-additional-options -ftrapv.
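For reference, the option now appears as an explicit DejaGnu directive in the affected test (exact placement per the committed test), e.g.:
```
/* { dg-additional-options "-ftrapv" } */
```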
-
Richard Biener authored
The following makes -fwrapv explicit. * gcc.dg/vect/vect.exp: Remove special-casing of tests named wrapv-* * gcc.dg/vect/wrapv-vect-7.c: Add dg-additional-options -fwrapv. * gcc.dg/vect/wrapv-vect-reduc-2char.c: Likewise. * gcc.dg/vect/wrapv-vect-reduc-2short.c: Likewise. * gcc.dg/vect/wrapv-vect-reduc-dot-s8b.c: Likewise. * gcc.dg/vect/wrapv-vect-reduc-pattern-2c.c: Likewise.
-
Richard Biener authored
The following makes -ffast-math explicit. * gcc.dg/vect/vect.exp: Remove special-casing of tests named fast-math-* * gcc.dg/vect/fast-math-bb-slp-call-1.c: Add dg-additional-options -ffast-math. * gcc.dg/vect/fast-math-bb-slp-call-2.c: Likewise. * gcc.dg/vect/fast-math-bb-slp-call-3.c: Likewise. * gcc.dg/vect/fast-math-ifcvt-1.c: Likewise. * gcc.dg/vect/fast-math-pr35982.c: Likewise. * gcc.dg/vect/fast-math-pr43074.c: Likewise. * gcc.dg/vect/fast-math-pr44152.c: Likewise. * gcc.dg/vect/fast-math-pr55281.c: Likewise. * gcc.dg/vect/fast-math-slp-27.c: Likewise. * gcc.dg/vect/fast-math-slp-38.c: Likewise. * gcc.dg/vect/fast-math-vect-call-1.c: Likewise. * gcc.dg/vect/fast-math-vect-call-2.c: Likewise. * gcc.dg/vect/fast-math-vect-complex-3.c: Likewise. * gcc.dg/vect/fast-math-vect-outer-7.c: Likewise. * gcc.dg/vect/fast-math-vect-pow-1.c: Likewise. * gcc.dg/vect/fast-math-vect-pow-2.c: Likewise. * gcc.dg/vect/fast-math-vect-pr25911.c: Likewise. * gcc.dg/vect/fast-math-vect-pr29925.c: Likewise. * gcc.dg/vect/fast-math-vect-reduc-5.c: Likewise. * gcc.dg/vect/fast-math-vect-reduc-7.c: Likewise. * gcc.dg/vect/fast-math-vect-reduc-8.c: Likewise. * gcc.dg/vect/fast-math-vect-reduc-9.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-add-double.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-add-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-add-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-add-pattern-double.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-add-pattern-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-add-pattern-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mla-double.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mla-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mla-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mls-double.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mls-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mls-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mul-double.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mul-float.c: Likewise. * gcc.dg/vect/complex/fast-math-bb-slp-complex-mul-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-add-double.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-add-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-add-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-add-pattern-double.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-add-pattern-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-add-pattern-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mla-double.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mla-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mla-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mls-double.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mls-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mls-half-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mul-double.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mul-float.c: Likewise. * gcc.dg/vect/complex/fast-math-complex-mul-half-float.c: Likewise.
-
Richard Biener authored
The following makes --param vect-max-version-for-alias-checks=0 explicit. * gcc.dg/vect/vect.exp: Remove special-casing of tests named no-vfa-* * gcc.dg/vect/no-vfa-pr29145.c: Add dg-additional-options --param vect-max-version-for-alias-checks=0. * gcc.dg/vect/no-vfa-vect-101.c: Likewise. * gcc.dg/vect/no-vfa-vect-102.c: Likewise. * gcc.dg/vect/no-vfa-vect-102a.c: Likewise. * gcc.dg/vect/no-vfa-vect-37.c: Likewise. * gcc.dg/vect/no-vfa-vect-43.c: Likewise. * gcc.dg/vect/no-vfa-vect-45.c: Likewise. * gcc.dg/vect/no-vfa-vect-49.c: Likewise. * gcc.dg/vect/no-vfa-vect-51.c: Likewise. * gcc.dg/vect/no-vfa-vect-53.c: Likewise. * gcc.dg/vect/no-vfa-vect-57.c: Likewise. * gcc.dg/vect/no-vfa-vect-61.c: Likewise. * gcc.dg/vect/no-vfa-vect-79.c: Likewise. * gcc.dg/vect/no-vfa-vect-depend-1.c: Likewise. * gcc.dg/vect/no-vfa-vect-depend-2.c: Likewise. * gcc.dg/vect/no-vfa-vect-depend-3.c: Likewise. * gcc.dg/vect/no-vfa-vect-dv-2.c: Likewise.
-
Richard Biener authored
The assert in SLP discovery when we handle masked operations is confusingly wide - all gather variants should be caught by the earlier STMT_VINFO_GATHER_SCATTER_P. * tree-vect-slp.cc (vect_build_slp_tree_2): Only expect IFN_MASK_LOAD for masked loads that are not STMT_VINFO_GATHER_SCATTER_P.
-
Alex Coplan authored
ChangeLog: * MAINTAINERS (CPU Port Maintainers): Add myself as aarch64 ldp/stp maintainer. (Various Maintainers): Add myself as pair fusion maintainer.
-
Martin Jambor authored
I have received an email from the Linaro infrastructure that the test gcc.dg/lto/pr115815_0.c which I added is failing on arm-eabi, and I realized that not only is it missing dg-require-effective-target global_constructor, it is actually missing any dejagnu directives at all, which means it is unnecessarily running at both -O0 and -O2 and there is an unnecessary run test too. All fixed by this patch. I have not actually verified that the failure goes away on arm-eabi but have very high hopes it will. I have verified that the test still checks for the bug and also that it passes by running: make -k check-gcc RUNTESTFLAGS="lto.exp=*pr115815*" gcc/testsuite/ChangeLog: 2024-10-14 Martin Jambor <mjambor@suse.cz> * gcc.dg/lto/pr115815_0.c: Add dejagnu directives.
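For context, LTO tests take their DejaGnu directives from the _0.c file; the kind of directives meant here look like the following (a hypothetical sketch, the exact set is whatever the committed test now contains):
```
/* { dg-lto-do link } */
/* { dg-require-effective-target global_constructor } */
```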
-
Tamar Christina authored
When finding the gsi to use for the code of the root statements, we should use that of the original statement rather than the gcond, which may be inside a pattern. Without this the emitted instructions may be discarded later. gcc/ChangeLog: PR tree-optimization/117140 * tree-vect-slp.cc (vectorize_slp_instance_root_stmt): Use gsi from original statement. gcc/testsuite/ChangeLog: PR tree-optimization/117140 * gcc.dg/vect/vect-early-break_129-pr117140.c: New test.
-
Tamar Christina authored
In GCC 14 VEC_PERM_EXPR was relaxed to be able to permute to a vector 2x larger than the input vectors. However, various passes and transformations were not updated to account for this. I have patches in these areas that I will be upstreaming with individual patches that expose them. This one is that vector lowering (veclower) tries to lower based on the size of the input vectors rather than the size of the output. As a consequence it creates an invalid vector of half the size. Luckily we ICE because the resulting nunits doesn't match the vector size. gcc/ChangeLog: * tree-vect-generic.cc (lower_vec_perm): Use output vector size instead of input vector when determining output nunits. gcc/testsuite/ChangeLog: * gcc.dg/vec-perm-lower.c: New test.
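A hedged source-level way to produce such a permute, where the VEC_PERM_EXPR result is twice as wide as each input, is GCC's __builtin_shufflevector:
```
typedef int v4si __attribute__ ((vector_size (16)));
typedef int v8si __attribute__ ((vector_size (32)));

/* The result has 8 lanes while each input has 4; lowering must size the
   pieces from the output vector type, not from the inputs.  */
v8si
interleave (v4si a, v4si b)
{
  return __builtin_shufflevector (a, b, 0, 4, 1, 5, 2, 6, 3, 7);
}
```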
-
Tamar Christina authored
This patch changes SVE to use Adv. SIMD movi 0 to clear SVE registers when not in SVE streaming mode. As the Neoverse Software Optimization Guides indicate, SVE mov #0 is not a zero-cost move. When in streaming mode we continue to use SVE's mov to clear the registers. Tests have already been updated. gcc/ChangeLog: * config/aarch64/aarch64.cc (aarch64_output_sve_mov_immediate): Use fmov for SVE zeros.
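A minimal sketch of the affected pattern (the assembly mnemonics are illustrative; the exact output is whatever the already-updated tests check for):
```
#include <arm_sve.h>

/* An Advanced SIMD write of zero to the low 128 bits also zeroes the rest of
   the Z register, so outside streaming mode GCC can now emit the cheaper
   Adv. SIMD zeroing move instead of SVE's "mov z0.b, #0".  */
svint32_t
zero_vec (void)
{
  return svdup_s32 (0);
}
```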
-
Tamar Christina authored
This patch extends our immediate SIMD generation cases to support generating integer immediates using a floating-point operation if the integer immediate maps to an exact FP value. As an example: uint32x4_t f1() { return vdupq_n_u32(0x3f800000); } currently generates: f1: adrp x0, .LC0 ldr q0, [x0, #:lo12:.LC0] ret i.e. a load, but with this change: f1: fmov v0.4s, 1.0e+0 ret Such immediates are common in e.g. our math routines in glibc because they are created to extract or mark part of an FP immediate as masks. gcc/ChangeLog: * config/aarch64/aarch64.cc (aarch64_sve_valid_immediate, aarch64_simd_valid_immediate): Refactor accepting modes and values. (aarch64_float_const_representable_p): Refactor and extract FP checks into ... (aarch64_real_float_const_representable_p): ...This and fix fail fallback from real_to_integer. (aarch64_advsimd_valid_immediate): Use it. gcc/testsuite/ChangeLog: * gcc.target/aarch64/const_create_using_fmov.c: New test.
-
Tamar Christina authored
The patch series will adjust how zeros are created. In principle it doesn't matter on what exact lane size a zero gets created, but this makes the tests a bit fragile. This preparation patch will update the testsuite to accept multiple ways of creating vector zeros, covering both the current syntax and the one being transitioned to in the series. gcc/testsuite/ChangeLog: * gcc.target/aarch64/ldp_stp_18.c: Update zero regexpr. * gcc.target/aarch64/memset-corner-cases.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_bf16.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_f16.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_f32.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_f64.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_s16.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_s32.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_s64.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_s8.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_u16.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_u32.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_u64.c: Likewise. * gcc.target/aarch64/sme/acle-asm/revd_u8.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acge_f16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acge_f32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acge_f64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acgt_f16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acgt_f32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acgt_f64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acle_f16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acle_f32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/acle_f64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/aclt_f16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/aclt_f32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/aclt_f64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/bic_s8.c: Likewise. * gcc.target/aarch64/sve/acle/asm/bic_u8.c: Likewise. * gcc.target/aarch64/sve/acle/asm/cmpuo_f16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/cmpuo_f32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/cmpuo_f64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_f16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_f32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_f64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_s16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_s32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_s64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_s8.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_u16.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_u32.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_u64.c: Likewise. * gcc.target/aarch64/sve/acle/asm/dup_u8.c: Likewise. * gcc.target/aarch64/sve/const_fold_div_1.c: Likewise. * gcc.target/aarch64/sve/const_fold_mul_1.c: Likewise. * gcc.target/aarch64/sve/dup_imm_1.c: Likewise. * gcc.target/aarch64/sve/fdup_1.c: Likewise. * gcc.target/aarch64/sve/fold_div_zero.c: Likewise. * gcc.target/aarch64/sve/fold_mul_zero.c: Likewise. * gcc.target/aarch64/sve/pcs/args_2.c: Likewise. * gcc.target/aarch64/sve/pcs/args_3.c: Likewise. * gcc.target/aarch64/sve/pcs/args_4.c: Likewise. * gcc.target/aarch64/vect-fmovd-zero.c: Likewise.
-
Christophe Lyon authored
In several places we are looking for a type twice or half as large as the type suffix: this patch introduces helper functions to avoid code duplication. long_type_suffix is similar to the SVE counterpart, but adds an 'expected_tclass' parameter. half_type_suffix is similar to it, but does not exist in SVE. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-shapes.cc (long_type_suffix): New. (half_type_suffix): New. (struct binary_move_narrow_def): Use new helper. (struct binary_move_narrow_unsigned_def): Likewise. (struct binary_rshift_narrow_def): Likewise. (struct binary_rshift_narrow_unsigned_def): Likewise. (struct binary_widen_def): Likewise. (struct binary_widen_n_def): Likewise. (struct binary_widen_opt_n_def): Likewise. (struct unary_widen_def): Likewise.
-
Christophe Lyon authored
Implement vsbcq vsbciq using the new MVE builtins framework. We re-use most of the code introduced by the previous patches. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-base.cc (class vadc_vsbc_impl): Add support for vsbciq and vsbcq. (vadciq, vadcq): Add new parameter. (vsbciq): New. (vsbcq): New. * config/arm/arm-mve-builtins-base.def (vsbciq): New. (vsbcq): New. * config/arm/arm-mve-builtins-base.h (vsbciq): New. (vsbcq): New. * config/arm/arm_mve.h (vsbciq): Delete. (vsbciq_m): Delete. (vsbcq): Delete. (vsbcq_m): Delete. (vsbciq_s32): Delete. (vsbciq_u32): Delete. (vsbciq_m_s32): Delete. (vsbciq_m_u32): Delete. (vsbcq_s32): Delete. (vsbcq_u32): Delete. (vsbcq_m_s32): Delete. (vsbcq_m_u32): Delete. (__arm_vsbciq_s32): Delete. (__arm_vsbciq_u32): Delete. (__arm_vsbciq_m_s32): Delete. (__arm_vsbciq_m_u32): Delete. (__arm_vsbcq_s32): Delete. (__arm_vsbcq_u32): Delete. (__arm_vsbcq_m_s32): Delete. (__arm_vsbcq_m_u32): Delete. (__arm_vsbciq): Delete. (__arm_vsbciq_m): Delete. (__arm_vsbcq): Delete. (__arm_vsbcq_m): Delete.
-
Christophe Lyon authored
Implement vadcq using the new MVE builtins framework. We re-use most of the code introduced by the previous patch to support vadciq: we just need to initialize carry from the input parameter. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-base.cc (vadcq_vsbc): Add support for vadcq. * config/arm/arm-mve-builtins-base.def (vadcq): New. * config/arm/arm-mve-builtins-base.h (vadcq): New. * config/arm/arm_mve.h (vadcq): Delete. (vadcq_m): Delete. (vadcq_s32): Delete. (vadcq_u32): Delete. (vadcq_m_s32): Delete. (vadcq_m_u32): Delete. (__arm_vadcq_s32): Delete. (__arm_vadcq_u32): Delete. (__arm_vadcq_m_s32): Delete. (__arm_vadcq_m_u32): Delete. (__arm_vadcq): Delete. (__arm_vadcq_m): Delete.
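As a usage-level sketch of the two intrinsics' contract (per the ACLE; hedged, since this series only changes how they are implemented inside GCC): vadciq supplies its own initial carry, while vadcq reads the incoming carry from *carry, and both write the carry-out back through the pointer. A 256-bit add can therefore chain them:
```
#include <arm_mve.h>

/* Low 128 bits: no carry-in is consumed, carry-out lands in *carry.  */
uint32x4_t
add_lo (uint32x4_t a, uint32x4_t b, unsigned *carry)
{
  return vadciq (a, b, carry);
}

/* High 128 bits: consumes the carry from the low half and updates it.  */
uint32x4_t
add_hi (uint32x4_t a, uint32x4_t b, unsigned *carry)
{
  return vadcq (a, b, carry);
}
```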
-
Christophe Lyon authored
Implement vadciq using the new MVE builtins framework. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-base.cc (class vadc_vsbc_impl): New. (vadciq): New. * config/arm/arm-mve-builtins-base.def (vadciq): New. * config/arm/arm-mve-builtins-base.h (vadciq): New. * config/arm/arm_mve.h (vadciq): Delete. (vadciq_m): Delete. (vadciq_s32): Delete. (vadciq_u32): Delete. (vadciq_m_s32): Delete. (vadciq_m_u32): Delete. (__arm_vadciq_s32): Delete. (__arm_vadciq_u32): Delete. (__arm_vadciq_m_s32): Delete. (__arm_vadciq_m_u32): Delete. (__arm_vadciq): Delete. (__arm_vadciq_m): Delete.
-
Christophe Lyon authored
Factorize vadc/vsbc and vadci/vsbci so that they use the same parameterized names. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/iterators.md (mve_insn): Add VADCIQ_M_S, VADCIQ_M_U, VADCIQ_U, VADCIQ_S, VADCQ_M_S, VADCQ_M_U, VADCQ_S, VADCQ_U, VSBCIQ_M_S, VSBCIQ_M_U, VSBCIQ_S, VSBCIQ_U, VSBCQ_M_S, VSBCQ_M_U, VSBCQ_S, VSBCQ_U. (VADCIQ, VSBCIQ): Merge into ... (VxCIQ): ... this. (VADCIQ_M, VSBCIQ_M): Merge into ... (VxCIQ_M): ... this. (VSBCQ, VADCQ): Merge into ... (VxCQ): ... this. (VSBCQ_M, VADCQ_M): Merge into ... (VxCQ_M): ... this. * config/arm/mve.md (mve_vadciq_<supf>v4si, mve_vsbciq_<supf>v4si): Merge into ... (@mve_<mve_insn>q_<supf>v4si): ... this. (mve_vadciq_m_<supf>v4si, mve_vsbciq_m_<supf>v4si): Merge into ... (@mve_<mve_insn>q_m_<supf>v4si): ... this. (mve_vadcq_<supf>v4si, mve_vsbcq_<supf>v4si): Merge into ... (@mve_<mve_insn>q_<supf>v4si): ... this. (mve_vadcq_m_<supf>v4si, mve_vsbcq_m_<supf>v4si): Merge into ... (@mve_<mve_insn>q_m_<supf>v4si): ... this.
-
Christophe Lyon authored
This patch adds the vadc_vsbc shape description. 2024-08-28 Christophe Lyon <chrirstophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-shapes.cc (vadc_vsbc): New. * config/arm/arm-mve-builtins-shapes.h (vadc_vsbc): New.
-
Christophe Lyon authored
Since we rewrote the implementation of vshlcq intrinsics, we no longer need these expanders. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-builtins.cc (arm_ternop_unone_none_unone_imm_qualifiers) (arm_ternop_none_none_unone_imm_qualifiers): Delete. * config/arm/arm_mve_builtins.def (vshlcq_m_vec_s) (vshlcq_m_carry_s, vshlcq_m_vec_u, vshlcq_m_carry_u): Delete. * config/arm/mve.md (mve_vshlcq_vec_<supf><mode>): Delete. (mve_vshlcq_carry_<supf><mode>): Delete. (mve_vshlcq_m_vec_<supf><mode>): Delete. (mve_vshlcq_m_carry_<supf><mode>): Delete.
-
Christophe Lyon authored
Implement vshlc using the new MVE builtins framework. 2024-08-28 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-base.cc (class vshlc_impl): New. (vshlc): New. * config/arm/arm-mve-builtins-base.def (vshlcq): New. * config/arm/arm-mve-builtins-base.h (vshlcq): New. * config/arm/arm-mve-builtins.cc (function_instance::has_inactive_argument): Handle vshlc. * config/arm/arm_mve.h (vshlcq): Delete. (vshlcq_m): Delete. (vshlcq_s8): Delete. (vshlcq_u8): Delete. (vshlcq_s16): Delete. (vshlcq_u16): Delete. (vshlcq_s32): Delete. (vshlcq_u32): Delete. (vshlcq_m_s8): Delete. (vshlcq_m_u8): Delete. (vshlcq_m_s16): Delete. (vshlcq_m_u16): Delete. (vshlcq_m_s32): Delete. (vshlcq_m_u32): Delete. (__arm_vshlcq_s8): Delete. (__arm_vshlcq_u8): Delete. (__arm_vshlcq_s16): Delete. (__arm_vshlcq_u16): Delete. (__arm_vshlcq_s32): Delete. (__arm_vshlcq_u32): Delete. (__arm_vshlcq_m_s8): Delete. (__arm_vshlcq_m_u8): Delete. (__arm_vshlcq_m_s16): Delete. (__arm_vshlcq_m_u16): Delete. (__arm_vshlcq_m_s32): Delete. (__arm_vshlcq_m_u32): Delete. (__arm_vshlcq): Delete. (__arm_vshlcq_m): Delete. * config/arm/mve.md (mve_vshlcq_<supf><mode>): Add '@' prefix. (mve_vshlcq_m_<supf><mode>): Likewise.
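For reference, the user-level shape of the intrinsic being reimplemented is roughly the following (per the ACLE; a hedged sketch, since the commit only changes the implementation machinery inside GCC):
```
#include <arm_mve.h>

/* Left-shifts across the vector by 8 bits: the incoming bits are taken from
   *carry and the bits shifted out are written back to *carry, which lets
   wide shifts be chained over several vectors.  */
uint32x4_t
shift_left_chained (uint32x4_t v, uint32_t *carry)
{
  return vshlcq (v, carry, 8);
}
```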
-
Christophe Lyon authored
This patch adds the vshlc shape description. 2024-08-28 Christophe Lyon <chrirstophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-shapes.cc (vshlc): New. * config/arm/arm-mve-builtins-shapes.h (vshlc): New.
-