- Jan 16, 2021
-
GCC Administrator authored
-
- Jan 15, 2021
-
Carl Love authored
2021-01-15  Carl Love  <cel@us.ibm.com>

gcc/ChangeLog:

        * config/rs6000/altivec.h (vec_mulh, vec_div, vec_dive, vec_mod):
        New defines.
        * config/rs6000/altivec.md (VIlong): Move define to file vsx.md.
        * config/rs6000/rs6000-builtin.def (DIVES_V4SI, DIVES_V2DI,
        DIVEU_V4SI, DIVEU_V2DI, DIVS_V4SI, DIVS_V2DI, DIVU_V4SI,
        DIVU_V2DI, MODS_V2DI, MODS_V4SI, MODU_V2DI, MODU_V4SI,
        MULHS_V2DI, MULHS_V4SI, MULHU_V2DI, MULHU_V4SI, MULLD_V2DI):
        Add builtin define.
        (MULH, DIVE, MOD): Add new BU_P10_OVERLOAD_2 definitions.
        * config/rs6000/rs6000-call.c (VSX_BUILTIN_VEC_DIV,
        VSX_BUILTIN_VEC_DIVE, P10_BUILTIN_VEC_MOD, P10_BUILTIN_VEC_MULH):
        New overloaded definitions.
        (builtin_function_type) [P10V_BUILTIN_DIVEU_V4SI,
        P10V_BUILTIN_DIVEU_V2DI, P10V_BUILTIN_DIVU_V4SI,
        P10V_BUILTIN_DIVU_V2DI, P10V_BUILTIN_MODU_V2DI,
        P10V_BUILTIN_MODU_V4SI, P10V_BUILTIN_MULHU_V2DI,
        P10V_BUILTIN_MULHU_V4SI]: Add case statement for builtins.
        * config/rs6000/rs6000.md (bits): Add new attribute sizes V4SI, V2DI.
        * config/rs6000/vsx.md (VIlong): Moved from config/rs6000/altivec.md.
        (UNSPEC_VDIVES, UNSPEC_VDIVEU): New unspec definitions.
        (vsx_mul_v2di): Add if TARGET_POWER10 statement.
        (vsx_udiv_v2di): Add if TARGET_POWER10 statement.
        (dives_<mode>, diveu_<mode>, div<mode>3, uvdiv<mode>3, mods_<mode>,
        modu_<mode>, mulhs_<mode>, mulhu_<mode>, mulv2di3): Add define_insn,
        mode is VIlong.
        * doc/extend.texi (vec_mulh, vec_mul, vec_div, vec_dive, vec_mod):
        Add builtin descriptions.

gcc/testsuite/ChangeLog:

        * gcc.target/powerpc/builtins-1-p10-runnable.c: New test file.
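For illustration, a hedged usage sketch of the new integer overloads documented in extend.texi (hypothetical code that assumes -mcpu=power10; the committed test is builtins-1-p10-runnable.c):

  #include <altivec.h>

  /* Hypothetical sketch, not the committed testcase: on Power10 the new
     overloads accept 64-bit integer vector element types directly.  */
  vector unsigned long long
  div_and_mod (vector unsigned long long a, vector unsigned long long b,
               vector unsigned long long *rem)
  {
    *rem = vec_mod (a, b);   /* new overload added by this patch */
    return vec_div (a, b);   /* new integer overload added by this patch */
  }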
-
Eric Botcazou authored
Unlike the other global variables, it is not reset at the beginning of a function so can leak into the next one. gcc/ChangeLog: * final.c (final_start_function_1): Reset force_source_line.
-
Jerry DeLisle authored
libgfortran/ChangeLog:

        * runtime/ISO_Fortran_binding.c (CFI_establish): Fixed signed char
        arrays.  Signed char or uint8_t arrays would cause crashes unless
        an element size is specified.

gcc/testsuite/ChangeLog:

        * gfortran.dg/iso_fortran_binding_uint8_array.f90: New test.
        * gfortran.dg/iso_fortran_binding_uint8_array_driver.c: New test.
-
Nathan Sidwell authored
This was an assert that was too picky.  The reason I had to alter array construction was that on stream-in we cannot dynamically determine a type's dependentness, so on stream-out of the 'problematic' types we save the dependentness for reconstruction.  Fortunately the paths into cp_build_qualified_type_real from stream-in with arrays do have the array's dependentness set as needed.

PR c++/98538

gcc/cp/
        * tree.c (cp_build_qualified_type_real): Propagate an array's
        dependentness to the copy, if known.

gcc/testsuite/
        * g++.dg/template/pr98538.C: New.
-
Nathan Sidwell authored
I missed some testsuite fallout with my patch to fix mkdeps file mangling. PR preprocessor/95253 gcc/testsuite/ * g++.dg/modules/dep-1_a.C: Adjust expected output. * g++.dg/modules/dep-1_b.C: Likewise. * g++.dg/modules/dep-2.C: Likewise.
-
Jakub Jelinek authored
The following patch generalizes the PR64309 simplifications, so that instead of working only with constants 1 and 1 it works with any two power of two constants, and works also for right shift (in that case it rules out the first one being negative, as it is arithmetic shift then).

2021-01-15  Jakub Jelinek  <jakub@redhat.com>

PR tree-optimization/96669
        * match.pd (((1 << A) & 1) != 0 -> A == 0,
        ((1 << A) & 1) == 0 -> A != 0): Generalize for 1s replaced by
        possibly different power of two constants and to right shift too.

        * gcc.dg/tree-ssa/pr96669-1.c: New test.
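To make the generalization concrete, here is a hedged sketch of the kind of source the extended rules now fold (hypothetical functions, not the committed pr96669-1.c testcase); both constants in each pattern are powers of two:

  /* Folds to a == 3: bit 3 of (1 << a) is set only when a == 3.  */
  int
  f1 (int a)
  {
    return ((1 << a) & 8) != 0;
  }

  /* Folds to a != 2: bit 4 of (4 << a) is set only when a == 2.  */
  int
  f2 (int a)
  {
    return ((4 << a) & 16) == 0;
  }

  /* Right-shift form (first constant non-negative): bit 1 of (16 >> a)
     is set only when a == 3.  */
  int
  f3 (unsigned a)
  {
    return ((16 >> a) & 2) != 0;
  }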
-
Jakub Jelinek authored
This patch simplifies comparisons that test the sign bit xored together.  If the comparisons are both < 0 or both >= 0, then we should xor the operands together and compare the result to < 0; if the comparisons are different, we should compare to >= 0.

2021-01-15  Jakub Jelinek  <jakub@redhat.com>

PR tree-optimization/96681
        * match.pd ((x < 0) ^ (y < 0) to (x ^ y) < 0): New simplification.
        ((x >= 0) ^ (y >= 0) to (x ^ y) < 0): Likewise.
        ((x < 0) ^ (y >= 0) to (x ^ y) >= 0): Likewise.
        ((x >= 0) ^ (y < 0) to (x ^ y) >= 0): Likewise.

        * gcc.dg/tree-ssa/pr96681.c: New test.
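A hedged sketch of the shapes these new rules cover (hypothetical functions, not the committed pr96681.c testcase):

  /* Same direction: becomes (x ^ y) < 0, i.e. "signs differ".  */
  int
  f1 (int x, int y)
  {
    return (x < 0) ^ (y < 0);
  }

  /* Mixed direction: becomes (x ^ y) >= 0, i.e. "signs agree".  */
  int
  f2 (int x, int y)
  {
    return (x >= 0) ^ (y < 0);
  }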
-
Alexandre Oliva authored
The -dumpbase and -dumpdir options are excluded from the producer string output in debug information, but -dumpbase-ext was not. This patch excludes it as well. for gcc/ChangeLog * opts.c (gen_command_line_string): Exclude -dumpbase-ext.
-
Jason Merrill authored
Here, initializing from { } implies a call to the default constructor for base.  We were then seeing that we're initializing a base subobject, so we tried to copy the result of that call.  This is clearly wrong; we should initialize the base directly from its default constructor.  This patch does a lot of refactoring of unsafe_copy_elision_p and adds make_safe_copy_elision that will also try to do the base constructor rewriting from the last patch.

gcc/cp/ChangeLog:

PR c++/98642
        * call.c (unsafe_return_slot_p): Return int.
        (init_by_return_slot_p): Split out from...
        (unsafe_copy_elision_p): ...here.
        (unsafe_copy_elision_p_opt): New name for old meaning.
        (build_over_call): Adjust.
        (make_safe_copy_elision): New.
        * typeck2.c (split_nonconstant_init_1): Elide copy from safe
        list-initialization.
        * cp-tree.h: Adjust.

gcc/testsuite/ChangeLog:

PR c++/98642
        * g++.dg/cpp1z/elide5.C: New test.
-
Jason Merrill authored
While working on PR98642 I noticed that in this testcase we were eliding the copy, calling the complete default constructor to initialize the B base subobject, and therefore wrongly initializing the non-existent A subobject of B.  The test doesn't care whether the copy is elided or not, but checks that we are actually calling a base constructor for B.  The patch preserves the elision, but changes the initializer to call the base constructor instead of the complete constructor.

gcc/cp/ChangeLog:

        * call.c (base_ctor_for, make_base_init_ok): New.
        (build_over_call): Use make_base_init_ok.

gcc/testsuite/ChangeLog:

        * g++.dg/cpp1z/elide4.C: New test.
-
Tamar Christina authored
This adds implementation for the optabs for complex operations.  With this the following C code:

  void g (float complex a[restrict N], float complex b[restrict N],
          float complex c[restrict N])
  {
    for (int i=0; i < N; i++)
      c[i] = a[i] * b[i];
  }

generates

NEON:

  g:
          movi    v3.4s, 0
          mov     x3, 0
          .p2align 3,,7
  .L2:
          mov     v0.16b, v3.16b
          ldr     q2, [x1, x3]
          ldr     q1, [x0, x3]
          fcmla   v0.4s, v1.4s, v2.4s, #0
          fcmla   v0.4s, v1.4s, v2.4s, #90
          str     q0, [x2, x3]
          add     x3, x3, 16
          cmp     x3, 1600
          bne     .L2
          ret

SVE:

  g:
          mov     x3, 0
          mov     x4, 400
          ptrue   p1.b, all
          whilelo p0.s, xzr, x4
          mov     z3.s, #0
          .p2align 3,,7
  .L2:
          ld1w    z1.s, p0/z, [x0, x3, lsl 2]
          ld1w    z2.s, p0/z, [x1, x3, lsl 2]
          movprfx z0, z3
          fcmla   z0.s, p1/m, z1.s, z2.s, #0
          fcmla   z0.s, p1/m, z1.s, z2.s, #90
          st1w    z0.s, p0, [x2, x3, lsl 2]
          incw    x3
          whilelo p0.s, x3, x4
          b.any   .L2
          ret

SVE2 (with int instead of float):

  g:
          mov     x3, 0
          mov     x4, 400
          mov     z3.b, #0
          whilelo p0.s, xzr, x4
          .p2align 3,,7
  .L2:
          ld1w    z1.s, p0/z, [x0, x3, lsl 2]
          ld1w    z2.s, p0/z, [x1, x3, lsl 2]
          movprfx z0, z3
          cmla    z0.s, z1.s, z2.s, #0
          cmla    z0.s, z1.s, z2.s, #90
          st1w    z0.s, p0, [x2, x3, lsl 2]
          incw    x3
          whilelo p0.s, x3, x4
          b.any   .L2
          ret

gcc/ChangeLog:

        * config/aarch64/aarch64-simd.md (cml<fcmac1><conj_op><mode>4,
        cmul<conj_op><mode>3): New.
        * config/aarch64/iterators.md (UNSPEC_FCMUL, UNSPEC_FCMUL180,
        UNSPEC_FCMLA_CONJ, UNSPEC_FCMLA180_CONJ, UNSPEC_CMLA_CONJ,
        UNSPEC_CMLA180_CONJ, UNSPEC_CMUL, UNSPEC_CMUL180, FCMLA_OP,
        FCMUL_OP, conj_op, rotsplit1, rotsplit2, fcmac1, sve_rot1,
        sve_rot2, SVE2_INT_CMLA_OP, SVE2_INT_CMUL_OP, SVE2_INT_CADD_OP):
        New.
        (rot): Add UNSPEC_FCMUL, UNSPEC_FCMUL180.
        (rot_op): Renamed to conj_op.
        * config/aarch64/aarch64-sve.md (cml<fcmac1><conj_op><mode>4,
        cmul<conj_op><mode>3): New.
        * config/aarch64/aarch64-sve2.md (cml<fcmac1><conj_op><mode>4,
        cmul<conj_op><mode>3): New.
-
Jason Merrill authored
build_vec_init_elt models initialization from some arbitrary object of the type, i.e. copy, but in the case of list-initialization we don't do a copy from the elements, we initialize them directly. gcc/cp/ChangeLog: PR c++/63707 * tree.c (build_vec_init_expr): Don't call build_vec_init_elt if we got a CONSTRUCTOR. gcc/testsuite/ChangeLog: PR c++/63707 * g++.dg/cpp0x/initlist-array13.C: New test.
-
Alexandre Oliva authored
Use __builtin_alloca.  Some systems don't have alloca.h or alloca.

Co-Authored-By: Olivier Hainque <hainque@adacore.com>

for gcc/testsuite/ChangeLog

        * gcc.dg/analyzer/alloca-leak.c: Drop alloca.h, use builtin.
        * gcc.dg/analyzer/data-model-1.c: Likewise.
        * gcc.dg/analyzer/malloc-1.c: Likewise.
        * gcc.dg/analyzer/malloc-paths-8.c: Likewise.
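The change in each test is essentially this (a hedged before/after sketch with a hypothetical function, not one of the listed files):

  /* Before: #include <alloca.h> and a call to alloca (n).
     After: no header at all, just the GCC builtin.  */
  void
  use_scratch (unsigned n)
  {
    char *p = __builtin_alloca (n);
    __builtin_memset (p, 0, n);
  }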
-
Jakub Jelinek authored
The fix for this PR didn't come with any test coverage, I've added tests that make sure we optimize it no matter what order of the x ^ y ^ z operands is used. 2021-01-15 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/96671 * gcc.dg/tree-ssa/pr96671-1.c: New test. * gcc.dg/tree-ssa/pr96671-2.c: New test.
-
David Malcolm authored
In one of the selftests in g:f1096055 I didn't consider that paths can contain backslashes, which happens for the tempfiles on Windows hosts. gcc/ChangeLog: PR bootstrap/98696 * diagnostic.c (selftest::test_print_parseable_fixits_bytes_vs_display_columns): Escape the tempfile name when constructing the expected output.
-
Jakub Jelinek authored
Ok, here is an updated patch which fixes what I found, and implements what has been discussed on the mailing list and on IRC, i.e. if the types are compatible as well as alias sets are same, then it prints what c_fold_indirect_ref_for_warn managed to create, otherwise it uses that info for printing offsets using offsetof (except when it starts with ARRAY_REFs, because one can't have offsetof (struct T[2][2], [1][0].x.y)).

The uninit-38.c test (which was the only one I believe which had tests on the exact spelling of MEM_REF printing) contains mainly changes to have space before * for pointer types (as that is how the C pretty-printers normally print types, int * rather than int*), plus what might be considered a regression from what Martin printed, but it is actually a correctness fix.  When the arg is a pointer with type pointer to VLA with char element type (let's say the pointer is p), which is what happens in several of the uninit-38.c tests, omitting the (char *) cast is incorrect, as p + 1 is not the 1 byte after p, but pointer to the end of the VLA.  It only happened to work because of the hacks (which I don't like at all and are dangerous, DECL_ARTIFICIAL var names with dot inside can be pretty much anything, e.g. a lot of passes construct their helper vars from some prefix that designates intended use of the var plus numeric suffix), where the a.1 pointer to VLA is printed as a which if one is lucky happens to be a variable with VLA type (rather than pointer to it), and for such vars a + 1 is indeed &a[0] + 1 rather than &a + 1.  But if we want to do this reliably, we'd need to make sure it comes from VLA (e.g. verify that the SSA_NAME is defined to __builtin_alloca_with_align and that there exists a corresponding VAR_DECL with DECL_VALUE_EXPR that has the a.1 variable in it).

2021-01-15  Jakub Jelinek  <jakub@redhat.com>

PR tree-optimization/98597
        * c-pretty-print.c: Include options.h.
        (c_fold_indirect_ref_for_warn): New function.
        (print_mem_ref): Use it.  If it returns something that has
        compatible type and is TBAA compatible with zero offset, print it
        and return, otherwise print it using offsetof syntax or array ref
        syntax.  Fix up printing if MEM_REFs first operand is ADDR_EXPR,
        or when the first argument has pointer to array type.  Print
        pointers using the standard formatting.

        * gcc.dg/uninit-38.c: Expect a space in between type name and
        asterisk.  Expect for now a (char *) cast for VLAs.
        * gcc.dg/uninit-40.c: New test.
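A small hedged illustration of the VLA pointer-arithmetic point made above (hypothetical code, not the uninit-38.c testcase): with a pointer-to-VLA type, p + 1 advances by a whole row, not by one byte, so the (char *) cast is what makes "one byte past p" printable.

  void
  f (int n)
  {
    char a[n][n];
    char (*p)[n] = a;                       /* p points to a VLA of n chars */
    char *past_the_row = (char *) (p + 1);  /* n bytes after p */
    char *one_byte_in  = (char *) p + 1;    /* exactly 1 byte after p */
    (void) past_the_row;
    (void) one_byte_in;
  }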
-
Jakub Jelinek authored
The PR98597 patch regresses on _Atomic-3.c, as in the C FE building an array type with qualified elements results in a type incompatible with when an array type with unqualified elements is qualified afterwards. This patch adds a workaround for that. 2021-01-15 Jakub Jelinek <jakub@redhat.com> * c-typeck.c (c_finish_omp_clauses): For reduction build array with unqualified element type and then call c_build_qualified_type on the ARRAY_TYPE.
-
Kyrylo Tkachov authored
This patch reimplements some more intrinsics using RTL builtins in the straightforward way.  Thankfully most of the RTL infrastructure is already in place for it.

gcc/
        * config/aarch64/aarch64-simd.md (*aarch64_<su>mlsl_hi<mode>):
        Rename to...
        (aarch64_<su>mlsl_hi<mode>): ... This.
        (aarch64_<su>mlsl_hi<mode>): Define.
        (*aarch64_<su>mlsl<mode): Rename to...
        (aarch64_<su>mlsl<mode): ... This.
        * config/aarch64/aarch64-simd-builtins.def (smlsl, umlsl,
        smlsl_hi, umlsl_hi): Define builtins.
        * config/aarch64/arm_neon.h (vmlsl_high_s8, vmlsl_high_s16,
        vmlsl_high_s32, vmlsl_high_u8, vmlsl_high_u16, vmlsl_high_u32,
        vmlsl_s8, vmlsl_s16, vmlsl_s32, vmlsl_u8, vmlsl_u16, vmlsl_u32):
        Reimplement with builtins.
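For reference, a hedged usage sketch of one of the reimplemented intrinsics (hypothetical code, not one of the changed files); its semantics, a widening multiply-subtract, are unchanged by the rewrite:

  #include <arm_neon.h>

  /* vmlsl_s8 computes acc - (b * c), with b and c widened from 8 to 16 bits.
     Hypothetical usage sketch; behaviour is the same before and after.  */
  int16x8_t
  widening_mls (int16x8_t acc, int8x8_t b, int8x8_t c)
  {
    return vmlsl_s8 (acc, b, c);
  }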
-
Uros Bizjak authored
2021-01-15 Uroš Bizjak <ubizjak@gmail.com> gcc/ * config/i386/i386-c.c (ix86_target_macros): Use cpp_define_formatted for __SIZEOF_FLOAT80__ definition.
-
Nathan Sidwell authored
Make doesn't need ':' quoting (in a filename). PR preprocessor/95253 libcpp/ * mkdeps.c (munge): Do not escape ':'.
-
Nathan Sidwell authored
-fsyntax-only is handled specially in the driver and causes it to add '-o /dev/null' (or a suitable OS-specific variant thereof). PCH is handled in the language driver. I'd not sufficiently protected the -fmodule-only action of adding a dummy assembler from the actions of -fsyntax-only, so we ended up with two -o options. PR c++/98591 gcc/cp/ * lang-specs.h: Fix handling of -fmodule-only with -fsyntax-only.
-
Richard Sandiford authored
This patch adds a small target-specific pass to remove redundant SVE PTEST instructions.  There are two important uses of this:

- Removing PTESTs after WHILELOs (PR88836).  The original testcase no longer exhibits the problem due to more recent optimisations, but it can still be seen in simple cases like the one in the patch.  It also shows up in 450.soplex.

- Removing PTESTs after RDFFRs in ACLE code.

This is just an interim “solution” for GCC 11.  I hope to replace it with something generic and target-independent for GCC 12.  However, the use cases above are very important for performance, so I'd rather not leave the bug unfixed for yet another release cycle.

Since the pass is intended to be short-lived, I've not added a command-line option for it.  The pass can be disabled using -fdisable-rtl-cc_fusion if necessary.

Although what the pass does is independent of SVE, it's motivated only by SVE cases and doesn't trigger for any non-SVE test I've seen.  I've therefore gated it on TARGET_SVE and restricted it to PTEST patterns.

gcc/
PR target/88836
        * config.gcc (aarch64*-*-*): Add aarch64-cc-fusion.o to extra_objs.
        * Makefile.in (RTL_SSA_H): New variable.
        * config/aarch64/t-aarch64 (aarch64-cc-fusion.o): New rule.
        * config/aarch64/aarch64-protos.h (make_pass_cc_fusion): Declare.
        * config/aarch64/aarch64-passes.def: Add pass_cc_fusion after
        pass_combine.
        * config/aarch64/aarch64-cc-fusion.cc: New file.

gcc/testsuite/
PR target/88836
        * gcc.target/aarch64/sve/acle/general/ldff1_8.c: New test.
        * gcc.target/aarch64/sve/ptest_1.c: Likewise.
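A hedged sketch of the kind of "simple case" mentioned above (hypothetical code, not the committed ptest_1.c): when such a loop is vectorized for SVE, the WHILELO that computes the next predicate already sets the condition flags, so a separate PTEST of that predicate before the branch is redundant and can be removed.

  void
  add_in_place (float *x, const float *y, int n)
  {
    for (int i = 0; i < n; ++i)
      x[i] += y[i];
  }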
-
Richard Sandiford authored
Noticed while working on something else that the insn_change_watermark destructor could call cancel_changes for changes that no longer exist. The loop in cancel_changes is a nop in that case, but: num_changes = num; can mess things up. I think this would only affect nested uses of insn_change_watermark. gcc/ * recog.h (insn_change_watermark::~insn_change_watermark): Avoid calling cancel_changes for changes that no longer exist.
-
Richard Sandiford authored
s/ref/reg/ on a previously unused function name. gcc/ * rtl-ssa/functions.h (function_info::ref_defs): Rename to... (function_info::reg_defs): ...this. * rtl-ssa/member-fns.inl (function_info::ref_defs): Rename to... (function_info::reg_defs): ...this.
-
Marius Hillenbrand authored
One of the test cases failed to link because of missing paths to libatomic.  Reuse procedures in lib/atomic-dg.exp to gather these paths.

gcc/testsuite/ChangeLog:

2021-01-15  Marius Hillenbrand  <mhillen@linux.ibm.com>

        * gcc.target/s390/s390.exp: Call lib atomic-dg.exp to link
        libatomic into testcases in gcc.target/s390/md.
        * gcc.target/s390/md/atomic_exchange-1.c: Remove now unnecessary
        -latomic.
-
Christophe Lyon authored
This patch adds implementations for vceqq_p64, vceqz_p64 and vceqzq_p64 intrinsics.

vceqq_p64 uses the existing vceq_p64 after splitting the input vectors into their high and low halves.

vceqz[q] simply call the vceq and vceqq with a second argument equal to zero.

The added (executable) testcases make sure that the poly64x2_t variants have results with one element of all zeroes (false) and the other element with all bits set to one (true).

2021-01-15  Christophe Lyon  <christophe.lyon@linaro.org>

gcc/
PR target/71233
        * config/arm/arm_neon.h (vceqz_p64, vceqq_p64, vceqzq_p64): New.

gcc/testsuite/
PR target/71233
        * gcc.target/aarch64/advsimd-intrinsics/p64_p128.c: Add tests for
        vceqz_p64, vceqq_p64 and vceqzq_p64.
        * gcc.target/arm/simd/vceqz_p64.c: New test.
        * gcc.target/arm/simd/vceqzq_p64.c: New test.
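A hedged sketch of the split-into-halves approach described above, written against the pre-existing intrinsics (the sketch_* names are hypothetical; the real definitions live in arm_neon.h):

  #include <arm_neon.h>

  /* Compare each 64-bit half with the existing vceq_p64, then recombine.
     Not the actual arm_neon.h implementation.  */
  static inline uint64x2_t
  sketch_vceqq_p64 (poly64x2_t a, poly64x2_t b)
  {
    uint64x1_t lo = vceq_p64 (vget_low_p64 (a), vget_low_p64 (b));
    uint64x1_t hi = vceq_p64 (vget_high_p64 (a), vget_high_p64 (b));
    return vcombine_u64 (lo, hi);
  }

  /* vceqz: the same comparison against an all-zero second operand.  */
  static inline uint64x1_t
  sketch_vceqz_p64 (poly64x1_t a)
  {
    return vceq_p64 (a, vcreate_p64 (0));
  }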
-
Christophe Lyon authored
This reverts commit 1a630642.
-
Richard Biener authored
The testcases show that we fail to disregard alignment for invariant loads. The patch handles them like we handle gather and scatter. 2021-01-15 Richard Biener <rguenther@suse.de> PR tree-optimization/96376 * tree-vect-stmts.c (get_load_store_type): Disregard alignment for VMAT_INVARIANT.
-
Martin Liska authored
gcc/ChangeLog: * doc/install.texi: Document that some tests need pytest module. * doc/sourcebuild.texi: Likewise. gcc/testsuite/ChangeLog: * lib/gcov.exp: Use 'env python3' for execution of pytests. Check that pytest accepts all needed options first. Improve formatting of PASS/FAIL lines.
-
Richard Biener authored
This aligns p so that the testcase is meaningful for targets without a hw misaligned access. 2021-01-15 Richard Biener <rguenther@suse.de> PR testsuite/96147 * gcc.dg/vect/bb-slp-32.c: Align p.
-
Richard Biener authored
This changes gcc.dg/vect/bb-slp-9.c to scan for a vectorized load instead of a vectorized BB which then correctly captures the unaligned load we try to test and not some intermediate built from scalar vector. 2021-01-15 Richard Biener <rguenther@suse.de> PR testsuite/96147 * gcc.dg/vect/bb-slp-9.c: Scan for a vector load transform.
-
Richard Biener authored
gcc.dg/vect/slp-45.c failed to key the vectorization capability scanning on vect_hw_misalign. Since the stores are strided they cannot be (all) analyzed to be aligned. 2021-01-15 Richard Biener <rguenther@suse.de> PR testsuite/96147 * gcc.dg/vect/slp-45.c: Key scanning on vect_hw_misalign.
-
Richard Biener authored
This removes scanning that's too difficult to get correct for all targets, leaving the correctness test for them and keeping the vectorization capability check to vect_hw_misalign targets. 2021-01-15 Richard Biener <rguenther@suse.de> PR testsuite/96147 * gcc.dg/vect/slp-43.c: Remove ! vect_hw_misalign scan.
-
Christophe Lyon authored
This patch adds implementations for vceqq_p64, vceqz_p64 and vceqzq_p64 intrinsics.

vceqq_p64 uses the existing vceq_p64 after splitting the input vectors into their high and low halves.

vceqz[q] simply call the vceq and vceqq with a second argument equal to zero.

The added (executable) testcases make sure that the poly64x2_t variants have results with one element of all zeroes (false) and the other element with all bits set to one (true).

2021-01-15  Christophe Lyon  <christophe.lyon@linaro.org>

gcc/
PR target/71233
        * config/arm/arm_neon.h (vceqz_p64, vceqq_p64, vceqzq_p64): New.

gcc/testsuite/
PR target/71233
        * gcc.target/aarch64/advsimd-intrinsics/p64_p128.c: Add tests for
        vceqz_p64, vceqq_p64 and vceqzq_p64.
-
Richard Biener authored
The testcase morphed in a way no longer testing what it was originally supposed to do, and slightly altering it shows the original issue isn't fixed (anymore).  The limit set as a result of PR91403 (and dups) prevents the issue for larger arrays, but the testcase has

  double a[128][128];

which results in a group size of "just" 512 (the limit is 4096).  Avoiding the 'BB vectorization with gaps at the end of a load is not supported' issue by altering it to do

  void foo(void)
  {
    b[0] = a[0][0];
    b[1] = a[1][0];
    b[2] = a[2][0];
    b[3] = a[3][127];
  }

shows that costing has improved further to not account the dead loads making the previous test inefficient.  In fact the underlying issue isn't fixed (we do code-generate dead loads).  In fact the vector permute load is even profitable, just the excessive code-generation issue exists (and is "fixed" by capping it at a constant boundary, just too high for this particular testcase).

The testcase now has "dups", so I'll simply remove it.

2021-01-15  Richard Biener  <rguenther@suse.de>

PR testsuite/96098
        * gcc.dg/vect/bb-slp-pr68892.c: Remove.
-
Jakub Jelinek authored
The recent changes to error on mixing -march=i386 and -fcf-protection broke bootstrap.  This patch changes lib{atomic,gomp,itm} configury, so that it only adds -march=i486 to flags if really needed (i.e. when 486 or later isn't on by default already).  Similarly, it will not use ifuncs if -mcx16 (or -march=i686 for 32-bit) is on by default.

2021-01-15  Jakub Jelinek  <jakub@redhat.com>

PR target/70454
libatomic/
        * configure.tgt: For i?86 and x86_64 determine if -march=i486
        needs to be added through preprocessor check on
        __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4.  Determine if try_ifunc is
        needed based on preprocessor check on
        __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16 or
        __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8.
libgomp/
        * configure.tgt: For i?86 and x86_64 determine if -march=i486
        needs to be added through preprocessor check on
        __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4.
libitm/
        * configure.tgt: For i?86 and x86_64 determine if -march=i486
        needs to be added through preprocessor check on
        __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4.
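Roughly, the configure fragments now rely on a preprocessor probe along these lines (a hedged sketch of the idea, not the actual configure.tgt shell code): the snippet is fed to the compiler with the user's flags, and only if the macro is missing does the fragment append -march=i486.

  /* If the default -march already provides a 4-byte atomic
     compare-and-swap, GCC predefines this macro and nothing
     needs to be added to the flags.  */
  #ifndef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
  #error "need to append -march=i486"
  #endif
  int have_cas4_by_default;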
-
Christophe Lyon authored
This patch enables MVE vshr instructions for auto-vectorization.

New MVE patterns are introduced that take a vector of constants as second operand, all constants being equal.

The existing mve_vshrq_n_<supf><mode> is kept, as it takes a single immediate as second operand, and is used by arm_mve.h.

The vashr<mode>3 and vlshr<mode>3 expanders are moved from neon.md to vec-common.md, updated to rely on the normal expansion scheme to generate shifts by immediate.

2020-12-03  Christophe Lyon  <christophe.lyon@linaro.org>

gcc/
        * config/arm/mve.md (mve_vshrq_n_s<mode>_imm): New entry.
        (mve_vshrq_n_u<mode>_imm): Likewise.
        * config/arm/neon.md (vashr<mode>3, vlshr<mode>3): Move to ...
        * config/arm/vec-common.md: ... here.

gcc/testsuite/
        * gcc.target/arm/simd/mve-vshr.c: Add tests for vshr.
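The shape of loop this lets the vectorizer handle with MVE looks roughly like this (a hedged sketch, not the committed mve-vshr.c testcase): every element is shifted by the same immediate, so the new patterns that take a vector of identical shift counts apply.

  #include <stdint.h>

  void
  shift_right (int32_t *dest, const int32_t *a, int n)
  {
    for (int i = 0; i < n; i++)
      dest[i] = a[i] >> 3;
  }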
-
Christophe Lyon authored
This patch enables MVE vshlq instructions for auto-vectorization.

The existing mve_vshlq_n_<supf><mode> is kept, as it takes a single immediate as second operand, and is used by arm_mve.h.

We move the vashl<mode>3 insn from neon.md to an expander in vec-common.md, and the mve_vshlq_<supf><mode> insn from mve.md to vec-common.md, adding the second alternative from neon.md.

mve_vshlq_<supf><mode> will be used by a later patch enabling vectorization for vshr, as a unified version of ashl<mode>3_[signed|unsigned] from neon.md.  Keeping the use of unspec VSHLQ enables generating both 's' and 'u' variants.

It is not clear whether the neon_shift_[reg|imm]<q> attribute is still suitable, since this insn is also used for MVE.

I kept the mve_vshlq_<supf><mode> naming instead of renaming it to ashl3_<supf>_<mode> as discussed because the reference in arm_mve_builtins.def automatically inserts the "mve_" prefix and I didn't want to make a special case for this.

I haven't yet found why the v16qi and v8hi tests are not vectorized.  With

  dest[i] = a[i] << b[i];

and:

  {
    int i;
    unsigned int i.24_1;
    unsigned int _2;
    int16_t * _3;
    short int _4;
    int _5;
    int16_t * _6;
    short int _7;
    int _8;
    int _9;
    int16_t * _10;
    short int _11;
    unsigned int ivtmp_42;
    unsigned int ivtmp_43;

    <bb 2> [local count: 119292720]:

    <bb 3> [local count: 954449105]:
    i.24_1 = (unsigned int) i_23;
    _2 = i.24_1 * 2;
    _3 = a_15(D) + _2;
    _4 = *_3;
    _5 = (int) _4;
    _6 = b_16(D) + _2;
    _7 = *_6;
    _8 = (int) _7;
    _9 = _5 << _8;
    _10 = dest_17(D) + _2;
    _11 = (short int) _9;
    *_10 = _11;
    i_19 = i_23 + 1;
    ivtmp_42 = ivtmp_43 - 1;
    if (ivtmp_42 != 0)
      goto <bb 5>; [87.50%]
    else
      goto <bb 4>; [12.50%]

    <bb 5> [local count: 835156386]:
    goto <bb 3>; [100.00%]

    <bb 4> [local count: 119292720]:
    return;
  }

the vectorizer says:

  mve-vshl.c:37:96: note: ==> examining statement: _5 = (int) _4;
  mve-vshl.c:37:96: note: vect_is_simple_use: operand *_3, type of def: internal
  mve-vshl.c:37:96: note: vect_is_simple_use: vectype vector(8) short int
  mve-vshl.c:37:96: missed: conversion not supported by target.
  mve-vshl.c:37:96: note: vect_is_simple_use: operand *_3, type of def: internal
  mve-vshl.c:37:96: note: vect_is_simple_use: vectype vector(8) short int
  mve-vshl.c:37:96: note: vect_is_simple_use: operand *_3, type of def: internal
  mve-vshl.c:37:96: note: vect_is_simple_use: vectype vector(8) short int
  mve-vshl.c:37:117: missed: not vectorized: relevant stmt not supported: _5 = (int) _4;
  mve-vshl.c:37:96: missed: bad operation or unsupported loop bound.
  mve-vshl.c:37:96: note: ***** Analysis failed with vector mode V8HI

2020-12-03  Christophe Lyon  <christophe.lyon@linaro.org>

gcc/
        * config/arm/mve.md (mve_vshlq_<supf><mode>): Move to
        vec-common.md.
        * config/arm/neon.md (vashl<mode>3): Delete.
        * config/arm/vec-common.md (mve_vshlq_<supf><mode>): New.
        (vashl<mode>3): New expander.

gcc/testsuite/
        * gcc.target/arm/simd/mve-vshl.c: Add tests for vshl.
-
Richard Biener authored
Avoid advancing to the next stmt when inserting at region boundary and deal with a vector def being not the only child. 2021-01-15 Richard Biener <rguenther@suse.de> PR tree-optimization/98685 * tree-vect-slp.c (vect_schedule_slp_node): Refactor handling of vector extern defs. * gcc.dg/vect/bb-slp-pr98685.c: New testcase.
-