[pushed] aarch64: Extend PR100056 patterns to +
pr100056.c contains things like:
int
or_shift_u3a (unsigned i)
{
  i &= 7;
  return i | (i << 11);
}
After g:96146e61cd7aee62c21c2845916ec42152918ab7, the preferred
gimple representation of this is a multiplication:
  i_2 = i_1(D) & 7;
  _5 = i_2 * 2049;
Expand then open-codes the multiplication back to individual shifts,
but (of course) it uses + rather than | to combine the shifts.
This means that we end up with the RTL equivalent of:
i + (i << 11)
I wondered about canonicalising the + to | (*back* to | in this case)
when the operands have no set bits in common and when one of the
operands is &, | or ^, but that didn't seem to be a popular idea when
I asked on IRC. The feeling seemed to be that + is inherently simpler
than |, so we shouldn't be “simplifying” the other way.
This patch therefore adjusts the PR100056 patterns to handle +
as well as |, in cases where the operands are provably disjoint.
For:
int
or_shift_u8 (unsigned char i)
{
  return i | (i << 11);
}
the instructions:
   2: r95:SI=zero_extend(x0:QI)
      REG_DEAD x0:QI
   7: r98:SI=r95:SI<<0xb
are combined into:
(parallel [
    (set (reg:SI 98)
	(and:SI (ashift:SI (reg:SI 0 x0 [ i ])
		    (const_int 11 [0xb]))
	    (const_int 522240 [0x7f800])))
    (set (reg/v:SI 95 [ i ])
	(zero_extend:SI (reg:QI 0 x0 [ i ])))
  ])
which fails to match, but which is then split into its individual
(independent) sets. Later the zero_extend is combined with the add
to get an ADD UXTB:
(set (reg:SI 99)
     (plus:SI (zero_extend:SI (reg:QI 0 x0 [ i ]))
	      (reg:SI 98)))
This means that there is never a 3-insn combo to match the split
against. The end result is therefore:
	ubfiz	w1, w0, 11, 8
	add	w0, w1, w0, uxtb
This is a bit redundant, since it's doing the zero_extend twice.
It is at least 2 instructions though, rather than the 3 that we
had before the original patch for PR100056. or_shift_u8_asm is
affected similarly.
The net effect is that we do still have 2 UBFIZs, but we're at
least back down to 2 instructions per function, as for GCC 11.
I think that's good enough for now.
There are probably other instructions that should be extended
to support + as well as | (e.g. the EXTR ones), but those aren't
regressions and so are GCC 13 material.
Tested on aarch64-linux-gnu & pushed.
Richard
gcc/
	PR target/100056
	* config/aarch64/iterators.md (LOGICAL_OR_PLUS): New iterator.
	* config/aarch64/aarch64.md: Extend the PR100056 patterns
	to handle plus in the same way as ior, if the operands have
	no set bits in common.

gcc/testsuite/
	PR target/100056
	* gcc.target/aarch64/pr100056.c: XFAIL the original UBFIZ test
	and instead expect two UBFIZs + two ADD UXTBs.
---
gcc/config/aarch64/aarch64.md | 33 ++++++++++++++-------
gcc/config/aarch64/iterators.md | 3 ++
gcc/testsuite/gcc.target/aarch64/pr100056.c | 4 ++-
3 files changed, 29 insertions(+), 11 deletions(-)
@@ -4558,7 +4558,7 @@ (define_insn "*<LOGICAL:optab>_<SHIFT:optab><mode>3"
 
 (define_split
   [(set (match_operand:GPI 0 "register_operand")
-	(LOGICAL:GPI
+	(LOGICAL_OR_PLUS:GPI
 	  (and:GPI (ashift:GPI (match_operand:GPI 1 "register_operand")
 			       (match_operand:QI 2 "aarch64_shift_imm_<mode>"))
 		   (match_operand:GPI 3 "const_int_operand"))
@@ -4571,16 +4571,23 @@ (define_split
 	   && REGNO (operands[1]) == REGNO (operands[4])))
    && (trunc_int_for_mode (GET_MODE_MASK (GET_MODE (operands[4]))
 			   << INTVAL (operands[2]), <MODE>mode)
-       == INTVAL (operands[3]))"
+       == INTVAL (operands[3]))
+   && (<CODE> != PLUS
+       || (GET_MODE_MASK (GET_MODE (operands[4]))
+	   & INTVAL (operands[3])) == 0)"
   [(set (match_dup 5) (zero_extend:GPI (match_dup 4)))
-   (set (match_dup 0) (LOGICAL:GPI (ashift:GPI (match_dup 5) (match_dup 2))
-				   (match_dup 5)))]
-  "operands[5] = gen_reg_rtx (<MODE>mode);"
+   (set (match_dup 0) (match_dup 6))]
+  {
+    operands[5] = gen_reg_rtx (<MODE>mode);
+    rtx shift = gen_rtx_ASHIFT (<MODE>mode, operands[5], operands[2]);
+    rtx_code new_code = (<CODE> == PLUS ? IOR : <CODE>);
+    operands[6] = gen_rtx_fmt_ee (new_code, <MODE>mode, shift, operands[5]);
+  }
 )
 
 (define_split
   [(set (match_operand:GPI 0 "register_operand")
-	(LOGICAL:GPI
+	(LOGICAL_OR_PLUS:GPI
 	  (and:GPI (ashift:GPI (match_operand:GPI 1 "register_operand")
 			       (match_operand:QI 2 "aarch64_shift_imm_<mode>"))
 		   (match_operand:GPI 4 "const_int_operand"))
(define_split
[(set (match_operand:GPI 0 "register_operand")
- (LOGICAL:GPI
+ (LOGICAL_OR_PLUS:GPI
(and:GPI (ashift:GPI (match_operand:GPI 1 "register_operand")
(match_operand:QI 2 "aarch64_shift_imm_<mode>"))
(match_operand:GPI 4 "const_int_operand"))
@@ -4589,11 +4596,17 @@ (define_split
    && pow2_or_zerop (UINTVAL (operands[3]) + 1)
    && (trunc_int_for_mode (UINTVAL (operands[3])
 			   << INTVAL (operands[2]), <MODE>mode)
-       == INTVAL (operands[4]))"
+       == INTVAL (operands[4]))
+   && (<CODE> != PLUS
+       || (INTVAL (operands[4]) & INTVAL (operands[3])) == 0)"
   [(set (match_dup 5) (and:GPI (match_dup 1) (match_dup 3)))
-   (set (match_dup 0) (LOGICAL:GPI (ashift:GPI (match_dup 5) (match_dup 2))
-				   (match_dup 5)))]
-  "operands[5] = gen_reg_rtx (<MODE>mode);"
+   (set (match_dup 0) (match_dup 6))]
+  {
+    operands[5] = gen_reg_rtx (<MODE>mode);
+    rtx shift = gen_rtx_ASHIFT (<MODE>mode, operands[5], operands[2]);
+    rtx_code new_code = (<CODE> == PLUS ? IOR : <CODE>);
+    operands[6] = gen_rtx_fmt_ee (new_code, <MODE>mode, shift, operands[5]);
+  }
 )
 
 (define_split
@@ -2122,6 +2122,9 @@ (define_code_iterator SHIFTRT [ashiftrt lshiftrt])
 ;; Code iterator for logical operations
 (define_code_iterator LOGICAL [and ior xor])
 
+;; LOGICAL with plus, for when | gets converted to +.
+(define_code_iterator LOGICAL_OR_PLUS [and ior xor plus])
+
 ;; LOGICAL without AND.
 (define_code_iterator LOGICAL_OR [ior xor])
@@ -1,7 +1,9 @@
 /* PR target/100056 */
 /* { dg-do compile } */
 /* { dg-options "-O2" } */
-/* { dg-final { scan-assembler-not {\t[us]bfiz\tw[0-9]+, w[0-9]+, 11} } } */
+/* { dg-final { scan-assembler-not {\t[us]bfiz\tw[0-9]+, w[0-9]+, 11} { xfail *-*-* } } } */
+/* { dg-final { scan-assembler-times {\t[us]bfiz\tw[0-9]+, w[0-9]+, 11} 2 } } */
+/* { dg-final { scan-assembler-times {\tadd\tw[0-9]+, w[0-9]+, w[0-9]+, uxtb\n} 2 } } */
 
 int
 or_shift_u8 (unsigned char i)