[x86] Peephole pand;pxor into pandn.

Message ID 020d01d84e8c$7fa90180$7efb0480$@nextmovesoftware.com

Commit Message

Roger Sayle April 12, 2022, 4:43 p.m. UTC
As a side effect, a patch I have for PR 70321 causes
gcc.target/i386/pr65105-5.c to FAIL by generating the code:

        vmovq   (%eax), %xmm1
        vpand   %xmm1, %xmm2, %xmm0
        vpxor   %xmm0, %xmm2, %xmm0
        vpunpcklqdq     %xmm0, %xmm0, %xmm0
        vptest  %xmm0, %xmm0

instead of the pandn sequence that the test is expecting:

        vmovq   (%eax), %xmm1
        vpandn  %xmm2, %xmm1, %xmm0
        vpunpcklqdq     %xmm0, %xmm0, %xmm0
        vptest  %xmm0, %xmm0

This patch prevents the above FAIL by providing a peephole2 that
converts a suitable pand followed by pxor, i.e. (X & Y) ^ X, into the
equivalent pandn, i.e. X & ~Y.  For GCC 13, the above sequence can
actually be implemented in just two instructions (neither the pandn
nor the punpcklqdq is necessary if vptest %xmm1, %xmm2 is used), but
for now this patch preserves the sequence that the test case expects.

This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32},
with no new failures.  Alas there's no new test case as this optimization
is normally caught by (or before) combine, and therefore tricky to trigger.
Ok for mainline?


2022-04-12  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* config/i386/sse.md (peephole2): Convert suitable pand followed
	by pxor into pandn, i.e. (X&Y)^X into X & ~Y.


Thanks in advance,
Roger
--
  

Patch

diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index a852c16..f7ef81a 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -16887,6 +16887,44 @@ 
 			(match_dup 2)))]
   "operands[3] = gen_reg_rtx (<MODE>mode);")
 
+;; Combine pand;pxor into pandn.  (X&Y)^X -> X & ~Y.
+(define_peephole2
+  [(set (match_operand:VMOVE 0 "register_operand")
+	(and:VMOVE (match_operand:VMOVE 1 "register_operand")
+		   (match_operand:VMOVE 2 "register_operand")))
+   (set (match_operand:VMOVE 3 "register_operand")
+	(xor:VMOVE (match_operand:VMOVE 4 "register_operand")
+		   (match_operand:VMOVE 5 "register_operand")))]
+  "TARGET_SSE
+   && REGNO (operands[1]) != REGNO (operands[2])
+   && REGNO (operands[4]) != REGNO (operands[5])
+   && (REGNO (operands[0]) == REGNO (operands[3])
+       || peep2_reg_dead_p (2, operands[0]))"
+  [(set (match_dup 3)
+	(and:VMOVE (not:VMOVE (match_dup 6)) (match_dup 7)))]
+{
+  if (REGNO (operands[0]) != REGNO (operands[1])
+      && ((REGNO (operands[4]) == REGNO (operands[0])
+	   && REGNO (operands[5]) == REGNO (operands[1]))
+	  || (REGNO (operands[4]) == REGNO (operands[1])
+	      && REGNO (operands[5]) == REGNO (operands[0]))))
+    {
+      operands[6] = operands[2];
+      operands[7] = operands[1];
+    }
+  else if (REGNO (operands[0]) != REGNO (operands[2])
+	   && ((REGNO (operands[4]) == REGNO (operands[0])
+		&& REGNO (operands[5]) == REGNO (operands[2]))
+	       || (REGNO (operands[4]) == REGNO (operands[2])
+		   && REGNO (operands[5]) == REGNO (operands[0]))))
+    {
+      operands[6] = operands[1];
+      operands[7] = operands[2];
+    }
+  else
+    FAIL;
+})
+
 (define_insn "*andnot<mode>3_mask"
   [(set (match_operand:VI48_AVX512VL 0 "register_operand" "=v")
 	(vec_merge:VI48_AVX512VL
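
For illustration only (this is not from the testsuite), a hypothetical
intrinsics example of the shape the new peephole2 targets; both
functions below should compile (e.g. with -O2 -msse2) to a single
pandn, the first via the (X & Y) ^ X rewrite and the second directly
(_mm_andnot_si128 (a, b) computes ~a & b):

  #include <emmintrin.h>

  /* (X & Y) ^ X, the pand;pxor shape rewritten by the peephole2.  */
  __m128i
  and_xor (__m128i x, __m128i y)
  {
    return _mm_xor_si128 (_mm_and_si128 (x, y), x);
  }

  /* X & ~Y, i.e. a single pandn.  */
  __m128i
  and_not (__m128i x, __m128i y)
  {
    return _mm_andnot_si128 (y, x);
  }

As noted above, combine normally performs this simplification first,
so the peephole only fires in the rarer cases that reach peephole2
unsimplified.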