Message ID | 002f01d7c715$1cc96400$565c2c00$@nextmovesoftware.com |
---|---|
State | New |
Headers | From: "Roger Sayle" <roger@nextmovesoftware.com>; To: "'GCC Patches'" <gcc-patches@gcc.gnu.org>; Subject: [PATCH] x86_64: Add insn patterns for V1TI mode logic operations.; Date: Fri, 22 Oct 2021 08:19:10 +0100; Message-ID: <002f01d7c715$1cc96400$565c2c00$@nextmovesoftware.com> |
Series | x86_64: Add insn patterns for V1TI mode logic operations. |
Commit Message
Roger Sayle
Oct. 22, 2021, 7:19 a.m. UTC
On x86_64, V1TI mode holds a 128-bit integer value in a (vector) SSE
register (where regular TI mode uses a pair of 64-bit general purpose
scalar registers).  This patch improves the implementation of AND, IOR,
XOR and NOT on these values.

The benefit is demonstrated by the following simple test program:

    typedef unsigned __int128 v1ti __attribute__ ((__vector_size__ (16)));
    v1ti and(v1ti x, v1ti y) { return x & y; }
    v1ti ior(v1ti x, v1ti y) { return x | y; }
    v1ti xor(v1ti x, v1ti y) { return x ^ y; }
    v1ti not(v1ti x) { return ~x; }

For which GCC currently generates the rather large:

    and:    movdqa  %xmm0, %xmm2
            movq    %xmm1, %rdx
            movq    %xmm0, %rax
            andq    %rdx, %rax
            movhlps %xmm2, %xmm3
            movhlps %xmm1, %xmm4
            movq    %rax, %xmm0
            movq    %xmm4, %rdx
            movq    %xmm3, %rax
            andq    %rdx, %rax
            movq    %rax, %xmm5
            punpcklqdq %xmm5, %xmm0
            ret

    ior:    movdqa  %xmm0, %xmm2
            movq    %xmm1, %rdx
            movq    %xmm0, %rax
            orq     %rdx, %rax
            movhlps %xmm2, %xmm3
            movhlps %xmm1, %xmm4
            movq    %rax, %xmm0
            movq    %xmm4, %rdx
            movq    %xmm3, %rax
            orq     %rdx, %rax
            movq    %rax, %xmm5
            punpcklqdq %xmm5, %xmm0
            ret

    xor:    movdqa  %xmm0, %xmm2
            movq    %xmm1, %rdx
            movq    %xmm0, %rax
            xorq    %rdx, %rax
            movhlps %xmm2, %xmm3
            movhlps %xmm1, %xmm4
            movq    %rax, %xmm0
            movq    %xmm4, %rdx
            movq    %xmm3, %rax
            xorq    %rdx, %rax
            movq    %rax, %xmm5
            punpcklqdq %xmm5, %xmm0
            ret

    not:    movdqa  %xmm0, %xmm1
            movq    %xmm0, %rax
            notq    %rax
            movhlps %xmm1, %xmm2
            movq    %rax, %xmm0
            movq    %xmm2, %rax
            notq    %rax
            movq    %rax, %xmm3
            punpcklqdq %xmm3, %xmm0
            ret

With this patch we now generate the much more efficient:

    and:    pand    %xmm1, %xmm0
            ret

    ior:    por     %xmm1, %xmm0
            ret

    xor:    pxor    %xmm1, %xmm0
            ret

    not:    pcmpeqd %xmm1, %xmm1
            pxor    %xmm1, %xmm0
            ret

For my first few attempts at this patch I tried adding V1TI to the
existing VI and VI12_AVX_512F mode iterators, but these then have
dependencies on other iterators (and attributes), and so on until
everything ties itself into a knot, as V1TI mode isn't really a
first-class vector mode on x86_64.  Hence I ultimately opted to use
simple stand-alone patterns (as used by the existing TF mode support).

This patch has been tested on x86_64-pc-linux-gnu with "make bootstrap"
and "make -k check" with no new failures.  Ok for mainline?

2021-10-22  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* config/i386/sse.md (<any_logic>v1ti3): New define_insn to
	implement V1TImode AND, IOR and XOR on TARGET_SSE2 (and above).
	(one_cmplv1ti2): New define_expand.

gcc/testsuite/ChangeLog
	* gcc.target/i386/sse2-v1ti-logic.c: New test case.
	* gcc.target/i386/sse2-v1ti-logic-2.c: New test case.

Thanks in advance,
Roger
--

gcc/testsuite/gcc.target/i386/sse2-v1ti-logic.c:

    /* { dg-do compile { target int128 } } */
    /* { dg-options "-O2 -msse2" } */
    /* { dg-require-effective-target sse2 } */

    typedef unsigned __int128 v1ti __attribute__ ((__vector_size__ (16)));

    v1ti and(v1ti x, v1ti y) { return x & y; }
    v1ti ior(v1ti x, v1ti y) { return x | y; }
    v1ti xor(v1ti x, v1ti y) { return x ^ y; }
    v1ti not(v1ti x) { return ~x; }

    /* { dg-final { scan-assembler "pand" } } */
    /* { dg-final { scan-assembler "por" } } */
    /* { dg-final { scan-assembler-times "pxor" 2 } } */

gcc/testsuite/gcc.target/i386/sse2-v1ti-logic-2.c:

    /* { dg-do compile { target int128 } } */
    /* { dg-options "-O2 -msse2" } */
    /* { dg-require-effective-target sse2 } */

    typedef unsigned __int128 v1ti __attribute__ ((__vector_size__ (16)));

    v1ti x;
    v1ti y;
    v1ti z;

    void and2() { x &= y; }
    void and3() { x = y & z; }
    void ior2() { x |= y; }
    void ior3() { x = y | z; }
    void xor2() { x ^= y; }
    void xor3() { x = y ^ z; }
    void not1() { x = ~x; }
    void not2() { x = ~y; }

    /* { dg-final { scan-assembler-times "pand" 2 } } */
    /* { dg-final { scan-assembler-times "por" 2 } } */
    /* { dg-final { scan-assembler-times "pxor" 4 } } */
Comments
On Fri, Oct 22, 2021 at 9:19 AM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
> On x86_64, V1TI mode holds a 128-bit integer value in a (vector) SSE
> register (where regular TI mode uses a pair of 64-bit general purpose
> scalar registers).  This patch improves the implementation of AND, IOR,
> XOR and NOT on these values.
>
> [...]
>
> gcc/testsuite/ChangeLog
> * gcc.target/i386/sse2-v1ti-logic.c: New test case.
> * gcc.target/i386/sse2-v1ti-logic-2.c: New test case.

There is no need for

    /* { dg-require-effective-target sse2 } */

for compile tests.  The compilation does not reach the assembler.

OK with the above change.

BTW: You can add testcases to the main patch with "git add <filename>"
and then create the patch with "git diff HEAD".

Thanks,
Uros.
diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index fbf056b..f37c5c0 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -16268,6 +16268,31 @@
 	     ]
 	     (const_string "<sseinsnmode>")))])
 
+(define_insn "<code>v1ti3"
+  [(set (match_operand:V1TI 0 "register_operand" "=x,x,v")
+	(any_logic:V1TI
+	  (match_operand:V1TI 1 "register_operand" "%0,x,v")
+	  (match_operand:V1TI 2 "vector_operand" "xBm,xm,vm")))]
+  "TARGET_SSE2"
+  "@
+   p<logic>\t{%2, %0|%0, %2}
+   vp<logic>\t{%2, %1, %0|%0, %1, %2}
+   vp<logic>\t{%2, %1, %0|%0, %1, %2}"
+  [(set_attr "isa" "noavx,avx,avx")
+   (set_attr "prefix" "orig,vex,evex")
+   (set_attr "prefix_data16" "1,*,*")
+   (set_attr "type" "sselog")
+   (set_attr "mode" "TI")])
+
+(define_expand "one_cmplv1ti2"
+  [(set (match_operand:V1TI 0 "register_operand")
+	(xor:V1TI (match_operand:V1TI 1 "register_operand")
+		  (match_dup 2)))]
+  "TARGET_SSE2"
+{
+  operands[2] = force_reg (V1TImode, CONSTM1_RTX (V1TImode));
+})
+
 (define_mode_iterator AVX512ZEXTMASK
   [(DI "TARGET_AVX512BW") (SI "TARGET_AVX512BW") HI])