From patchwork Wed Sep 29 15:25:12 2021
X-Patchwork-Submitter: Roger Sayle
X-Patchwork-Id: 45557
From: "Roger Sayle"
To: "'Richard Sandiford'"
Cc: 'GCC Patches'
Subject: [PATCH #2] Introduce smul_highpart and umul_highpart RTX for high-part multiplications
Date: Wed, 29 Sep 2021 16:25:12 +0100

Hi Richard,

All excellent suggestions.
The revised patch below implements all of your (and Andreas')
recommendations.

I'm happy to restrict GCC's support for saturating arithmetic to integer
types, even though I do know of one target (nvptx) that supports saturating
floating-point math, where results are clamped to [0.0, 1.0], but I've not
investigated how NaNs or signed zeros are handled.

Good catch on my min/max typo.  It convinced me to work harder to come up
with some test cases for these simplifications, which I've managed to
trigger on x86_64-pc-linux-gnu in the four new attached test cases.

This patch has been tested on x86_64-pc-linux-gnu with "make bootstrap"
and "make -k check" with no new failures.  Ok for mainline?


2021-09-29  Roger Sayle
            Richard Sandiford

gcc/ChangeLog
        * gcc/rtl.def (SMUL_HIGHPART, UMUL_HIGHPART): New RTX codes for
        representing signed and unsigned high-part multiplication
        respectively.
        * gcc/simplify-rtx.c (simplify_binary_operation_1) [SMUL_HIGHPART,
        UMUL_HIGHPART]: Simplify high-part multiplications by zero.
        [SS_PLUS, US_PLUS, SS_MINUS, US_MINUS, SS_MULT, US_MULT,
        SS_DIV, US_DIV]: Similar simplifications for saturating
        arithmetic.
        (simplify_const_binary_operation) [SS_PLUS, US_PLUS, SS_MINUS,
        US_MINUS, SS_MULT, US_MULT, SMUL_HIGHPART, UMUL_HIGHPART]:
        Implement compile-time evaluation for constant operands.
        * gcc/dwarf2out.c (mem_loc_descriptor): Skip SMUL_HIGHPART and
        UMUL_HIGHPART.
        * doc/rtl.texi (smul_highpart, umul_highpart): Document RTX codes.
        * doc/md.texi (smul@var{m}3_highpart, umul@var{m}3_highpart):
        Mention the new smul_highpart and umul_highpart RTX codes.
        * doc/invoke.texi: Silence @xref "compilation" warnings.

gcc/testsuite/ChangeLog
        * gcc.target/i386/sse2-mmx-paddsb-2.c: New test case.
        * gcc.target/i386/sse2-mmx-paddusb-2.c: New test case.
        * gcc.target/i386/sse2-mmx-subsb-2.c: New test case.
        * gcc.target/i386/sse2-mmx-subusb-2.c: New test case.


Roger
--
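For reference, the operation these new RTX codes represent is easy to
state in C: form the double-width product and keep its upper half.  The
following is an editorial sketch of the intended semantics for 32-bit
operands; the helper names are illustrative only and not part of the
patch.

#include <stdint.h>

/* Illustrative helpers, not part of the patch: reference semantics of
   smul_highpart/umul_highpart for 32-bit operands, computed via the
   full 64-bit product.  */
static inline int32_t
smul_highpart32 (int32_t x, int32_t y)
{
  return (int32_t) (((int64_t) x * (int64_t) y) >> 32);
}

static inline uint32_t
umul_highpart32 (uint32_t x, uint32_t y)
{
  return (uint32_t) (((uint64_t) x * (uint64_t) y) >> 32);
}

The new codes express this directly in the operands' own mode, avoiding
the double-width intermediate that the truncate/lshiftrt/mult idiom
discussed below requires.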
-----Original Message-----
From: Richard Sandiford
Sent: 27 September 2021 16:44
To: Roger Sayle
Cc: 'GCC Patches'
Subject: Re: [PATCH] Introduce sh_mul and uh_mul RTX codes for high-part multiplications

"Roger Sayle" writes:
> This patch introduces new RTX codes to allow the RTL passes and
> backends to consistently represent high-part multiplications.
> Currently, the RTL used by different backends for expanding
> smul<mode>3_highpart and umul<mode>3_highpart varies greatly, with
> many but not all choosing to express this something like:
>
> (define_insn "smuldi3_highpart"
>   [(set (match_operand:DI 0 "nvptx_register_operand" "=R")
>         (truncate:DI
>          (lshiftrt:TI
>           (mult:TI (sign_extend:TI
>                     (match_operand:DI 1 "nvptx_register_operand" "R"))
>                    (sign_extend:TI
>                     (match_operand:DI 2 "nvptx_register_operand" "R")))
>           (const_int 64))))]
>   ""
>   "%.\\tmul.hi.s64\\t%0, %1, %2;")
>
> One complication with using this "widening multiplication"
> representation is that it requires an intermediate in a wider mode,
> making it difficult or impossible to encode a high-part multiplication
> of the widest supported integer mode.

Yeah.  It's also a problem when representing vector ops.

> A second is that it can interfere with optimization; for example
> simplify-rtx.c contains the comment:
>
>     case TRUNCATE:
>       /* Don't optimize (lshiftrt (mult ...)) as it would interfere
>          with the umulXi3_highpart patterns.  */
>
> Hopefully these problems are solved (or reduced) by introducing a new
> canonical form for high-part multiplications in RTL passes.
> This also simplifies insn patterns when one operand is constant.
>
> Whilst implementing some constant folding simplifications and
> compile-time evaluation of these new RTX codes, I noticed that this
> functionality could also be added for the existing saturating
> arithmetic RTX codes.  Then likewise when documenting these new RTX
> codes, I also took the opportunity to silence the @xref warnings in
> invoke.texi.
>
> This patch has been tested on x86_64-pc-linux-gnu with "make bootstrap"
> and "make -k check" with no new failures.  Ok for mainline?
>
>
> 2021-09-25  Roger Sayle
>
> gcc/ChangeLog
>         * gcc/rtl.def (SH_MULT, UH_MULT): New RTX codes for representing
>         signed and unsigned high-part multiplication respectively.
>         * gcc/simplify-rtx.c (simplify_binary_operation_1) [SH_MULT,
>         UH_MULT]: Simplify high-part multiplications by zero.
>         [SS_PLUS, US_PLUS, SS_MINUS, US_MINUS, SS_MULT, US_MULT,
>         SS_DIV, US_DIV]: Similar simplifications for saturating
>         arithmetic.
>         (simplify_const_binary_operation) [SS_PLUS, US_PLUS, SS_MINUS,
>         US_MINUS, SS_MULT, US_MULT, SH_MULT, UH_MULT]: Implement
>         compile-time evaluation for constant operands.
>         * gcc/dwarf2out.c (mem_loc_descriptor): Skip SH_MULT and UH_MULT.
>         * doc/rtl.texi (sh_mult, uh_mult): Document new RTX codes.
>         * doc/md.texi (smul@var{m}3_highpart, umul@var{m}3_highpart):
>         Mention the new sh_mul and uh_mul RTX codes.
>         * doc/invoke.texi: Silence @xref "compilation" warnings.

Looks like a good idea to me.  Only real comment is on the naming: if
possible, I think we should try to avoid introducing yet more
differences between optab names and rtl codes.  How about umul_highpart
for the unsigned code, to match both the optab and the existing
convention of adding "u" directly to the front of non-saturating
operations?

Things are more inconsistent for signed rtx codes: sometimes the "s" is
present and sometimes it isn't.  But since "smin" and "smax" have it,
I think we can justify having it here too.

So I think we should use smul_highpart and umul_highpart.  It's a bit
more wordy than sh_mul, but still a lot shorter than the status quo ;-)

> diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
> index ebad5cb..b4b04b9 100644
> --- a/gcc/simplify-rtx.c
> +++ b/gcc/simplify-rtx.c
> @@ -4142,11 +4142,40 @@ simplify_context::simplify_binary_operation_1 (rtx_code code,
>      case US_PLUS:
>      case SS_MINUS:
>      case US_MINUS:
> +      /* Simplify x + 0 to x, if possible.  */

Nit: +/-

> +      if (trueop1 == CONST0_RTX (mode) && !HONOR_SIGNED_ZEROS (mode))

The HONOR_SIGNED_ZEROS check is redundant, since these ops don't support
modes with signed zero.  Same for the other HONOR_* macros in the patch.
E.g. I don't think we should try to guess how infinities and saturation
work together.

> +        return op0;
> +      return 0;
> +
>      case SS_MULT:
>      case US_MULT:
> +      /* Simplify x * 0 to 0, if possible.  */
> +      if (trueop1 == CONST0_RTX (mode)
> +          && !HONOR_NANS (mode)
> +          && !HONOR_SIGNED_ZEROS (mode)
> +          && !side_effects_p (op0))
> +        return op1;
> +
> +      /* Simplify x * 1 to x, if possible.  */
> +      if (trueop1 == CONST1_RTX (mode) && !HONOR_SNANS (mode))
> +        return op0;
> +      return 0;
> +
> +    case SH_MULT:
> +    case UH_MULT:
> +      /* Simplify x * 0 to 0, if possible.  */
> +      if (trueop1 == CONST0_RTX (mode)
> +          && !HONOR_NANS (mode)
> +          && !HONOR_SIGNED_ZEROS (mode)
> +          && !side_effects_p (op0))
> +        return op1;
> +      return 0;
> +
>      case SS_DIV:
>      case US_DIV:
> -      /* ??? There are simplifications that can be done.  */
> +      /* Simplify x / 1 to x, if possible.  */
> +      if (trueop1 == CONST1_RTX (mode) && !HONOR_SNANS (mode))
> +        return op0;
>        return 0;
>
>      case VEC_SERIES:
> @@ -5011,6 +5040,63 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
>            }
>            break;
>          }
> +
> +    case SS_PLUS:
> +      result = wi::add (pop0, pop1, SIGNED, &overflow);

I think a goto label would be good here, so that later signed ops can
reuse this code instead of having to repeat it.  Same idea for the
unsigned case.

> +      if (overflow == wi::OVF_OVERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow == wi::OVF_UNDERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);

Should be min_value.  Same for the other underflow handlers.

Like Andreas said, @pxref would be better where applicable.

Thanks,
Richard

> +      else if (overflow != wi::OVF_NONE)
> +        return NULL_RTX;
> +      break;
> +
> +    case US_PLUS:
> +      result = wi::add (pop0, pop1, UNSIGNED, &overflow);
> +      if (overflow != wi::OVF_NONE)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +      break;
> +
> +    case SS_MINUS:
> +      result = wi::sub (pop0, pop1, SIGNED, &overflow);
> +      if (overflow == wi::OVF_OVERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow == wi::OVF_UNDERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow != wi::OVF_NONE)
> +        return NULL_RTX;
> +      break;
> +
> +    case US_MINUS:
> +      result = wi::sub (pop0, pop1, UNSIGNED, &overflow);
> +      if (overflow != wi::OVF_NONE)
> +        result = wi::min_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +      break;
> +
> +    case SS_MULT:
> +      result = wi::mul (pop0, pop1, SIGNED, &overflow);
> +      if (overflow == wi::OVF_OVERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow == wi::OVF_UNDERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow != wi::OVF_NONE)
> +        return NULL_RTX;
> +      break;
> +
> +    case US_MULT:
> +      result = wi::mul (pop0, pop1, UNSIGNED, &overflow);
> +      if (overflow != wi::OVF_NONE)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +      break;
> +
> +    case SH_MULT:
> +      result = wi::mul_high (pop0, pop1, SIGNED);
> +      break;
> +
> +    case UH_MULT:
> +      result = wi::mul_high (pop0, pop1, UNSIGNED);
> +      break;
> +
>      default:
>        return NULL_RTX;
>      }

sse2-mmx-paddsb-2.c:

/* { dg-do compile } */
/* { dg-options "-O2" } */

typedef char v8qi __attribute__ ((vector_size (8)));

char foo()
{
  v8qi tx = { 1, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 2, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_paddsb(tx, ty);
  return t[0];
}

char bar()
{
  v8qi tx = { 100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_paddsb(tx, ty);
  return t[0];
}

char baz()
{
  v8qi tx = { -100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { -100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_paddsb(tx, ty);
  return t[0];
}

/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$3," 1 } } */
/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$127," 1 } } */
/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$-128," 1 } } */
/* { dg-final { scan-assembler-not "paddsb\[ \\t\]+%xmm\[0-9\]+" } } */
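The paddsb test above checks that signed saturating byte addition is now
evaluated at compile time: 1 + 2 folds to 3, 100 + 100 saturates to 127,
and -100 + -100 saturates to -128.  As a reference for those expected
values, here is an editorial C sketch of the per-element semantics (the
helper name is illustrative, not part of the patch):

#include <stdint.h>

/* Illustrative helper, not part of the patch: signed 8-bit saturating
   addition, clamping the result to [-128, 127] as the SS_PLUS folding
   does per QImode element.  */
static inline int8_t
ss_add8 (int8_t x, int8_t y)
{
  int sum = x + y;       /* cannot overflow in int */
  if (sum > INT8_MAX)
    return INT8_MAX;     /* e.g. 100 + 100 -> 127 */
  if (sum < INT8_MIN)
    return INT8_MIN;     /* e.g. -100 + -100 -> -128 */
  return (int8_t) sum;
}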
sse2-mmx-paddusb-2.c:

/* { dg-do compile } */
/* { dg-options "-O2" } */

typedef char v8qi __attribute__ ((vector_size (8)));

char foo()
{
  v8qi tx = { 1, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 2, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_paddusb(tx, ty);
  return t[0];
}

char bar()
{
  v8qi tx = { 200, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 200, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_paddusb(tx, ty);
  return t[0];
}

/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$3," 1 } } */
/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$-1," 1 } } */
/* { dg-final { scan-assembler-not "paddusb\[ \\t\]+%xmm\[0-9\]+" } } */

sse2-mmx-subsb-2.c:

/* { dg-do compile } */
/* { dg-options "-O2" } */

typedef char v8qi __attribute__ ((vector_size (8)));

char foo()
{
  v8qi tx = { 5, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 2, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_psubsb(tx, ty);
  return t[0];
}

char bar()
{
  v8qi tx = { -100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_psubsb(tx, ty);
  return t[0];
}

char baz()
{
  v8qi tx = { 100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { -100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_psubsb(tx, ty);
  return t[0];
}

/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$3," 1 } } */
/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$-128," 1 } } */
/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$127," 1 } } */
/* { dg-final { scan-assembler-not "psubsb\[ \\t\]+%xmm\[0-9\]+" } } */

sse2-mmx-subusb-2.c:

/* { dg-do compile } */
/* { dg-options "-O2" } */

typedef char v8qi __attribute__ ((vector_size (8)));

char foo()
{
  v8qi tx = { 5, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 2, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_psubusb(tx, ty);
  return t[0];
}

char bar()
{
  v8qi tx = { 100, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 200, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_psubusb(tx, ty);
  return t[0];
}

/* { dg-final { scan-assembler-times "movl\[ \\t\]+\\\$3," 1 } } */
/* { dg-final { scan-assembler-times "xorl\[ \\t\]+" 1 } } */
/* { dg-final { scan-assembler-not "psubusb\[ \\t\]+%xmm\[0-9\]+" } } */
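The unsigned variants rely on the corresponding unsigned clamps: in
paddusb-2.c, 200 + 200 saturates to 255 (the byte pattern behind the
expected movl $-1), and in subusb-2.c, 100 - 200 saturates to 0 (hence
the xorl).  A matching editorial C sketch of those semantics, again with
illustrative helper names that are not part of the patch:

#include <stdint.h>

/* Illustrative helpers, not part of the patch: unsigned 8-bit
   saturating add and subtract, clamping to [0, 255] as the
   US_PLUS/US_MINUS foldings do per QImode element.  */
static inline uint8_t
us_add8 (uint8_t x, uint8_t y)
{
  unsigned sum = (unsigned) x + y;
  return sum > UINT8_MAX ? UINT8_MAX : (uint8_t) sum;  /* 200 + 200 -> 255 */
}

static inline uint8_t
us_sub8 (uint8_t x, uint8_t y)
{
  return x > y ? (uint8_t) (x - y) : 0;                /* 100 - 200 -> 0 */
}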
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 4acb941..7ed0c69 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -3125,7 +3125,7 @@ errors if these functions are not inlined everywhere they are called.
 @itemx -fno-modules-ts
 @opindex fmodules-ts
 @opindex fno-modules-ts
-Enable support for C++20 modules (@xref{C++ Modules}).  The
+Enable support for C++20 modules (@pxref{C++ Modules}).  The
 @option{-fno-modules-ts} is usually not needed, as that is the
 default.  Even though this is a C++20 feature, it is not currently
 implicitly enabled by selecting that standard version.
@@ -33553,7 +33553,7 @@ version selected, although in pre-C++20 versions, it is of course an
 extension.

 No new source file suffixes are required or supported.  If you wish to
-use a non-standard suffix (@xref{Overall Options}), you also need
+use a non-standard suffix (@pxref{Overall Options}), you also need
 to provide a @option{-x c++} option too.@footnote{Some users like to
 distinguish module interface files with a new suffix, such as naming
 the source @code{module.cppm}, which involves
@@ -33615,8 +33615,8 @@ to be resolved at the end of compilation.  Without this, imported
 macros are only resolved when expanded or (re)defined.  This option
 detects conflicting import definitions for all macros.

-@xref{C++ Module Mapper} for details of the @option{-fmodule-mapper}
-family of options.
+For details of the @option{-fmodule-mapper} family of options,
+@pxref{C++ Module Mapper}.

 @menu
 * C++ Module Mapper:: Module Mapper
@@ -33833,8 +33833,8 @@ dialect used and imports of the module.@footnote{The precise contents
 of this output may change.} The timestamp is the same value as that
 provided by the @code{__DATE__} & @code{__TIME__} macros, and may be
 explicitly specified with the environment variable
-@code{SOURCE_DATE_EPOCH}.  @xref{Environment Variables} for further
-details.
+@code{SOURCE_DATE_EPOCH}.  For further details
+@pxref{Environment Variables}.

 A set of related CMIs may be copied, provided the relative pathnames
 are preserved.
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 2b41cb7..ed35b8f 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -5776,11 +5776,13 @@ multiplication.
 @item @samp{smul@var{m}3_highpart}
 Perform a signed multiplication of operands 1 and 2, which have mode
 @var{m}, and store the most significant half of the product in operand 0.
-The least significant half of the product is discarded.
+The least significant half of the product is discarded.  This may be
+represented in RTL using a @code{smul_highpart} RTX expression.

 @cindex @code{umul@var{m}3_highpart} instruction pattern
 @item @samp{umul@var{m}3_highpart}
-Similar, but the multiplication is unsigned.
+Similar, but the multiplication is unsigned.  This may be represented
+in RTL using an @code{umul_highpart} RTX expression.

 @cindex @code{madd@var{m}@var{n}4} instruction pattern
 @item @samp{madd@var{m}@var{n}4}
diff --git a/gcc/doc/rtl.texi b/gcc/doc/rtl.texi
index e1e76a9..2058997 100644
--- a/gcc/doc/rtl.texi
+++ b/gcc/doc/rtl.texi
@@ -2524,7 +2524,19 @@ not be the same.
 For unsigned widening multiplication, use the same idiom, but with
 @code{zero_extend} instead of @code{sign_extend}.

+@findex smul_highpart
+@findex umul_highpart
+@cindex high-part multiplication
+@cindex multiplication high part
+@item (smul_highpart:@var{m} @var{x} @var{y})
+@itemx (umul_highpart:@var{m} @var{x} @var{y})
+Represents the high-part multiplication of @var{x} and @var{y} carried
+out in machine mode @var{m}.  @code{smul_highpart} returns the high part
+of a signed multiplication, @code{umul_highpart} returns the high part
+of an unsigned multiplication.
+
 @findex fma
+@cindex fused multiply-add
 @item (fma:@var{m} @var{x} @var{y} @var{z})
 Represents the @code{fma}, @code{fmaf}, and @code{fmal} builtin
 functions, which compute @samp{@var{x} * @var{y} + @var{z}}
diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
index 9876750..20f2c5d 100644
--- a/gcc/dwarf2out.c
+++ b/gcc/dwarf2out.c
@@ -16809,6 +16809,8 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
     case CONST_FIXED:
     case CLRSB:
     case CLOBBER:
+    case SMUL_HIGHPART:
+    case UMUL_HIGHPART:
       break;

     case CONST_STRING:
diff --git a/gcc/rtl.def b/gcc/rtl.def
index c80144b..5710a2e 100644
--- a/gcc/rtl.def
+++ b/gcc/rtl.def
@@ -467,6 +467,11 @@ DEF_RTL_EXPR(SS_MULT, "ss_mult", "ee", RTX_COMM_ARITH)
 /* Multiplication with unsigned saturation */
 DEF_RTL_EXPR(US_MULT, "us_mult", "ee", RTX_COMM_ARITH)

+/* Signed high-part multiplication.  */
+DEF_RTL_EXPR(SMUL_HIGHPART, "smul_highpart", "ee", RTX_COMM_ARITH)
+/* Unsigned high-part multiplication.  */
+DEF_RTL_EXPR(UMUL_HIGHPART, "umul_highpart", "ee", RTX_COMM_ARITH)
+
 /* Operand 0 divided by operand 1. */
 DEF_RTL_EXPR(DIV, "div", "ee", RTX_BIN_ARITH)
 /* Division with signed saturation */
diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
index ebad5cb..7e8e2c3 100644
--- a/gcc/simplify-rtx.c
+++ b/gcc/simplify-rtx.c
@@ -4142,11 +4142,36 @@ simplify_context::simplify_binary_operation_1 (rtx_code code,
     case US_PLUS:
     case SS_MINUS:
     case US_MINUS:
+      /* Simplify x +/- 0 to x, if possible.  */
+      if (trueop1 == CONST0_RTX (mode))
+        return op0;
+      return 0;
+
     case SS_MULT:
     case US_MULT:
+      /* Simplify x * 0 to 0, if possible.  */
+      if (trueop1 == CONST0_RTX (mode)
+          && !side_effects_p (op0))
+        return op1;
+
+      /* Simplify x * 1 to x, if possible.  */
+      if (trueop1 == CONST1_RTX (mode))
+        return op0;
+      return 0;
+
+    case SMUL_HIGHPART:
+    case UMUL_HIGHPART:
+      /* Simplify x * 0 to 0, if possible.  */
+      if (trueop1 == CONST0_RTX (mode)
+          && !side_effects_p (op0))
+        return op1;
+      return 0;
+
     case SS_DIV:
     case US_DIV:
-      /* ??? There are simplifications that can be done.  */
+      /* Simplify x / 1 to x, if possible.  */
+      if (trueop1 == CONST1_RTX (mode))
+        return op0;
       return 0;

     case VEC_SERIES:
@@ -5011,6 +5036,51 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
           }
           break;
         }
+
+    case SS_PLUS:
+      result = wi::add (pop0, pop1, SIGNED, &overflow);
+    clamp_signed_saturation:
+      if (overflow == wi::OVF_OVERFLOW)
+        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
+      else if (overflow == wi::OVF_UNDERFLOW)
+        result = wi::min_value (GET_MODE_PRECISION (int_mode), SIGNED);
+      else if (overflow != wi::OVF_NONE)
+        return NULL_RTX;
+      break;
+
+    case US_PLUS:
+      result = wi::add (pop0, pop1, UNSIGNED, &overflow);
+    clamp_unsigned_saturation:
+      if (overflow != wi::OVF_NONE)
+        result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
+      break;
+
+    case SS_MINUS:
+      result = wi::sub (pop0, pop1, SIGNED, &overflow);
+      goto clamp_signed_saturation;
+
+    case US_MINUS:
+      result = wi::sub (pop0, pop1, UNSIGNED, &overflow);
+      if (overflow != wi::OVF_NONE)
+        result = wi::min_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
+      break;
+
+    case SS_MULT:
+      result = wi::mul (pop0, pop1, SIGNED, &overflow);
+      goto clamp_signed_saturation;
+
+    case US_MULT:
+      result = wi::mul (pop0, pop1, UNSIGNED, &overflow);
+      goto clamp_unsigned_saturation;
+
+    case SMUL_HIGHPART:
+      result = wi::mul_high (pop0, pop1, SIGNED);
+      break;
+
+    case UMUL_HIGHPART:
+      result = wi::mul_high (pop0, pop1, UNSIGNED);
+      break;
+
     default:
       return NULL_RTX;
     }