From patchwork Mon Jun 20 14:23:33 2022
X-Patchwork-Submitter: Arjun Shankar
X-Patchwork-Id: 55188
From: Arjun Shankar
To: GCC Patches
Cc: Jakub Jelinek
Subject: [PATCH v4] tree-optimization/94899: Remove "+ 0x80000000" in int comparisons
Date: Mon, 20 Jun 2022 16:23:33 +0200
Message-Id: <20220620142333.33065-1-arjun@redhat.com>

Expressions of the form "X + CST < Y + CST", where:

* CST is an unsigned integer constant with only the MSB set, and
* X and Y's types have integer conversion ranks <= CST's,

can be simplified to "(signed) X < (signed) Y".

This is because, assuming 32-bit signed numbers, (unsigned) INT_MIN +
0x80000000 is 0 and (unsigned) INT_MAX + 0x80000000 is UINT_MAX, i.e.
the result increases monotonically with the signed input.

This means:

  ((signed) X < (signed) Y) iff (X + 0x80000000 < Y + 0x80000000)
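For illustration only (this standalone snippet is not part of the patch,
and the function names are made up), the following program checks that the
two forms of the comparison agree on a few values. The (int) casts of
out-of-range unsigned values rely on GCC's documented modulo-2^N conversion
behavior; the transformation itself happens on GIMPLE, where the conversion
inserted via signed_type_for is well-defined.

#include <assert.h>
#include <limits.h>

/* Both helpers compute the same predicate: adding 0x80000000 maps
   (unsigned) INT_MIN to 0 and (unsigned) INT_MAX to UINT_MAX, so the
   biased unsigned comparison agrees with the signed comparison.  */

static int
cmp_biased (unsigned x, unsigned y)
{
  return x + 0x80000000u < y + 0x80000000u;
}

static int
cmp_signed (unsigned x, unsigned y)
{
  return (int) x < (int) y;
}

int
main (void)
{
  unsigned tests[] = { 0u, 1u, (unsigned) -1, (unsigned) INT_MIN,
                       (unsigned) INT_MAX, 0x12345678u };
  unsigned n = sizeof tests / sizeof tests[0];

  for (unsigned i = 0; i < n; i++)
    for (unsigned j = 0; j < n; j++)
      assert (cmp_biased (tests[i], tests[j])
              == cmp_signed (tests[i], tests[j]));
  return 0;
}

With the new rule, the two helpers should compile to identical code at -O2,
which is what the new test checks by verifying that the additions are
optimized away.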
gcc/
	* match.pd (X + C < Y + C -> (signed) X < (signed) Y, if C is
	0x80000000): New simplification.

gcc/testsuite/
	* gcc.dg/pr94899.c: New test.
---
 gcc/match.pd                   | 13 +++++++++
 gcc/testsuite/gcc.dg/pr94899.c | 49 ++++++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/pr94899.c
---
v3: https://gcc.gnu.org/pipermail/gcc-patches/2022-June/596785.html

Notes on v4, based on Richard and Jakub's review comments:

Richard wrote:
> It might be possible to test for zero + or - operations instead?

OK. That seems more fool-proof. I've made the change.

Jakub wrote:
> Can't one just omit the INTEGER_CST part on the second @0?

I hadn't thought of that. Done!

> As a follow-up, it might be useful to make it work for vector integral
> types too,
> typedef unsigned V __attribute__((vector_size (4 * sizeof (int))));
> #define M __INT_MAX__ + 1U
> V foo (V x, V y)
> {
>   return x + (V) { M, M, M, M } < y + (V) { M, M, M, M };
> }
> using uniform_integer_cst_p.

OK. This syntax is unfamiliar to me. I'll read a bit and then try to work
on a follow-up.

Thanks!

diff --git a/gcc/match.pd b/gcc/match.pd
index a63b649841b..4a570894b2e 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -2089,6 +2089,19 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
   (if (ANY_INTEGRAL_TYPE_P (TREE_TYPE (@0))
       && TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (@0)))
    (op @0 @1))))
+
+/* As a special case, X + C < Y + C is the same as (signed) X < (signed) Y
+   when C is an unsigned integer constant with only the MSB set, and X and
+   Y have types of equal or lower integer conversion rank than C's.  */
+(for op (lt le ge gt)
+ (simplify
+  (op (plus @1 INTEGER_CST@0) (plus @2 @0))
+  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+       && TYPE_UNSIGNED (TREE_TYPE (@0))
+       && wi::only_sign_bit_p (wi::to_wide (@0)))
+   (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
+    (op (convert:stype @1) (convert:stype @2))))))
+
 /* For equality and subtraction, this is also true with wrapping overflow.  */
 (for op (eq ne minus)
  (simplify
diff --git a/gcc/testsuite/gcc.dg/pr94899.c b/gcc/testsuite/gcc.dg/pr94899.c
new file mode 100644
index 00000000000..2fc7009a2e7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr94899.c
@@ -0,0 +1,49 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+typedef __INT16_TYPE__ int16_t;
+typedef __INT32_TYPE__ int32_t;
+typedef __UINT16_TYPE__ uint16_t;
+typedef __UINT32_TYPE__ uint32_t;
+
+#define MAGIC (~ (uint32_t) 0 / 2 + 1)
+
+int
+f_i16_i16 (int16_t x, int16_t y)
+{
+  return x + MAGIC < y + MAGIC;
+}
+
+int
+f_i16_i32 (int16_t x, int32_t y)
+{
+  return x + MAGIC < y + MAGIC;
+}
+
+int
+f_i32_i32 (int32_t x, int32_t y)
+{
+  return x + MAGIC < y + MAGIC;
+}
+
+int
+f_u32_i32 (uint32_t x, int32_t y)
+{
+  return x + MAGIC < y + MAGIC;
+}
+
+int
+f_u32_u32 (uint32_t x, uint32_t y)
+{
+  return x + MAGIC < y + MAGIC;
+}
+
+int
+f_i32_i32_sub (int32_t x, int32_t y)
+{
+  return x - MAGIC < y - MAGIC;
+}
+
+/* The addition/subtraction of constants should be optimized away. */
+/* { dg-final { scan-tree-dump-not "\\+" "optimized"} } */
+/* { dg-final { scan-tree-dump-not "\\-" "optimized"} } */