From patchwork Tue Oct 29 20:06:43 2024
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 99790
X-Original-To: gcc-patches@gcc.gnu.org
From: Andrew Pinski
To:
CC: Andrew Pinski
Subject: [PUSHED] aarch64: Remove unnecessary casts to rtx_code [PR117349]
Date: Tue, 29 Oct 2024 13:06:43 -0700
Message-ID: <20241029200643.1238575-1-quic_apinski@quicinc.com>
In aarch64_gen_ccmp_first/aarch64_gen_ccmp_next, the casts were no longer
needed after r14-3412-gbf64392d66f291, which changed the type of the
arguments to rtx_code. In aarch64_rtx_costs, they were no longer needed
since r12-4828-g1d5c43db79b7ea, which changed the type of code to rtx_code.

Pushed as obvious after a build/test for aarch64-linux-gnu.

gcc/ChangeLog:

	PR target/117349
	* config/aarch64/aarch64.cc (aarch64_rtx_costs): Remove
	unnecessary casts to rtx_code.
	(aarch64_gen_ccmp_first): Likewise.
	(aarch64_gen_ccmp_next): Likewise.
Signed-off-by: Andrew Pinski
---
 gcc/config/aarch64/aarch64.cc | 51 +++++++++++++++--------------------
 1 file changed, 21 insertions(+), 30 deletions(-)

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index a6cc00e74ab..b2dd23ccb26 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -14286,7 +14286,7 @@ aarch64_rtx_costs (rtx x, machine_mode mode, int outer ATTRIBUTE_UNUSED,
 	  /* BFM.  */
 	  if (speed)
 	    *cost += extra_cost->alu.bfi;
-	  *cost += rtx_cost (op1, VOIDmode, (enum rtx_code) code, 1, speed);
+	  *cost += rtx_cost (op1, VOIDmode, code, 1, speed);
 	}
 
       return true;
@@ -14666,8 +14666,7 @@ cost_minus:
 	    *cost += extra_cost->alu.extend_arith;
 
 	  op1 = aarch64_strip_extend (op1, true);
-	  *cost += rtx_cost (op1, VOIDmode,
-			     (enum rtx_code) GET_CODE (op1), 0, speed);
+	  *cost += rtx_cost (op1, VOIDmode, GET_CODE (op1), 0, speed);
 	  return true;
 	}
@@ -14678,9 +14677,7 @@ cost_minus:
 	      || aarch64_shift_p (GET_CODE (new_op1)))
 	  && code != COMPARE)
 	{
-	  *cost += aarch64_rtx_mult_cost (new_op1, MULT,
-					  (enum rtx_code) code,
-					  speed);
+	  *cost += aarch64_rtx_mult_cost (new_op1, MULT, code, speed);
 	  return true;
 	}
@@ -14781,8 +14778,7 @@ cost_plus:
 	    *cost += extra_cost->alu.extend_arith;
 
 	  op0 = aarch64_strip_extend (op0, true);
-	  *cost += rtx_cost (op0, VOIDmode,
-			     (enum rtx_code) GET_CODE (op0), 0, speed);
+	  *cost += rtx_cost (op0, VOIDmode, GET_CODE (op0), 0, speed);
 	  return true;
 	}
@@ -14896,8 +14892,7 @@ cost_plus:
 	      && aarch64_mask_and_shift_for_ubfiz_p (int_mode, op1,
 						     XEXP (op0, 1)))
 	    {
-	      *cost += rtx_cost (XEXP (op0, 0), int_mode,
-				 (enum rtx_code) code, 0, speed);
+	      *cost += rtx_cost (XEXP (op0, 0), int_mode, code, 0, speed);
 	      if (speed)
 		*cost += extra_cost->alu.bfx;
@@ -14907,8 +14902,7 @@ cost_plus:
 	    {
 	      /* We possibly get the immediate for free, this is not
		 modelled.  */
-	      *cost += rtx_cost (op0, int_mode,
-				 (enum rtx_code) code, 0, speed);
+	      *cost += rtx_cost (op0, int_mode, code, 0, speed);
 	      if (speed)
 		*cost += extra_cost->alu.logical;
@@ -14943,10 +14937,8 @@ cost_plus:
 	    }
 
 	  /* In both cases we want to cost both operands.  */
-	  *cost += rtx_cost (new_op0, int_mode, (enum rtx_code) code,
-			     0, speed);
-	  *cost += rtx_cost (op1, int_mode, (enum rtx_code) code,
-			     1, speed);
+	  *cost += rtx_cost (new_op0, int_mode, code, 0, speed);
+	  *cost += rtx_cost (op1, int_mode, code, 1, speed);
 
	  return true;
 	}
@@ -14967,7 +14959,7 @@ cost_plus:
       /* MVN-shifted-reg.  */
       if (op0 != x)
 	{
-	  *cost += rtx_cost (op0, mode, (enum rtx_code) code, 0, speed);
+	  *cost += rtx_cost (op0, mode, code, 0, speed);
 
	  if (speed)
 	    *cost += extra_cost->alu.log_shift;
@@ -14983,7 +14975,7 @@ cost_plus:
 	  rtx newop1 = XEXP (op0, 1);
 	  rtx op0_stripped = aarch64_strip_shift (newop0);
 
-	  *cost += rtx_cost (newop1, mode, (enum rtx_code) code, 1, speed);
+	  *cost += rtx_cost (newop1, mode, code, 1, speed);
 	  *cost += rtx_cost (op0_stripped, mode, XOR, 0, speed);
 
	  if (speed)
@@ -15149,7 +15141,7 @@ cost_plus:
 	  && known_eq (INTVAL (XEXP (op1, 1)),
		       GET_MODE_BITSIZE (mode) - 1))
 	{
-	  *cost += rtx_cost (op0, mode, (rtx_code) code, 0, speed);
+	  *cost += rtx_cost (op0, mode, code, 0, speed);
 	  /* We already demanded XEXP (op1, 0) to be REG_P, so
	     don't recurse into it.  */
 	  return true;
@@ -15212,7 +15204,7 @@ cost_plus:
       /* We can trust that the immediates used will be correct (there
	 are no by-register forms), so we need only cost op0.  */
-      *cost += rtx_cost (XEXP (x, 0), VOIDmode, (enum rtx_code) code, 0, speed);
+      *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed);
       return true;
 
     case MULT:
@@ -15402,12 +15394,11 @@ cost_plus:
 	       && aarch64_vec_fpconst_pow_of_2 (XEXP (x, 1)) > 0)
 	      || aarch64_fpconst_pow_of_2 (XEXP (x, 1)) > 0))
 	{
-	  *cost += rtx_cost (XEXP (x, 0), VOIDmode, (rtx_code) code,
-			     0, speed);
+	  *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed);
 	  return true;
 	}
 
-      *cost += rtx_cost (x, VOIDmode, (enum rtx_code) code, 0, speed);
+      *cost += rtx_cost (x, VOIDmode, code, 0, speed);
       return true;
 
     case ABS:
@@ -27369,13 +27360,13 @@ aarch64_gen_ccmp_first (rtx_insn **prep_seq, rtx_insn **gen_seq,
     case E_SFmode:
       cmp_mode = SFmode;
-      cc_mode = aarch64_select_cc_mode ((rtx_code) code, op0, op1);
+      cc_mode = aarch64_select_cc_mode (code, op0, op1);
       icode = cc_mode == CCFPEmode ? CODE_FOR_fcmpesf : CODE_FOR_fcmpsf;
       break;
 
     case E_DFmode:
       cmp_mode = DFmode;
-      cc_mode = aarch64_select_cc_mode ((rtx_code) code, op0, op1);
+      cc_mode = aarch64_select_cc_mode (code, op0, op1);
       icode = cc_mode == CCFPEmode ? CODE_FOR_fcmpedf : CODE_FOR_fcmpdf;
       break;
@@ -27406,7 +27397,7 @@ aarch64_gen_ccmp_first (rtx_insn **prep_seq, rtx_insn **gen_seq,
   *gen_seq = get_insns ();
   end_sequence ();
 
-  return gen_rtx_fmt_ee ((rtx_code) code, cc_mode,
+  return gen_rtx_fmt_ee (code, cc_mode,
			 gen_rtx_REG (cc_mode, CC_REGNUM), const0_rtx);
 }
@@ -27443,12 +27434,12 @@ aarch64_gen_ccmp_next (rtx_insn **prep_seq, rtx_insn **gen_seq, rtx prev,
     case E_SFmode:
       cmp_mode = SFmode;
-      cc_mode = aarch64_select_cc_mode ((rtx_code) cmp_code, op0, op1);
+      cc_mode = aarch64_select_cc_mode (cmp_code, op0, op1);
       break;
 
     case E_DFmode:
       cmp_mode = DFmode;
-      cc_mode = aarch64_select_cc_mode ((rtx_code) cmp_code, op0, op1);
+      cc_mode = aarch64_select_cc_mode (cmp_code, op0, op1);
       break;
 
     default:
@@ -27469,7 +27460,7 @@ aarch64_gen_ccmp_next (rtx_insn **prep_seq, rtx_insn **gen_seq, rtx prev,
   end_sequence ();
 
   target = gen_rtx_REG (cc_mode, CC_REGNUM);
-  aarch64_cond = aarch64_get_condition_code_1 (cc_mode, (rtx_code) cmp_code);
+  aarch64_cond = aarch64_get_condition_code_1 (cc_mode, cmp_code);
 
   if (bit_code != AND)
     {
@@ -27508,7 +27499,7 @@ aarch64_gen_ccmp_next (rtx_insn **prep_seq, rtx_insn **gen_seq, rtx prev,
   *gen_seq = get_insns ();
   end_sequence ();
 
-  return gen_rtx_fmt_ee ((rtx_code) cmp_code, VOIDmode, target, const0_rtx);
+  return gen_rtx_fmt_ee (cmp_code, VOIDmode, target, const0_rtx);
 }
 
 #undef TARGET_GEN_CCMP_FIRST