Message ID | AS4PR08MB7901CEA2D310CDB76A47600C835A9@AS4PR08MB7901.eurprd08.prod.outlook.com |
---|---|
State | New |
Headers |
Return-Path: <gcc-patches-bounces+patchwork=sourceware.org@gcc.gnu.org>
From: Wilco Dijkstra via Gcc-patches <gcc-patches@gcc.gnu.org>
Reply-To: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
To: GCC Patches <gcc-patches@gcc.gnu.org>
Cc: Richard Sandiford <Richard.Sandiford@arm.com>
Subject: [PATCH][AArch64] Improve immediate expansion [PR106583]
Date: Tue, 4 Oct 2022 17:22:35 +0000
Message-ID: <AS4PR08MB7901CEA2D310CDB76A47600C835A9@AS4PR08MB7901.eurprd08.prod.outlook.com>
List-Id: Gcc-patches mailing list <gcc-patches.gcc.gnu.org> |
Series |
[AArch64] Improve immediate expansion [PR106583]
|
|
Commit Message
Wilco Dijkstra
Oct. 4, 2022, 5:22 p.m. UTC
Improve immediate expansion of immediates which can be created from a
bitmask immediate and 2 MOVKs.  This reduces the number of 4-instruction
immediates in SPECINT/FP by 10-15%.

Passes regress, OK for commit?

gcc/ChangeLog:

	PR target/106583
	* config/aarch64/aarch64.cc (aarch64_internal_mov_immediate)
	Add support for a bitmask immediate with 2 MOVKs.

gcc/testsuite:
	PR target/106583
	* gcc.target/aarch64/pr106583.c: Add new test.

---
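For context, a generic 64-bit immediate is materialised from up to four 16-bit chunks (one MOVZ plus up to three MOVKs), and all-zero chunks can be skipped. A rough Python model of that baseline (the helper name and register choice are mine, not GCC's) shows why values with four distinct non-zero chunks cost 4 instructions before this patch:

```python
def naive_mov_sequence(val):
    """Model the generic MOVZ + MOVK expansion of a 64-bit immediate.

    All-zero 16-bit chunks are skipped (MOVZ clears the rest of the
    register), so the instruction count equals the number of non-zero
    chunks.  Simplified sketch, not GCC's actual code.
    """
    chunks = [(shift, (val >> shift) & 0xffff) for shift in range(0, 64, 16)]
    nonzero = [(s, c) for s, c in chunks if c != 0]
    insns = []
    for idx, (shift, chunk) in enumerate(nonzero):
        op = "movz" if idx == 0 else "movk"
        insns.append(f"{op} x0, #0x{chunk:x}, lsl #{shift}")
    return insns

# 0x12345678aaaaaaaa has four distinct non-zero chunks -> 4 instructions,
# which is exactly the case the bitmask + 2 MOVK path shortens to 3.
print(len(naive_mov_sequence(0x12345678aaaaaaaa)))  # 4
```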
Comments
Wilco Dijkstra <Wilco.Dijkstra@arm.com> writes:
> Improve immediate expansion of immediates which can be created from a
> bitmask immediate and 2 MOVKs. This reduces the number of 4-instruction
> immediates in SPECINT/FP by 10-15%.
>
> Passes regress, OK for commit?
>
> gcc/ChangeLog:
>
>	PR target/106583
>	* config/aarch64/aarch64.cc (aarch64_internal_mov_immediate)
>	Add support for a bitmask immediate with 2 MOVKs.
>
> gcc/testsuite:
>	PR target/106583
>	* gcc.target/aarch64/pr106583.c: Add new test.

Nice.  Did you consider handling the case where the movks aren't
for consecutive bitranges?  E.g. the patch handles:

  0x12345678aaaaaaaa

and:

  0x1234cccccccc5678

but it looks like it would be fairly easy to extend it to:

  0x1234cccc5678cccc

too.

Also, could you commonise:

  val2 = val & ~mask;
  if (val2 != val && aarch64_bitmask_imm (val2, mode))
    break;
  val2 = val | mask;
  if (val2 != val && aarch64_bitmask_imm (val2, mode))
    break;
  val2 = val2 & ~mask;
  val2 = val2 | (((val2 >> 32) | (val2 << 32)) & mask);
  if (val2 != val && aarch64_bitmask_imm (val2, mode))
    break;

?  It's subtle enough that IMO it'd be better not to cut-&-paste it.

Thanks,
Richard

> ---
>
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index 926e81f028c82aac9a5fecc18f921f84399c24ae..1601d11710cb6132c80a77bb4fe2f8429519aa5a 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -5568,7 +5568,7 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
>    one_match = ((~val & mask) == 0) + ((~val & (mask << 16)) == 0) +
>      ((~val & (mask << 32)) == 0) + ((~val & (mask << 48)) == 0);
>
> -  if (zero_match != 2 && one_match != 2)
> +  if (zero_match < 2 && one_match < 2)
>      {
>        /* Try emitting a bitmask immediate with a movk replacing 16 bits.
>           For a 64-bit bitmask try whether changing 16 bits to all ones or
> @@ -5600,6 +5600,43 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
>          }
>      }
>
> +  /* Try a bitmask plus 2 movk to generate the immediate in 3 instructions.  */
> +  if (zero_match + one_match == 0)
> +    {
> +      mask = 0xffffffff;
> +
> +      for (i = 0; i < 64; i += 16)
> +        {
> +          val2 = val & ~mask;
> +          if (aarch64_bitmask_imm (val2, mode))
> +            break;
> +          val2 = val | mask;
> +          if (aarch64_bitmask_imm (val2, mode))
> +            break;
> +          val2 = val2 & ~mask;
> +          val2 = val2 | (((val2 >> 32) | (val2 << 32)) & mask);
> +          if (aarch64_bitmask_imm (val2, mode))
> +            break;
> +
> +          mask = (mask << 16) | (mask >> 48);
> +        }
> +
> +      if (i != 64)
> +        {
> +          if (generate)
> +            {
> +              emit_insn (gen_rtx_SET (dest, GEN_INT (val2)));
> +              emit_insn (gen_insv_immdi (dest, GEN_INT (i),
> +                                         GEN_INT ((val >> i) & 0xffff)));
> +              i = (i + 16) & 63;
> +              emit_insn (gen_insv_immdi (dest, GEN_INT (i),
> +                                         GEN_INT ((val >> i) & 0xffff)));
> +            }
> +
> +          return 3;
> +        }
> +    }
> +
>    /* Generate 2-4 instructions, skipping 16 bits of all zeroes or ones which
>       are emitted by the initial mov.  If one_match > zero_match, skip set bits,
>       otherwise skip zero bits.  */
> diff --git a/gcc/testsuite/gcc.target/aarch64/pr106583.c b/gcc/testsuite/gcc.target/aarch64/pr106583.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..f0a027a0950e506d4ddaacce5e151f57070948dc
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/pr106583.c
> @@ -0,0 +1,30 @@
> +/* { dg-do assemble } */
> +/* { dg-options "-O2 --save-temps" } */
> +
> +long f1 (void)
> +{
> +  return 0x7efefefefefefeff;
> +}
> +
> +long f2 (void)
> +{
> +  return 0x12345678aaaaaaaa;
> +}
> +
> +long f3 (void)
> +{
> +  return 0x1234cccccccc5678;
> +}
> +
> +long f4 (void)
> +{
> +  return 0x7777123456787777;
> +}
> +
> +long f5 (void)
> +{
> +  return 0x5555555512345678;
> +}
> +
> +/* { dg-final { scan-assembler-times {\tmovk\t} 10 } } */
> +/* { dg-final { scan-assembler-times {\tmov\t} 5 } } */
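The search the v1 patch performs can be modelled end to end in a few lines of Python. This is a simplified re-implementation, not GCC's code: `is_bitmask_imm` is a naive version of the AArch64 logical-immediate test (a power-of-two-sized pattern replicated across 64 bits, where some rotation of the pattern is a contiguous run of ones), and `expand3` mirrors the patch's loop that tries clearing the 32 masked bits, setting them, or copying them from the other 32-bit half:

```python
M64 = (1 << 64) - 1

def is_bitmask_imm(val):
    """Naive AArch64 logical ("bitmask") immediate test.  All-zeroes and
    all-ones are invalid; otherwise some power-of-two-sized pattern must
    replicate to VAL and have a rotation that is a contiguous run of ones."""
    if val == 0 or val == M64:
        return False
    for size in (2, 4, 8, 16, 32, 64):
        pat = val & ((1 << size) - 1)
        rep = 0
        for sh in range(0, 64, size):
            rep |= pat << sh
        if rep != val:
            continue
        for r in range(size):
            rot = ((pat >> r) | (pat << (size - r))) & ((1 << size) - 1)
            if rot != 0 and (rot & (rot + 1)) == 0:  # ones run starting at bit 0
                return True
    return False

def check_bitmask(val, mask):
    """Try making VAL a bitmask immediate by changing its MASK bits to
    zeroes, to ones, or to a copy of the other 32-bit half (patch order)."""
    v = val & ~mask & M64
    copied = v | ((((v >> 32) | (v << 32)) & M64) & mask)
    for cand in (v, (val | mask) & M64, copied):
        if cand != val and is_bitmask_imm(cand):
            return cand
    return None

def expand3(val):
    """Find a MOV(bitmask) + 2x MOVK decomposition, like the patch's
    3-instruction path: the mask covers 32 bits starting at chunk i."""
    for i in range(0, 64, 16):
        mask = ((0xffffffff << i) | (0xffffffff >> (64 - i))) & M64
        base = check_bitmask(val, mask)
        if base is not None:
            j = (i + 16) % 64
            return base, [(i, (val >> i) & 0xffff), (j, (val >> j) & 0xffff)]
    return None
```

For example, `expand3(0x12345678aaaaaaaa)` finds the base `0xaaaaaaaaaaaaaaaa` (a valid bitmask immediate) with MOVKs of `0x5678` at bit 32 and `0x1234` at bit 48, i.e. three instructions instead of four.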
Hi Richard,

> Did you consider handling the case where the movks aren't for
> consecutive bitranges? E.g. the patch handles:

> but it looks like it would be fairly easy to extend it to:
>
> 0x1234cccc5678cccc

Yes, with a more general search loop we can get that case too - it
doesn't trigger much though.  The code that checks for this is now
refactored into a new function.  Given there are now many more calls
to aarch64_bitmask_imm, I added a streamlined internal entry point
that assumes the input is 64-bit.

Cheers,
Wilco

[PATCH v2][AArch64] Improve immediate expansion [PR106583]

Improve immediate expansion of immediates which can be created from a
bitmask immediate and 2 MOVKs.  Simplify, refactor and improve
efficiency of bitmask checks.  This reduces the number of 4-instruction
immediates in SPECINT/FP by 10-15%.

Passes regress, OK for commit?

gcc/ChangeLog:

	PR target/106583
	* config/aarch64/aarch64.cc (aarch64_internal_mov_immediate)
	Add support for a bitmask immediate with 2 MOVKs.
	(aarch64_check_bitmask): New function after refactorization.
	(aarch64_replicate_bitmask_imm): Remove function, merge into...
	(aarch64_bitmask_imm): Simplify replication of small modes.
	Split function into 64-bit only version for efficiency.

gcc/testsuite:
	PR target/106583
	* gcc.target/aarch64/pr106583.c: Add new test.

---

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 926e81f028c82aac9a5fecc18f921f84399c24ae..b2d9c7380975028131d0fe731a97b3909874b87b 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -306,6 +306,7 @@ static machine_mode aarch64_simd_container_mode (scalar_mode, poly_int64);
 static bool aarch64_print_address_internal (FILE*, machine_mode, rtx,
                                             aarch64_addr_query_type);
 static HOST_WIDE_INT aarch64_clamp_to_uimm12_shift (HOST_WIDE_INT val);
+static bool aarch64_bitmask_imm (unsigned HOST_WIDE_INT);
 
 /* The processor for which instructions should be scheduled.  */
 enum aarch64_processor aarch64_tune = cortexa53;
@@ -5502,6 +5503,30 @@ aarch64_output_sve_vector_inc_dec (const char *operands, rtx x)
                                           factor, nelts_per_vq);
 }
 
+/* Return true if the immediate VAL can be a bitfield immediate
+   by changing the given MASK bits in VAL to zeroes, ones or bits
+   from the other half of VAL.  Return the new immediate in VAL2.  */
+static inline bool
+aarch64_check_bitmask (unsigned HOST_WIDE_INT val,
+                       unsigned HOST_WIDE_INT &val2,
+                       unsigned HOST_WIDE_INT mask)
+{
+  val2 = val & ~mask;
+  if (val2 != val && aarch64_bitmask_imm (val2))
+    return true;
+  val2 = val | mask;
+  if (val2 != val && aarch64_bitmask_imm (val2))
+    return true;
+  val = val & ~mask;
+  val2 = val | (((val >> 32) | (val << 32)) & mask);
+  if (val2 != val && aarch64_bitmask_imm (val2))
+    return true;
+  val2 = val | (((val >> 16) | (val << 48)) & mask);
+  if (val2 != val && aarch64_bitmask_imm (val2))
+    return true;
+  return false;
+}
+
 static int
 aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
                                 scalar_int_mode mode)
@@ -5568,36 +5593,43 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
   one_match = ((~val & mask) == 0) + ((~val & (mask << 16)) == 0) +
     ((~val & (mask << 32)) == 0) + ((~val & (mask << 48)) == 0);
 
-  if (zero_match != 2 && one_match != 2)
+  if (zero_match < 2 && one_match < 2)
     {
       /* Try emitting a bitmask immediate with a movk replacing 16 bits.
          For a 64-bit bitmask try whether changing 16 bits to all ones or
         zeroes creates a valid bitmask.  To check any repeated bitmask,
         try using 16 bits from the other 32-bit half of val.  */
 
-      for (i = 0; i < 64; i += 16, mask <<= 16)
-        {
-          val2 = val & ~mask;
-          if (val2 != val && aarch64_bitmask_imm (val2, mode))
-            break;
-          val2 = val | mask;
-          if (val2 != val && aarch64_bitmask_imm (val2, mode))
-            break;
-          val2 = val2 & ~mask;
-          val2 = val2 | (((val2 >> 32) | (val2 << 32)) & mask);
-          if (val2 != val && aarch64_bitmask_imm (val2, mode))
-            break;
-        }
-      if (i != 64)
-        {
-          if (generate)
+      for (i = 0; i < 64; i += 16)
+        if (aarch64_check_bitmask (val, val2, mask << i))
+          {
+            if (generate)
+              {
+                emit_insn (gen_rtx_SET (dest, GEN_INT (val2)));
+                emit_insn (gen_insv_immdi (dest, GEN_INT (i),
+                                           GEN_INT ((val >> i) & 0xffff)));
+              }
+            return 2;
+          }
+    }
+
+  /* Try a bitmask plus 2 movk to generate the immediate in 3 instructions.  */
+  if (zero_match + one_match == 0)
+    {
+      for (i = 0; i < 48; i += 16)
+        for (int j = i + 16; j < 64; j += 16)
+          if (aarch64_check_bitmask (val, val2, (mask << i) | (mask << j)))
             {
-              emit_insn (gen_rtx_SET (dest, GEN_INT (val2)));
-              emit_insn (gen_insv_immdi (dest, GEN_INT (i),
-                                         GEN_INT ((val >> i) & 0xffff)));
+              if (generate)
+                {
+                  emit_insn (gen_rtx_SET (dest, GEN_INT (val2)));
+                  emit_insn (gen_insv_immdi (dest, GEN_INT (i),
+                                             GEN_INT ((val >> i) & 0xffff)));
+                  emit_insn (gen_insv_immdi (dest, GEN_INT (j),
+                                             GEN_INT ((val >> j) & 0xffff)));
+                }
+              return 3;
             }
-          return 2;
-        }
     }
 
   /* Generate 2-4 instructions, skipping 16 bits of all zeroes or ones which
@@ -10168,22 +10200,6 @@ aarch64_movk_shift (const wide_int_ref &and_val,
   return -1;
 }
 
-/* VAL is a value with the inner mode of MODE.  Replicate it to fill a
-   64-bit (DImode) integer.  */
-
-static unsigned HOST_WIDE_INT
-aarch64_replicate_bitmask_imm (unsigned HOST_WIDE_INT val, machine_mode mode)
-{
-  unsigned int size = GET_MODE_UNIT_PRECISION (mode);
-  while (size < 64)
-    {
-      val &= (HOST_WIDE_INT_1U << size) - 1;
-      val |= val << size;
-      size *= 2;
-    }
-  return val;
-}
-
 /* Multipliers for repeating bitmasks of width 32, 16, 8, 4, and 2.  */
 
 static const unsigned HOST_WIDE_INT bitmask_imm_mul[] =
@@ -10196,26 +10212,42 @@ static const unsigned HOST_WIDE_INT bitmask_imm_mul[] =
 };
 
 
-/* Return true if val is a valid bitmask immediate.  */
-
+/* Return true if val is a valid bitmask immediate for any mode.  */
 bool
 aarch64_bitmask_imm (HOST_WIDE_INT val_in, machine_mode mode)
 {
-  unsigned HOST_WIDE_INT val, tmp, mask, first_one, next_one;
+  if (mode == DImode)
+    return aarch64_bitmask_imm (val_in);
+
+  unsigned HOST_WIDE_INT val = val_in;
+
+  if (mode == SImode)
+    return aarch64_bitmask_imm ((val & 0xffffffff) | (val << 32));
+
+  /* Replicate small immediates to fit 64 bits.  */
+  int size = GET_MODE_UNIT_PRECISION (mode);
+  val &= (HOST_WIDE_INT_1U << size) - 1;
+  val *= bitmask_imm_mul[__builtin_clz (size) - 26];
+
+  return aarch64_bitmask_imm (val);
+}
+
+
+/* Return true if 64-bit val is a valid bitmask immediate.  */
+
+static bool
+aarch64_bitmask_imm (unsigned HOST_WIDE_INT val)
+{
+  unsigned HOST_WIDE_INT tmp, mask, first_one, next_one;
   int bits;
 
   /* Check for a single sequence of one bits and return quickly if so.
      The special cases of all ones and all zeroes returns false.  */
-  val = aarch64_replicate_bitmask_imm (val_in, mode);
   tmp = val + (val & -val);
 
   if (tmp == (tmp & -tmp))
     return (val + 1) > 1;
 
-  /* Replicate 32-bit immediates so we can treat them as 64-bit.  */
-  if (mode == SImode)
-    val = (val << 32) | (val & 0xffffffff);
-
   /* Invert if the immediate doesn't start with a zero bit - this means we
      only need to search for sequences of one bits.  */
   if (val & 1)
diff --git a/gcc/testsuite/gcc.target/aarch64/pr106583.c b/gcc/testsuite/gcc.target/aarch64/pr106583.c
new file mode 100644
index 0000000000000000000000000000000000000000..0f931580817d78dc1cc58f03b251bd21bec71f59
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/pr106583.c
@@ -0,0 +1,41 @@
+/* { dg-do assemble } */
+/* { dg-options "-O2 --save-temps" } */
+
+long f1 (void)
+{
+  return 0x7efefefefefefeff;
+}
+
+long f2 (void)
+{
+  return 0x12345678aaaaaaaa;
+}
+
+long f3 (void)
+{
+  return 0x1234cccccccc5678;
+}
+
+long f4 (void)
+{
+  return 0x7777123456787777;
+}
+
+long f5 (void)
+{
+  return 0x5555555512345678;
+}
+
+long f6 (void)
+{
+  return 0x1234bbbb5678bbbb;
+}
+
+long f7 (void)
+{
+  return 0x4444123444445678;
+}
+
+
+/* { dg-final { scan-assembler-times {\tmovk\t} 14 } } */
+/* { dg-final { scan-assembler-times {\tmov\t} 7 } } */
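One refactoring in v2 replaces the old doubling loop of `aarch64_replicate_bitmask_imm` with a single multiply by a repeating constant, indexed by `__builtin_clz (size) - 26`. The equivalence can be checked in Python; the table values below are copied from the patch's `bitmask_imm_mul[]`, and `32 - size.bit_length()` models `__builtin_clz` on a 32-bit unsigned int:

```python
M64 = (1 << 64) - 1

# Multipliers for widths 32, 16, 8, 4 and 2, in the same order as the
# patch's bitmask_imm_mul[] table (index = __builtin_clz (size) - 26).
bitmask_imm_mul = [
    0x0000000100000001,  # width 32
    0x0001000100010001,  # width 16
    0x0101010101010101,  # width 8
    0x1111111111111111,  # width 4
    0x5555555555555555,  # width 2
]

def replicate_mul(val, size):
    """Replicate a SIZE-bit value across 64 bits with one multiply."""
    val &= (1 << size) - 1
    clz = 32 - size.bit_length()  # __builtin_clz on a 32-bit int
    return (val * bitmask_imm_mul[clz - 26]) & M64

def replicate_loop(val, size):
    """The removed aarch64_replicate_bitmask_imm doubling loop."""
    while size < 64:
        val &= (1 << size) - 1
        val |= val << size
        size *= 2
    return val & M64

for size in (2, 4, 8, 16, 32):
    for val in (0, 1, (1 << size) - 1, 0x5, 0xab, 0x1234):
        assert replicate_mul(val, size) == replicate_loop(val, size)
```

The multiply works because each multiplier has a 1 bit at every multiple of the pattern width, so the product is the sum of non-overlapping shifted copies of the (masked) value.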
Wilco Dijkstra via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> Hi Richard,
>
>> Did you consider handling the case where the movks aren't for
>> consecutive bitranges? E.g. the patch handles:
>
>> but it looks like it would be fairly easy to extend it to:
>>
>> 0x1234cccc5678cccc
>
> Yes, with a more general search loop we can get that case too -
> it doesn't trigger much though.  The code that checks for this is
> now refactored into a new function.  Given there are now many
> more calls to aarch64_bitmask_imm, I added a streamlined internal
> entry point that assumes the input is 64-bit.

Sounds good, but could you put it before the mode version,
to avoid the forward declaration?

OK with that change, thanks.

Richard

[...]
Hi Richard,

>> Yes, with a more general search loop we can get that case too -
>> it doesn't trigger much though. The code that checks for this is
>> now refactored into a new function. Given there are now many
>> more calls to aarch64_bitmask_imm, I added a streamlined internal
>> entry point that assumes the input is 64-bit.
>
> Sounds good, but could you put it before the mode version,
> to avoid the forward declaration?

I can swap them around, but the forward declaration is still required, as
aarch64_check_bitmask is 5000 lines before aarch64_bitmask_imm.

Cheers,
Wilco
Wilco Dijkstra <Wilco.Dijkstra@arm.com> writes:
> Hi Richard,
>
>>> Yes, with a more general search loop we can get that case too -
>>> it doesn't trigger much though. The code that checks for this is
>>> now refactored into a new function. Given there are now many
>>> more calls to aarch64_bitmask_imm, I added a streamlined internal
>>> entry point that assumes the input is 64-bit.
>>
>> Sounds good, but could you put it before the mode version,
>> to avoid the forward declaration?
>
> I can swap them around but the forward declaration is still required as
> aarch64_check_bitmask is 5000 lines before aarch64_bitmask_imm.

OK, how about moving them both above aarch64_check_bitmask?

Richard
Hi Richard,

>>> Sounds good, but could you put it before the mode version,
>>> to avoid the forward declaration?
>>
>> I can swap them around but the forward declaration is still required as
>> aarch64_check_bitmask is 5000 lines before aarch64_bitmask_imm.
>
> OK, how about moving them both above aarch64_check_bitmask?

Sure, I've moved them as well as all related helper functions - it makes the
diff quite large, but they are all together now, which makes sense. I also
refactored aarch64_mov_imm to handle the case of a 64-bit immediate being
generated by a 32-bit MOVZ/MOVN - this simplifies
aarch64_internal_mov_immediate and the movdi patterns even further.

Cheers,
Wilco

v3: move immediate code together and avoid forward declarations, further
cleanups and simplifications.

Improve expansion of immediates which can be created from a bitmask
immediate and 2 MOVKs. Simplify, refactor and improve efficiency of
bitmask checks and move immediate. Move various immediate handling
functions together to avoid forward declarations. Include 32-bit
MOVZ/MOVN as valid 64-bit immediates. Add a new constraint so the movdi
pattern only needs a single alternative for move immediate.

This reduces the number of 4-instruction immediates in SPECINT/FP by 10-15%.

Passes bootstrap & regress, OK for commit?

gcc/ChangeLog:

	PR target/106583
	* config/aarch64/aarch64.cc (aarch64_internal_mov_immediate):
	Add support for a bitmask immediate with 2 MOVKs.
	(aarch64_check_bitmask): New function after refactorization.
	(aarch64_replicate_bitmask_imm): Remove function, merge into...
	(aarch64_bitmask_imm): Simplify replication of small modes.
	Split function into a 64-bit only version for efficiency.
	(aarch64_zeroextended_move_imm): New function.
	(aarch64_move_imm): Refactor code.
	(aarch64_uimm12_shift): Move near other immediate functions.
	(aarch64_clamp_to_uimm12_shift): Likewise.
	(aarch64_movk_shift): Likewise.
	(aarch64_replicate_bitmask_imm): Likewise.
	(aarch64_and_split_imm1): Likewise.
(aarch64_and_split_imm2): Likewise. (aarch64_and_bitmask_imm): Likewise. (aarch64_movw_imm): Remove. * config/aarch64/aarch64.md (movdi_aarch64): Merge 'N' and 'M' constraints into single 'O'. (mov<mode>_aarch64): Likewise. * config/aarch64/aarch64-protos.h (aarch64_move_imm): Use unsigned. (aarch64_bitmask_imm): Likewise. (aarch64_uimm12_shift): Likewise. (aarch64_zeroextended_move_imm): New prototype. * config/aarch64/constraints.md: Add 'O' for 32/64-bit immediates, limit 'N' to 64-bit only moves. gcc/testsuite: PR target/106583 * gcc.target/aarch64/pr106583.c: Add new test. --- diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h index 3e4005c9f4ff1f999f1811c6fb0b2252878dc4ae..b82f9ba7c2bb4cffa16abbf45f87061f72015083 100644 --- a/gcc/config/aarch64/aarch64-protos.h +++ b/gcc/config/aarch64/aarch64-protos.h @@ -755,7 +755,7 @@ void aarch64_post_cfi_startproc (void); poly_int64 aarch64_initial_elimination_offset (unsigned, unsigned); int aarch64_get_condition_code (rtx); bool aarch64_address_valid_for_prefetch_p (rtx, bool); -bool aarch64_bitmask_imm (HOST_WIDE_INT val, machine_mode); +bool aarch64_bitmask_imm (unsigned HOST_WIDE_INT val, machine_mode); unsigned HOST_WIDE_INT aarch64_and_split_imm1 (HOST_WIDE_INT val_in); unsigned HOST_WIDE_INT aarch64_and_split_imm2 (HOST_WIDE_INT val_in); bool aarch64_and_bitmask_imm (unsigned HOST_WIDE_INT val_in, machine_mode mode); @@ -792,7 +792,7 @@ bool aarch64_masks_and_shift_for_bfi_p (scalar_int_mode, unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT); bool aarch64_zero_extend_const_eq (machine_mode, rtx, machine_mode, rtx); -bool aarch64_move_imm (HOST_WIDE_INT, machine_mode); +bool aarch64_move_imm (unsigned HOST_WIDE_INT, machine_mode); machine_mode aarch64_sve_int_mode (machine_mode); opt_machine_mode aarch64_sve_pred_mode (unsigned int); machine_mode aarch64_sve_pred_mode (machine_mode); @@ -842,8 +842,9 @@ bool aarch64_sve_float_arith_immediate_p (rtx, bool); 
bool aarch64_sve_float_mul_immediate_p (rtx); bool aarch64_split_dimode_const_store (rtx, rtx); bool aarch64_symbolic_address_p (rtx); -bool aarch64_uimm12_shift (HOST_WIDE_INT); +bool aarch64_uimm12_shift (unsigned HOST_WIDE_INT); int aarch64_movk_shift (const wide_int_ref &, const wide_int_ref &); +bool aarch64_zeroextended_move_imm (unsigned HOST_WIDE_INT); bool aarch64_use_return_insn_p (void); const char *aarch64_output_casesi (rtx *); diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc index 4de55beb067ea8f0be0a90060a785c94bdee708b..785ec07692981d423582051ac0897e5dbc3a001f 100644 --- a/gcc/config/aarch64/aarch64.cc +++ b/gcc/config/aarch64/aarch64.cc @@ -305,7 +305,6 @@ static bool aarch64_builtin_support_vector_misalignment (machine_mode mode, static machine_mode aarch64_simd_container_mode (scalar_mode, poly_int64); static bool aarch64_print_address_internal (FILE*, machine_mode, rtx, aarch64_addr_query_type); -static HOST_WIDE_INT aarch64_clamp_to_uimm12_shift (HOST_WIDE_INT val); /* The processor for which instructions should be scheduled. */ enum aarch64_processor aarch64_tune = cortexa53; @@ -5502,6 +5501,142 @@ aarch64_output_sve_vector_inc_dec (const char *operands, rtx x) factor, nelts_per_vq); } +/* Multipliers for repeating bitmasks of width 32, 16, 8, 4, and 2. */ + +static const unsigned HOST_WIDE_INT bitmask_imm_mul[] = + { + 0x0000000100000001ull, + 0x0001000100010001ull, + 0x0101010101010101ull, + 0x1111111111111111ull, + 0x5555555555555555ull, + }; + + +/* Return true if 64-bit VAL is a valid bitmask immediate. */ +static bool +aarch64_bitmask_imm (unsigned HOST_WIDE_INT val) +{ + unsigned HOST_WIDE_INT tmp, mask, first_one, next_one; + int bits; + + /* Check for a single sequence of one bits and return quickly if so. + The special cases of all ones and all zeroes returns false. 
*/ + tmp = val + (val & -val); + + if (tmp == (tmp & -tmp)) + return (val + 1) > 1; + + /* Invert if the immediate doesn't start with a zero bit - this means we + only need to search for sequences of one bits. */ + if (val & 1) + val = ~val; + + /* Find the first set bit and set tmp to val with the first sequence of one + bits removed. Return success if there is a single sequence of ones. */ + first_one = val & -val; + tmp = val & (val + first_one); + + if (tmp == 0) + return true; + + /* Find the next set bit and compute the difference in bit position. */ + next_one = tmp & -tmp; + bits = clz_hwi (first_one) - clz_hwi (next_one); + mask = val ^ tmp; + + /* Check the bit position difference is a power of 2, and that the first + sequence of one bits fits within 'bits' bits. */ + if ((mask >> bits) != 0 || bits != (bits & -bits)) + return false; + + /* Check the sequence of one bits is repeated 64/bits times. */ + return val == mask * bitmask_imm_mul[__builtin_clz (bits) - 26]; +} + + +/* Return true if VAL is a valid bitmask immediate for any mode. */ +bool +aarch64_bitmask_imm (unsigned HOST_WIDE_INT val, machine_mode mode) +{ + if (mode == DImode) + return aarch64_bitmask_imm (val); + + if (mode == SImode) + return aarch64_bitmask_imm ((val & 0xffffffff) | (val << 32)); + + /* Replicate small immediates to fit 64 bits. */ + int size = GET_MODE_UNIT_PRECISION (mode); + val &= (HOST_WIDE_INT_1U << size) - 1; + val *= bitmask_imm_mul[__builtin_clz (size) - 26]; + + return aarch64_bitmask_imm (val); +} + +/* Return true if the immediate VAL can be a bitfield immediate + by changing the given MASK bits in VAL to zeroes, ones or bits + from the other half of VAL. Return the new immediate in VAL2. 
*/ +static inline bool +aarch64_check_bitmask (unsigned HOST_WIDE_INT val, + unsigned HOST_WIDE_INT &val2, + unsigned HOST_WIDE_INT mask) +{ + val2 = val & ~mask; + if (val2 != val && aarch64_bitmask_imm (val2)) + return true; + val2 = val | mask; + if (val2 != val && aarch64_bitmask_imm (val2)) + return true; + val = val & ~mask; + val2 = val | (((val >> 32) | (val << 32)) & mask); + if (val2 != val && aarch64_bitmask_imm (val2)) + return true; + val2 = val | (((val >> 16) | (val << 48)) & mask); + if (val2 != val && aarch64_bitmask_imm (val2)) + return true; + return false; +} + +/* Return true if immediate VAL can only be created by using a 32-bit + zero-extended move immediate, not by a 64-bit move. */ +bool +aarch64_zeroextended_move_imm (unsigned HOST_WIDE_INT val) +{ + if ((val >> 16) == 0 || (val >> 32) != 0 || (val & 0xffff) == 0) + return false; + return !aarch64_bitmask_imm (val); +} + +/* Return true if VAL is an immediate that can be created by a single + MOV instruction. */ +bool +aarch64_move_imm (unsigned HOST_WIDE_INT val, machine_mode mode) +{ + unsigned HOST_WIDE_INT val2; + + if (val < 65536) + return true; + + val2 = val ^ ((HOST_WIDE_INT) val >> 63); + if ((val2 >> (__builtin_ctzll (val2) & 48)) < 65536) + return true; + + /* Special case 0xyyyyffffffffffff. */ + if (((val2 + 1) << 16) == 0) + return true; + + /* Special case immediates 0xffffyyyy and 0xyyyyffff. */ + val2 = (mode == DImode) ? val : val2; + if (((val2 + 1) & ~(unsigned HOST_WIDE_INT) 0xffff0000) == 0 + || (val2 >> 16) == 0xffff) + return true; + + if (mode == SImode || (val >> 32) == 0) + val = (val & 0xffffffff) | (val << 32); + return aarch64_bitmask_imm (val); +} + + static int aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate, scalar_int_mode mode) @@ -5520,31 +5655,6 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate, return 1; } - /* Check to see if the low 32 bits are either 0xffffXXXX or 0xXXXXffff - (with XXXX non-zero). 
In that case check to see if the move can be done in - a smaller mode. */ - val2 = val & 0xffffffff; - if (mode == DImode - && aarch64_move_imm (val2, SImode) - && (((val >> 32) & 0xffff) == 0 || (val >> 48) == 0)) - { - if (generate) - emit_insn (gen_rtx_SET (dest, GEN_INT (val2))); - - /* Check if we have to emit a second instruction by checking to see - if any of the upper 32 bits of the original DI mode value is set. */ - if (val == val2) - return 1; - - i = (val >> 48) ? 48 : 32; - - if (generate) - emit_insn (gen_insv_immdi (dest, GEN_INT (i), - GEN_INT ((val >> i) & 0xffff))); - - return 2; - } - if ((val >> 32) == 0 || mode == SImode) { if (generate) @@ -5568,26 +5678,20 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate, one_match = ((~val & mask) == 0) + ((~val & (mask << 16)) == 0) + ((~val & (mask << 32)) == 0) + ((~val & (mask << 48)) == 0); - if (zero_match != 2 && one_match != 2) + /* Try a bitmask immediate and a movk to generate the immediate + in 2 instructions. */ + if (zero_match < 2 && one_match < 2) { - /* Try emitting a bitmask immediate with a movk replacing 16 bits. - For a 64-bit bitmask try whether changing 16 bits to all ones or - zeroes creates a valid bitmask. To check any repeated bitmask, - try using 16 bits from the other 32-bit half of val. 
*/ - - for (i = 0; i < 64; i += 16, mask <<= 16) + for (i = 0; i < 64; i += 16) { - val2 = val & ~mask; - if (val2 != val && aarch64_bitmask_imm (val2, mode)) - break; - val2 = val | mask; - if (val2 != val && aarch64_bitmask_imm (val2, mode)) + if (aarch64_check_bitmask (val, val2, mask << i)) break; - val2 = val2 & ~mask; - val2 = val2 | (((val2 >> 32) | (val2 << 32)) & mask); - if (val2 != val && aarch64_bitmask_imm (val2, mode)) + + val2 = val & ~(mask << i); + if ((val2 >> 32) == 0 && aarch64_move_imm (val2, DImode)) break; } + if (i != 64) { if (generate) @@ -5600,6 +5704,25 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate, } } + /* Try a bitmask plus 2 movk to generate the immediate in 3 instructions. */ + if (zero_match + one_match == 0) + { + for (i = 0; i < 48; i += 16) + for (int j = i + 16; j < 64; j += 16) + if (aarch64_check_bitmask (val, val2, (mask << i) | (mask << j))) + { + if (generate) + { + emit_insn (gen_rtx_SET (dest, GEN_INT (val2))); + emit_insn (gen_insv_immdi (dest, GEN_INT (i), + GEN_INT ((val >> i) & 0xffff))); + emit_insn (gen_insv_immdi (dest, GEN_INT (j), + GEN_INT ((val >> j) & 0xffff))); + } + return 3; + } + } + /* Generate 2-4 instructions, skipping 16 bits of all zeroes or ones which are emitted by the initial mov. If one_match > zero_match, skip set bits, otherwise skip zero bits. */ @@ -5643,6 +5766,95 @@ aarch64_mov128_immediate (rtx imm) + aarch64_internal_mov_immediate (NULL_RTX, hi, false, DImode) <= 4; } +/* Return true if val can be encoded as a 12-bit unsigned immediate with + a left shift of 0 or 12 bits. */ +bool +aarch64_uimm12_shift (unsigned HOST_WIDE_INT val) +{ + return val < 4096 || (val & 0xfff000) == val; +} + +/* Returns the nearest value to VAL that will fit as a 12-bit unsigned immediate + that can be created with a left shift of 0 or 12. 
*/ +static HOST_WIDE_INT +aarch64_clamp_to_uimm12_shift (unsigned HOST_WIDE_INT val) +{ + /* Check to see if the value fits in 24 bits, as that is the maximum we can + handle correctly. */ + gcc_assert (val < 0x1000000); + + if (val < 4096) + return val; + + return val & 0xfff000; +} + +/* Test whether: + + X = (X & AND_VAL) | IOR_VAL; + + can be implemented using: + + MOVK X, #(IOR_VAL >> shift), LSL #shift + + Return the shift if so, otherwise return -1. */ +int +aarch64_movk_shift (const wide_int_ref &and_val, + const wide_int_ref &ior_val) +{ + unsigned int precision = and_val.get_precision (); + unsigned HOST_WIDE_INT mask = 0xffff; + for (unsigned int shift = 0; shift < precision; shift += 16) + { + if (and_val == ~mask && (ior_val & mask) == ior_val) + return shift; + mask <<= 16; + } + return -1; +} + +/* Create mask of ones, covering the lowest to highest bits set in VAL_IN. + Assumed precondition: VAL_IN Is not zero. */ + +unsigned HOST_WIDE_INT +aarch64_and_split_imm1 (HOST_WIDE_INT val_in) +{ + int lowest_bit_set = ctz_hwi (val_in); + int highest_bit_set = floor_log2 (val_in); + gcc_assert (val_in != 0); + + return ((HOST_WIDE_INT_UC (2) << highest_bit_set) - + (HOST_WIDE_INT_1U << lowest_bit_set)); +} + +/* Create constant where bits outside of lowest bit set to highest bit set + are set to 1. */ + +unsigned HOST_WIDE_INT +aarch64_and_split_imm2 (HOST_WIDE_INT val_in) +{ + return val_in | ~aarch64_and_split_imm1 (val_in); +} + +/* Return true if VAL_IN is a valid 'and' bitmask immediate. 
*/ + +bool +aarch64_and_bitmask_imm (unsigned HOST_WIDE_INT val_in, machine_mode mode) +{ + scalar_int_mode int_mode; + if (!is_a <scalar_int_mode> (mode, &int_mode)) + return false; + + if (aarch64_bitmask_imm (val_in, int_mode)) + return false; + + if (aarch64_move_imm (val_in, int_mode)) + return false; + + unsigned HOST_WIDE_INT imm2 = aarch64_and_split_imm2 (val_in); + + return aarch64_bitmask_imm (imm2, int_mode); +} /* Return the number of temporary registers that aarch64_add_offset_1 would need to add OFFSET to a register. */ @@ -10098,208 +10310,6 @@ aarch64_tls_referenced_p (rtx x) return false; } - -/* Return true if val can be encoded as a 12-bit unsigned immediate with - a left shift of 0 or 12 bits. */ -bool -aarch64_uimm12_shift (HOST_WIDE_INT val) -{ - return ((val & (((HOST_WIDE_INT) 0xfff) << 0)) == val - || (val & (((HOST_WIDE_INT) 0xfff) << 12)) == val - ); -} - -/* Returns the nearest value to VAL that will fit as a 12-bit unsigned immediate - that can be created with a left shift of 0 or 12. */ -static HOST_WIDE_INT -aarch64_clamp_to_uimm12_shift (HOST_WIDE_INT val) -{ - /* Check to see if the value fits in 24 bits, as that is the maximum we can - handle correctly. */ - gcc_assert ((val & 0xffffff) == val); - - if (((val & 0xfff) << 0) == val) - return val; - - return val & (0xfff << 12); -} - -/* Return true if val is an immediate that can be loaded into a - register by a MOVZ instruction. */ -static bool -aarch64_movw_imm (HOST_WIDE_INT val, scalar_int_mode mode) -{ - if (GET_MODE_SIZE (mode) > 4) - { - if ((val & (((HOST_WIDE_INT) 0xffff) << 32)) == val - || (val & (((HOST_WIDE_INT) 0xffff) << 48)) == val) - return 1; - } - else - { - /* Ignore sign extension. 
*/ - val &= (HOST_WIDE_INT) 0xffffffff; - } - return ((val & (((HOST_WIDE_INT) 0xffff) << 0)) == val - || (val & (((HOST_WIDE_INT) 0xffff) << 16)) == val); -} - -/* Test whether: - - X = (X & AND_VAL) | IOR_VAL; - - can be implemented using: - - MOVK X, #(IOR_VAL >> shift), LSL #shift - - Return the shift if so, otherwise return -1. */ -int -aarch64_movk_shift (const wide_int_ref &and_val, - const wide_int_ref &ior_val) -{ - unsigned int precision = and_val.get_precision (); - unsigned HOST_WIDE_INT mask = 0xffff; - for (unsigned int shift = 0; shift < precision; shift += 16) - { - if (and_val == ~mask && (ior_val & mask) == ior_val) - return shift; - mask <<= 16; - } - return -1; -} - -/* VAL is a value with the inner mode of MODE. Replicate it to fill a - 64-bit (DImode) integer. */ - -static unsigned HOST_WIDE_INT -aarch64_replicate_bitmask_imm (unsigned HOST_WIDE_INT val, machine_mode mode) -{ - unsigned int size = GET_MODE_UNIT_PRECISION (mode); - while (size < 64) - { - val &= (HOST_WIDE_INT_1U << size) - 1; - val |= val << size; - size *= 2; - } - return val; -} - -/* Multipliers for repeating bitmasks of width 32, 16, 8, 4, and 2. */ - -static const unsigned HOST_WIDE_INT bitmask_imm_mul[] = - { - 0x0000000100000001ull, - 0x0001000100010001ull, - 0x0101010101010101ull, - 0x1111111111111111ull, - 0x5555555555555555ull, - }; - - -/* Return true if val is a valid bitmask immediate. */ - -bool -aarch64_bitmask_imm (HOST_WIDE_INT val_in, machine_mode mode) -{ - unsigned HOST_WIDE_INT val, tmp, mask, first_one, next_one; - int bits; - - /* Check for a single sequence of one bits and return quickly if so. - The special cases of all ones and all zeroes returns false. */ - val = aarch64_replicate_bitmask_imm (val_in, mode); - tmp = val + (val & -val); - - if (tmp == (tmp & -tmp)) - return (val + 1) > 1; - - /* Replicate 32-bit immediates so we can treat them as 64-bit. 
*/ - if (mode == SImode) - val = (val << 32) | (val & 0xffffffff); - - /* Invert if the immediate doesn't start with a zero bit - this means we - only need to search for sequences of one bits. */ - if (val & 1) - val = ~val; - - /* Find the first set bit and set tmp to val with the first sequence of one - bits removed. Return success if there is a single sequence of ones. */ - first_one = val & -val; - tmp = val & (val + first_one); - - if (tmp == 0) - return true; - - /* Find the next set bit and compute the difference in bit position. */ - next_one = tmp & -tmp; - bits = clz_hwi (first_one) - clz_hwi (next_one); - mask = val ^ tmp; - - /* Check the bit position difference is a power of 2, and that the first - sequence of one bits fits within 'bits' bits. */ - if ((mask >> bits) != 0 || bits != (bits & -bits)) - return false; - - /* Check the sequence of one bits is repeated 64/bits times. */ - return val == mask * bitmask_imm_mul[__builtin_clz (bits) - 26]; -} - -/* Create mask of ones, covering the lowest to highest bits set in VAL_IN. - Assumed precondition: VAL_IN Is not zero. */ - -unsigned HOST_WIDE_INT -aarch64_and_split_imm1 (HOST_WIDE_INT val_in) -{ - int lowest_bit_set = ctz_hwi (val_in); - int highest_bit_set = floor_log2 (val_in); - gcc_assert (val_in != 0); - - return ((HOST_WIDE_INT_UC (2) << highest_bit_set) - - (HOST_WIDE_INT_1U << lowest_bit_set)); -} - -/* Create constant where bits outside of lowest bit set to highest bit set - are set to 1. */ - -unsigned HOST_WIDE_INT -aarch64_and_split_imm2 (HOST_WIDE_INT val_in) -{ - return val_in | ~aarch64_and_split_imm1 (val_in); -} - -/* Return true if VAL_IN is a valid 'and' bitmask immediate. 
*/ - -bool -aarch64_and_bitmask_imm (unsigned HOST_WIDE_INT val_in, machine_mode mode) -{ - scalar_int_mode int_mode; - if (!is_a <scalar_int_mode> (mode, &int_mode)) - return false; - - if (aarch64_bitmask_imm (val_in, int_mode)) - return false; - - if (aarch64_move_imm (val_in, int_mode)) - return false; - - unsigned HOST_WIDE_INT imm2 = aarch64_and_split_imm2 (val_in); - - return aarch64_bitmask_imm (imm2, int_mode); -} - -/* Return true if val is an immediate that can be loaded into a - register in a single instruction. */ -bool -aarch64_move_imm (HOST_WIDE_INT val, machine_mode mode) -{ - scalar_int_mode int_mode; - if (!is_a <scalar_int_mode> (mode, &int_mode)) - return false; - - if (aarch64_movw_imm (val, int_mode) || aarch64_movw_imm (~val, int_mode)) - return 1; - return aarch64_bitmask_imm (val, int_mode); -} - static bool aarch64_cannot_force_const_mem (machine_mode mode ATTRIBUTE_UNUSED, rtx x) { diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index 0a7633e5dd6d45282edd7a1088c14b555bc09b40..23ceca48543d23b85beea1f0bf98ef83051d80b6 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -1309,16 +1309,15 @@ (define_insn_and_split "*movsi_aarch64" ) (define_insn_and_split "*movdi_aarch64" - [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r,r, r,w, m,m, r, r, r, w,r,w, w") - (match_operand:DI 1 "aarch64_mov_operand" " r,r,k,N,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))] + [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m, r, r, r, w,r,w, w") + (match_operand:DI 1 "aarch64_mov_operand" " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))] "(register_operand (operands[0], DImode) || aarch64_reg_or_zero (operands[1], DImode))" "@ mov\\t%x0, %x1 mov\\t%0, %x1 mov\\t%x0, %1 - mov\\t%x0, %1 - mov\\t%w0, %1 + * return aarch64_zeroextended_move_imm (INTVAL (operands[1])) ? 
\"mov\\t%w0, %1\" : \"mov\\t%x0, %1\"; # * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); ldr\\t%x0, %1 @@ -1340,11 +1339,11 @@ (define_insn_and_split "*movdi_aarch64" DONE; }" ;; The "mov_imm" type for CNTD is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,mov_imm, + [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm, load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc, fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")] + (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") + (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")] ) (define_insn "insv_imm<mode>" @@ -1508,7 +1507,7 @@ (define_insn "*mov<mode>_aarch64" (define_insn "*mov<mode>_aarch64" [(set (match_operand:DFD 0 "nonimmediate_operand" "=w, w ,?r,w,w ,w ,w,m,r,m ,r,r") - (match_operand:DFD 1 "general_operand" "Y , ?rY, w,w,Ufc,Uvi,m,w,m,rY,r,N"))] + (match_operand:DFD 1 "general_operand" "Y , ?rY, w,w,Ufc,Uvi,m,w,m,rY,r,O"))] "TARGET_FLOAT && (register_operand (operands[0], <MODE>mode) || aarch64_reg_or_fp_zero (operands[1], <MODE>mode))" "@ @@ -1523,7 +1522,7 @@ (define_insn "*mov<mode>_aarch64" ldr\\t%x0, %1 str\\t%x1, %0 mov\\t%x0, %x1 - mov\\t%x0, %1" + * return aarch64_zeroextended_move_imm (INTVAL (operands[1])) ? \"mov\\t%w0, %1\" : \"mov\\t%x0, %1\";" [(set_attr "type" "neon_move,f_mcr,f_mrc,fmov,fconstd,neon_move,\ f_loadd,f_stored,load_8,store_8,mov_reg,\ fconstd") diff --git a/gcc/config/aarch64/constraints.md b/gcc/config/aarch64/constraints.md index ee7587cca1673208e2bfd6b503a21d0c8b69bf75..e91c7eab0b3674ca34ac2f790c38fcd27986c35f 100644 --- a/gcc/config/aarch64/constraints.md +++ b/gcc/config/aarch64/constraints.md @@ -106,6 +106,12 @@ (define_constraint "M" (define_constraint "N" "A constant that can be used with a 64-bit MOV immediate operation." 
+ (and (match_code "const_int") + (match_test "aarch64_move_imm (ival, DImode)") + (match_test "!aarch64_zeroextended_move_imm (ival)"))) + +(define_constraint "O" + "A constant that can be used with a 32 or 64-bit MOV immediate operation." (and (match_code "const_int") (match_test "aarch64_move_imm (ival, DImode)"))) diff --git a/gcc/testsuite/gcc.target/aarch64/pr106583.c b/gcc/testsuite/gcc.target/aarch64/pr106583.c new file mode 100644 index 0000000000000000000000000000000000000000..0f931580817d78dc1cc58f03b251bd21bec71f59 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/pr106583.c @@ -0,0 +1,41 @@ +/* { dg-do assemble } */ +/* { dg-options "-O2 --save-temps" } */ + +long f1 (void) +{ + return 0x7efefefefefefeff; +} + +long f2 (void) +{ + return 0x12345678aaaaaaaa; +} + +long f3 (void) +{ + return 0x1234cccccccc5678; +} + +long f4 (void) +{ + return 0x7777123456787777; +} + +long f5 (void) +{ + return 0x5555555512345678; +} + +long f6 (void) +{ + return 0x1234bbbb5678bbbb; +} + +long f7 (void) +{ + return 0x4444123444445678; +} + + +/* { dg-final { scan-assembler-times {\tmovk\t} 14 } } */ +/* { dg-final { scan-assembler-times {\tmov\t} 7 } } */
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc index 926e81f028c82aac9a5fecc18f921f84399c24ae..1601d11710cb6132c80a77bb4fe2f8429519aa5a 100644 --- a/gcc/config/aarch64/aarch64.cc +++ b/gcc/config/aarch64/aarch64.cc @@ -5568,7 +5568,7 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate, one_match = ((~val & mask) == 0) + ((~val & (mask << 16)) == 0) + ((~val & (mask << 32)) == 0) + ((~val & (mask << 48)) == 0); - if (zero_match != 2 && one_match != 2) + if (zero_match < 2 && one_match < 2) { /* Try emitting a bitmask immediate with a movk replacing 16 bits. For a 64-bit bitmask try whether changing 16 bits to all ones or @@ -5600,6 +5600,43 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate, } } + /* Try a bitmask plus 2 movk to generate the immediate in 3 instructions. */ + if (zero_match + one_match == 0) + { + mask = 0xffffffff; + + for (i = 0; i < 64; i += 16) + { + val2 = val & ~mask; + if (aarch64_bitmask_imm (val2, mode)) + break; + val2 = val | mask; + if (aarch64_bitmask_imm (val2, mode)) + break; + val2 = val2 & ~mask; + val2 = val2 | (((val2 >> 32) | (val2 << 32)) & mask); + if (aarch64_bitmask_imm (val2, mode)) + break; + + mask = (mask << 16) | (mask >> 48); + } + + if (i != 64) + { + if (generate) + { + emit_insn (gen_rtx_SET (dest, GEN_INT (val2))); + emit_insn (gen_insv_immdi (dest, GEN_INT (i), + GEN_INT ((val >> i) & 0xffff))); + i = (i + 16) & 63; + emit_insn (gen_insv_immdi (dest, GEN_INT (i), + GEN_INT ((val >> i) & 0xffff))); + } + + return 3; + } + } + /* Generate 2-4 instructions, skipping 16 bits of all zeroes or ones which are emitted by the initial mov. If one_match > zero_match, skip set bits, otherwise skip zero bits. 
*/ diff --git a/gcc/testsuite/gcc.target/aarch64/pr106583.c b/gcc/testsuite/gcc.target/aarch64/pr106583.c new file mode 100644 index 0000000000000000000000000000000000000000..f0a027a0950e506d4ddaacce5e151f57070948dc --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/pr106583.c @@ -0,0 +1,30 @@ +/* { dg-do assemble } */ +/* { dg-options "-O2 --save-temps" } */ + +long f1 (void) +{ + return 0x7efefefefefefeff; +} + +long f2 (void) +{ + return 0x12345678aaaaaaaa; +} + +long f3 (void) +{ + return 0x1234cccccccc5678; +} + +long f4 (void) +{ + return 0x7777123456787777; +} + +long f5 (void) +{ + return 0x5555555512345678; +} + +/* { dg-final { scan-assembler-times {\tmovk\t} 10 } } */ +/* { dg-final { scan-assembler-times {\tmov\t} 5 } } */