From patchwork Thu Dec 7 14:43:28 2023
X-Patchwork-Submitter: Alex Coplan
X-Patchwork-Id: 81673
X-Patchwork-Delegate: rsandifo@gcc.gnu.org
Date: Thu, 7 Dec 2023 14:43:28 +0000
From: Alex Coplan
To: gcc-patches@gcc.gnu.org
Cc: Richard Sandiford, Kyrylo Tkachov
Subject: [PATCH v3 08/11] aarch64: Generalize writeback ldp/stp patterns
Hi,

This is a v3 patch which is rebased on top of the SME changes.
Otherwise it is the same as v2, posted here:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/639367.html

Bootstrapped/regtested as a series on aarch64-linux-gnu, OK for trunk?

Thanks,
Alex

-- >8 --

Thus far the writeback forms of ldp/stp have been exclusively used in
prologue and epilogue code for saving/restoring of registers to/from
the stack.  As such, forms of ldp/stp that weren't needed for
prologue/epilogue code weren't supported by the aarch64 backend.
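To make the writeback forms concrete before the patch itself: a pre-index
store pair writes both registers and decrements the base in one instruction
(e.g. "stp x19, x20, [sp, #-16]!", the prologue idiom), while a post-index
load pair reads both registers and then increments the base
(e.g. "ldp x19, x20, [sp], #16", the epilogue idiom).  The snippet below is
a minimal illustrative sketch, not part of the patch, of how such a
post-index pair is represented as a three-SET PARALLEL; it assumes GCC's
internal RTL API (rtl.h/emit-rtl.h) and uses the same helpers as the
rewritten aarch64_gen_loadwb_pair below.  The DImode registers and the
16-byte adjustment are arbitrary example values.

/* Post-index (epilogue style):  ldp x19, x20, [sp], #16
   Pre-index  (prologue style):  stp x19, x20, [sp, #-16]!  */

static rtx
sketch_gen_loadwb_pair (rtx base, rtx reg1, rtx reg2,
                        HOST_WIDE_INT adjustment)
{
  /* The two loaded values come from [base] and [base + 8] ...  */
  rtx mem1 = gen_frame_mem (DImode, base);
  rtx mem2 = adjust_address_nv (mem1, DImode, GET_MODE_SIZE (DImode));

  /* ... and the base register is advanced past both of them.  */
  rtx new_base = plus_constant (Pmode, base, adjustment);

  return gen_rtx_PARALLEL (VOIDmode,
                           gen_rtvec (3,
                                      gen_rtx_SET (base, new_base),
                                      gen_rtx_SET (reg1, mem1),
                                      gen_rtx_SET (reg2, mem2)));
}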
This patch generalizes the load/store pair writeback patterns to allow:

 - Base registers other than the stack pointer.
 - Modes that weren't previously supported.
 - Combinations of distinct modes provided they have the same size.
 - Pre/post variants that weren't previously needed in
   prologue/epilogue code.

We make quite some effort to avoid a combinatorial explosion in the
number of patterns generated (and those in the source) by making
extensive use of special predicates.

An updated version of the upcoming ldp/stp pass can generate the
writeback forms, so this patch is motivated by that.

This patch doesn't add zero-extending or sign-extending forms of the
writeback patterns; that is left for future work.

gcc/ChangeLog:

	* config/aarch64/aarch64-protos.h (aarch64_ldpstp_operand_mode_p):
	Declare.
	* config/aarch64/aarch64.cc (aarch64_gen_storewb_pair): Build RTL
	directly instead of invoking named pattern.
	(aarch64_gen_loadwb_pair): Likewise.
	(aarch64_ldpstp_operand_mode_p): New.
	* config/aarch64/aarch64.md (loadwb_pair_): Replace with ...
	(*loadwb_post_pair_): ... this.  Generalize as described in
	cover letter.
	(loadwb_pair_): Delete (superseded by the above).
	(*loadwb_post_pair_16): New.
	(*loadwb_pre_pair_): New.
	(loadwb_pair_): Delete.
	(*loadwb_pre_pair_16): New.
	(storewb_pair_): Replace with ...
	(*storewb_pre_pair_): ... this.  Generalize as described in
	cover letter.
	(*storewb_pre_pair_16): New.
	(storewb_pair_): Delete.
	(*storewb_post_pair_): New.
	(storewb_pair_): Delete.
	(*storewb_post_pair_16): New.
	* config/aarch64/predicates.md (aarch64_mem_pair_operator): New.
	(pmode_plus_operator): New.
	(aarch64_ldp_reg_operand): New.
	(aarch64_stp_reg_operand): New.

diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index 42f7bfad5cb..ee0f0a18541 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -1041,6 +1041,7 @@ bool aarch64_operands_ok_for_ldpstp (rtx *, bool, machine_mode);
 bool aarch64_operands_adjust_ok_for_ldpstp (rtx *, bool, machine_mode);
 bool aarch64_mem_ok_with_ldpstp_policy_model (rtx, bool, machine_mode);
 void aarch64_swap_ldrstr_operands (rtx *, bool);
+bool aarch64_ldpstp_operand_mode_p (machine_mode);
 
 extern void aarch64_asm_output_pool_epilogue (FILE *, const char *,
                                               tree, HOST_WIDE_INT);
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index d870973dcd6..baa2b6ca3f7 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -8097,23 +8097,15 @@ static rtx
 aarch64_gen_storewb_pair (machine_mode mode, rtx base, rtx reg, rtx reg2,
                          HOST_WIDE_INT adjustment)
 {
-  switch (mode)
-    {
-    case E_DImode:
-      return gen_storewb_pairdi_di (base, base, reg, reg2,
-                                    GEN_INT (-adjustment),
-                                    GEN_INT (UNITS_PER_WORD - adjustment));
-    case E_DFmode:
-      return gen_storewb_pairdf_di (base, base, reg, reg2,
-                                    GEN_INT (-adjustment),
-                                    GEN_INT (UNITS_PER_WORD - adjustment));
-    case E_TFmode:
-      return gen_storewb_pairtf_di (base, base, reg, reg2,
-                                    GEN_INT (-adjustment),
-                                    GEN_INT (UNITS_PER_VREG - adjustment));
-    default:
-      gcc_unreachable ();
-    }
+  rtx new_base = plus_constant (Pmode, base, -adjustment);
+  rtx mem = gen_frame_mem (mode, new_base);
+  rtx mem2 = adjust_address_nv (mem, mode, GET_MODE_SIZE (mode));
+
+  return gen_rtx_PARALLEL (VOIDmode,
+                           gen_rtvec (3,
+                                      gen_rtx_SET (base, new_base),
+                                      gen_rtx_SET (mem, reg),
+                                      gen_rtx_SET (mem2, reg2)));
 }
 
 /* Push registers numbered REGNO1 and REGNO2 to the stack, adjusting the
@@ -8145,20 +8137,15 @@ static rtx
 aarch64_gen_loadwb_pair
(machine_mode mode, rtx base, rtx reg, rtx reg2, HOST_WIDE_INT adjustment) { - switch (mode) - { - case E_DImode: - return gen_loadwb_pairdi_di (base, base, reg, reg2, GEN_INT (adjustment), - GEN_INT (UNITS_PER_WORD)); - case E_DFmode: - return gen_loadwb_pairdf_di (base, base, reg, reg2, GEN_INT (adjustment), - GEN_INT (UNITS_PER_WORD)); - case E_TFmode: - return gen_loadwb_pairtf_di (base, base, reg, reg2, GEN_INT (adjustment), - GEN_INT (UNITS_PER_VREG)); - default: - gcc_unreachable (); - } + rtx mem = gen_frame_mem (mode, base); + rtx mem2 = adjust_address_nv (mem, mode, GET_MODE_SIZE (mode)); + rtx new_base = plus_constant (Pmode, base, adjustment); + + return gen_rtx_PARALLEL (VOIDmode, + gen_rtvec (3, + gen_rtx_SET (base, new_base), + gen_rtx_SET (reg, mem), + gen_rtx_SET (reg2, mem2))); } /* Pop the two registers numbered REGNO1, REGNO2 from the stack, adjusting it @@ -26685,6 +26672,20 @@ aarch64_check_consecutive_mems (rtx *mem1, rtx *mem2, bool *reversed) return false; } +/* Test if MODE is suitable for a single transfer register in an ldp or stp + instruction. */ + +bool +aarch64_ldpstp_operand_mode_p (machine_mode mode) +{ + if (!targetm.hard_regno_mode_ok (V0_REGNUM, mode) + || hard_regno_nregs (V0_REGNUM, mode) > 1) + return false; + + const auto size = GET_MODE_SIZE (mode); + return known_eq (size, 4) || known_eq (size, 8) || known_eq (size, 16); +} + /* Return true if MEM1 and MEM2 can be combined into a single access of mode MODE, with the combined access having the same address as MEM1. */ diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index a6d5e8c2a1a..f87cddf8f4b 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -1919,102 +1919,208 @@ (define_insn "store_pair_dw_" (set_attr "fp" "yes")] ) +;; Writeback load/store pair patterns. +;; +;; Note that modes in the patterns [SI DI TI] are used only as a proxy for their +;; size; aarch64_ldp_reg_operand and aarch64_mem_pair_operator are special +;; predicates which accept a wide range of operand modes, with the requirement +;; that the contextual (pattern) mode is of the same size as the operand mode. + ;; Load pair with post-index writeback. This is primarily used in function ;; epilogues. 
-(define_insn "loadwb_pair_" - [(parallel - [(set (match_operand:P 0 "register_operand" "=k") - (plus:P (match_operand:P 1 "register_operand" "0") - (match_operand:P 4 "aarch64_mem_pair_offset" "n"))) - (set (match_operand:GPI 2 "register_operand" "=r") - (mem:GPI (match_dup 1))) - (set (match_operand:GPI 3 "register_operand" "=r") - (mem:GPI (plus:P (match_dup 1) - (match_operand:P 5 "const_int_operand" "n"))))])] - "INTVAL (operands[5]) == GET_MODE_SIZE (mode)" - "ldp\\t%2, %3, [%1], %4" - [(set_attr "type" "load_")] -) - -(define_insn "loadwb_pair_" - [(parallel - [(set (match_operand:P 0 "register_operand" "=k") - (plus:P (match_operand:P 1 "register_operand" "0") - (match_operand:P 4 "aarch64_mem_pair_offset" "n"))) - (set (match_operand:GPF 2 "register_operand" "=w") - (mem:GPF (match_dup 1))) - (set (match_operand:GPF 3 "register_operand" "=w") - (mem:GPF (plus:P (match_dup 1) - (match_operand:P 5 "const_int_operand" "n"))))])] - "INTVAL (operands[5]) == GET_MODE_SIZE (mode)" - "ldp\\t%2, %3, [%1], %4" - [(set_attr "type" "neon_load1_2reg")] -) - -(define_insn "loadwb_pair_" - [(parallel - [(set (match_operand:P 0 "register_operand" "=k") - (plus:P (match_operand:P 1 "register_operand" "0") - (match_operand:P 4 "aarch64_mem_pair_offset" "n"))) - (set (match_operand:TX 2 "register_operand" "=w") - (mem:TX (match_dup 1))) - (set (match_operand:TX 3 "register_operand" "=w") - (mem:TX (plus:P (match_dup 1) - (match_operand:P 5 "const_int_operand" "n"))))])] - "TARGET_BASE_SIMD && INTVAL (operands[5]) == GET_MODE_SIZE (mode)" - "ldp\\t%q2, %q3, [%1], %4" +(define_insn "*loadwb_post_pair_" + [(set (match_operand 0 "pmode_register_operand") + (match_operator 7 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand") + (match_operand 4 "const_int_operand")])) + (set (match_operand:GPI 2 "aarch64_ldp_reg_operand") + (match_operator 5 "memory_operand" [(match_dup 1)])) + (set (match_operand:GPI 3 "aarch64_ldp_reg_operand") + (match_operator 6 "memory_operand" [ + (match_operator 8 "pmode_plus_operator" [ + (match_dup 1) + (const_int )])]))] + "aarch64_mem_pair_offset (operands[4], mode)" + {@ [cons: =0, 1, =2, =3; attrs: type] + [ rk, 0, r, r; load_] ldp\t%2, %3, [%1], %4 + [ rk, 0, w, w; neon_load1_2reg ] ldp\t%2, %3, [%1], %4 + } +) + +;; q-register variant of the above +(define_insn "*loadwb_post_pair_16" + [(set (match_operand 0 "pmode_register_operand" "=rk") + (match_operator 7 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand" "0") + (match_operand 4 "const_int_operand")])) + (set (match_operand:TI 2 "aarch64_ldp_reg_operand" "=w") + (match_operator 5 "memory_operand" [(match_dup 1)])) + (set (match_operand:TI 3 "aarch64_ldp_reg_operand" "=w") + (match_operator 6 "memory_operand" + [(match_operator 8 "pmode_plus_operator" [ + (match_dup 1) + (const_int 16)])]))] + "TARGET_FLOAT + && aarch64_mem_pair_offset (operands[4], TImode)" + "ldp\t%q2, %q3, [%1], %4" + [(set_attr "type" "neon_ldp_q")] +) + +;; Load pair with pre-index writeback. 
+(define_insn "*loadwb_pre_pair_" + [(set (match_operand 0 "pmode_register_operand") + (match_operator 8 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand") + (match_operand 4 "const_int_operand")])) + (set (match_operand:GPI 2 "aarch64_ldp_reg_operand") + (match_operator 6 "memory_operand" [ + (match_operator 9 "pmode_plus_operator" [ + (match_dup 1) + (match_dup 4) + ])])) + (set (match_operand:GPI 3 "aarch64_ldp_reg_operand") + (match_operator 7 "memory_operand" [ + (match_operator 10 "pmode_plus_operator" [ + (match_dup 1) + (match_operand 5 "const_int_operand") + ])]))] + "aarch64_mem_pair_offset (operands[4], mode) + && known_eq (INTVAL (operands[5]), + INTVAL (operands[4]) + GET_MODE_SIZE (mode))" + {@ [cons: =&0, 1, =2, =3; attrs: type ] + [ rk, 0, r, r; load_] ldp\t%2, %3, [%0, %4]! + [ rk, 0, w, w; neon_load1_2reg ] ldp\t%2, %3, [%0, %4]! + } +) + +;; q-register variant of the above +(define_insn "*loadwb_pre_pair_16" + [(set (match_operand 0 "pmode_register_operand" "=&rk") + (match_operator 8 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand" "0") + (match_operand 4 "const_int_operand")])) + (set (match_operand:TI 2 "aarch64_ldp_reg_operand" "=w") + (match_operator 6 "memory_operand" [ + (match_operator 10 "pmode_plus_operator" [ + (match_dup 1) + (match_dup 4) + ])])) + (set (match_operand:TI 3 "aarch64_ldp_reg_operand" "=w") + (match_operator 7 "memory_operand" [ + (match_operator 9 "pmode_plus_operator" [ + (match_dup 1) + (match_operand 5 "const_int_operand") + ])]))] + "TARGET_FLOAT + && aarch64_mem_pair_offset (operands[4], TImode) + && known_eq (INTVAL (operands[5]), INTVAL (operands[4]) + 16)" + "ldp\t%q2, %q3, [%0, %4]!" [(set_attr "type" "neon_ldp_q")] ) ;; Store pair with pre-index writeback. This is primarily used in function ;; prologues. -(define_insn "storewb_pair_" - [(parallel - [(set (match_operand:P 0 "register_operand" "=&k") - (plus:P (match_operand:P 1 "register_operand" "0") - (match_operand:P 4 "aarch64_mem_pair_offset" "n"))) - (set (mem:GPI (plus:P (match_dup 0) - (match_dup 4))) - (match_operand:GPI 2 "register_operand" "r")) - (set (mem:GPI (plus:P (match_dup 0) - (match_operand:P 5 "const_int_operand" "n"))) - (match_operand:GPI 3 "register_operand" "r"))])] - "INTVAL (operands[5]) == INTVAL (operands[4]) + GET_MODE_SIZE (mode)" - "stp\\t%2, %3, [%0, %4]!" - [(set_attr "type" "store_")] +(define_insn "*storewb_pre_pair_" + [(set (match_operand 0 "pmode_register_operand") + (match_operator 6 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand") + (match_operand 4 "const_int_operand") + ])) + (set (match_operator:GPI 7 "aarch64_mem_pair_operator" [ + (match_operator 8 "pmode_plus_operator" [ + (match_dup 0) + (match_dup 4) + ])]) + (match_operand:GPI 2 "aarch64_stp_reg_operand")) + (set (match_operator:GPI 9 "aarch64_mem_pair_operator" [ + (match_operator 10 "pmode_plus_operator" [ + (match_dup 0) + (match_operand 5 "const_int_operand") + ])]) + (match_operand:GPI 3 "aarch64_stp_reg_operand"))] + "aarch64_mem_pair_offset (operands[4], mode) + && known_eq (INTVAL (operands[5]), + INTVAL (operands[4]) + GET_MODE_SIZE (mode)) + && !reg_overlap_mentioned_p (operands[0], operands[2]) + && !reg_overlap_mentioned_p (operands[0], operands[3])" + {@ [cons: =&0, 1, 2, 3; attrs: type ] + [ rk, 0, rYZ, rYZ; store_] stp\t%2, %3, [%0, %4]! + [ rk, 0, w, w; neon_store1_2reg ] stp\t%2, %3, [%0, %4]! + } +) + +;; q-register variant of the above. 
+(define_insn "*storewb_pre_pair_16" + [(set (match_operand 0 "pmode_register_operand" "=&rk") + (match_operator 6 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand" "0") + (match_operand 4 "const_int_operand") + ])) + (set (match_operator:TI 7 "aarch64_mem_pair_operator" [ + (match_operator 8 "pmode_plus_operator" [ + (match_dup 0) + (match_dup 4) + ])]) + (match_operand:TI 2 "aarch64_ldp_reg_operand" "w")) + (set (match_operator:TI 9 "aarch64_mem_pair_operator" [ + (match_operator 10 "pmode_plus_operator" [ + (match_dup 0) + (match_operand 5 "const_int_operand") + ])]) + (match_operand:TI 3 "aarch64_ldp_reg_operand" "w"))] + "TARGET_FLOAT + && aarch64_mem_pair_offset (operands[4], TImode) + && known_eq (INTVAL (operands[5]), INTVAL (operands[4]) + 16) + && !reg_overlap_mentioned_p (operands[0], operands[2]) + && !reg_overlap_mentioned_p (operands[0], operands[3])" + "stp\\t%q2, %q3, [%0, %4]!" + [(set_attr "type" "neon_stp_q")] ) -(define_insn "storewb_pair_" - [(parallel - [(set (match_operand:P 0 "register_operand" "=&k") - (plus:P (match_operand:P 1 "register_operand" "0") - (match_operand:P 4 "aarch64_mem_pair_offset" "n"))) - (set (mem:GPF (plus:P (match_dup 0) - (match_dup 4))) - (match_operand:GPF 2 "register_operand" "w")) - (set (mem:GPF (plus:P (match_dup 0) - (match_operand:P 5 "const_int_operand" "n"))) - (match_operand:GPF 3 "register_operand" "w"))])] - "INTVAL (operands[5]) == INTVAL (operands[4]) + GET_MODE_SIZE (mode)" - "stp\\t%2, %3, [%0, %4]!" - [(set_attr "type" "neon_store1_2reg")] -) - -(define_insn "storewb_pair_" - [(parallel - [(set (match_operand:P 0 "register_operand" "=&k") - (plus:P (match_operand:P 1 "register_operand" "0") - (match_operand:P 4 "aarch64_mem_pair_offset" "n"))) - (set (mem:TX (plus:P (match_dup 0) - (match_dup 4))) - (match_operand:TX 2 "register_operand" "w")) - (set (mem:TX (plus:P (match_dup 0) - (match_operand:P 5 "const_int_operand" "n"))) - (match_operand:TX 3 "register_operand" "w"))])] - "TARGET_BASE_SIMD - && INTVAL (operands[5]) - == INTVAL (operands[4]) + GET_MODE_SIZE (mode)" - "stp\\t%q2, %q3, [%0, %4]!" +;; Store pair with post-index writeback. +(define_insn "*storewb_post_pair_" + [(set (match_operand 0 "pmode_register_operand") + (match_operator 5 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand") + (match_operand 4 "const_int_operand") + ])) + (set (match_operator:GPI 6 "aarch64_mem_pair_operator" [(match_dup 1)]) + (match_operand 2 "aarch64_stp_reg_operand")) + (set (match_operator:GPI 7 "aarch64_mem_pair_operator" [ + (match_operator 8 "pmode_plus_operator" [ + (match_dup 0) + (const_int ) + ])]) + (match_operand 3 "aarch64_stp_reg_operand"))] + "aarch64_mem_pair_offset (operands[4], mode) + && !reg_overlap_mentioned_p (operands[0], operands[2]) + && !reg_overlap_mentioned_p (operands[0], operands[3])" + {@ [cons: =0, 1, 2, 3; attrs: type ] + [ rk, 0, rYZ, rYZ; store_] stp\t%2, %3, [%0], %4 + [ rk, 0, w, w; neon_store1_2reg ] stp\t%2, %3, [%0], %4 + } +) + +;; Store pair with post-index writeback. 
+(define_insn "*storewb_post_pair_16" + [(set (match_operand 0 "pmode_register_operand" "=rk") + (match_operator 5 "pmode_plus_operator" [ + (match_operand 1 "pmode_register_operand" "0") + (match_operand 4 "const_int_operand") + ])) + (set (match_operator:TI 6 "aarch64_mem_pair_operator" [(match_dup 1)]) + (match_operand:TI 2 "aarch64_ldp_reg_operand" "w")) + (set (match_operator:TI 7 "aarch64_mem_pair_operator" [ + (match_operator 8 "pmode_plus_operator" [ + (match_dup 0) + (const_int 16) + ])]) + (match_operand:TI 3 "aarch64_ldp_reg_operand" "w"))] + "TARGET_FLOAT + && aarch64_mem_pair_offset (operands[4], TImode) + && !reg_overlap_mentioned_p (operands[0], operands[2]) + && !reg_overlap_mentioned_p (operands[0], operands[3])" + "stp\t%q2, %q3, [%0], %4" [(set_attr "type" "neon_stp_q")] ) diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md index 9af28103a74..698a68a6311 100644 --- a/gcc/config/aarch64/predicates.md +++ b/gcc/config/aarch64/predicates.md @@ -291,11 +291,46 @@ (define_predicate "aarch64_mem_pair_offset" (and (match_code "const_int") (match_test "aarch64_offset_7bit_signed_scaled_p (mode, INTVAL (op))"))) +(define_special_predicate "aarch64_mem_pair_operator" + (and + (match_code "mem") + (match_test "aarch64_ldpstp_operand_mode_p (GET_MODE (op))") + (ior + (match_test "mode == VOIDmode") + (match_test "known_eq (GET_MODE_SIZE (mode), + GET_MODE_SIZE (GET_MODE (op)))")))) + (define_predicate "aarch64_mem_pair_operand" (and (match_code "mem") (match_test "aarch64_legitimate_address_p (mode, XEXP (op, 0), false, ADDR_QUERY_LDP_STP)"))) +(define_predicate "pmode_plus_operator" + (and (match_code "plus") + (match_test "GET_MODE (op) == Pmode"))) + +(define_special_predicate "aarch64_ldp_reg_operand" + (and + (match_code "reg,subreg") + (match_test "aarch64_ldpstp_operand_mode_p (GET_MODE (op))") + (ior + (match_test "mode == VOIDmode") + (match_test "known_eq (GET_MODE_SIZE (mode), + GET_MODE_SIZE (GET_MODE (op)))")))) + +(define_special_predicate "aarch64_stp_reg_operand" + (ior (match_operand 0 "aarch64_ldp_reg_operand") + (and (match_code "const_int,const,const_vector,const_double") + (match_test "aarch64_const_zero_rtx_p (op)")) + (ior + (match_test "GET_MODE (op) == VOIDmode") + (and + (match_test "aarch64_ldpstp_operand_mode_p (GET_MODE (op))") + (ior + (match_test "mode == VOIDmode") + (match_test "known_eq (GET_MODE_SIZE (mode), + GET_MODE_SIZE (GET_MODE (op)))")))))) + ;; Used for storing two 64-bit values in an AdvSIMD register using an STP ;; as a 128-bit vec_concat. 
(define_predicate "aarch64_mem_pair_lanes_operand"

From patchwork Thu Dec 7 14:45:44 2023
X-Patchwork-Submitter: Alex Coplan
X-Patchwork-Id: 81674
X-Patchwork-Delegate: rsandifo@gcc.gnu.org
Date: Thu, 7 Dec 2023 14:45:44 +0000
From: Alex Coplan
To: gcc-patches@gcc.gnu.org
Cc: Richard Sandiford, Kyrylo Tkachov
Subject: [PATCH v3 09/11] aarch64: Rewrite non-writeback ldp/stp patterns
Hi,

This is a v3, rebased on top of the SME changes.  v2 is here:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/639361.html

Bootstrapped/regtested as a series on aarch64-linux-gnu, OK for trunk?

Thanks,
Alex

-- >8 --

This patch overhauls the load/store pair patterns with two main goals:

1. Fixing a correctness issue (the current patterns are not
   RA-friendly).

2. Allowing more flexibility in which operand modes are supported, and
   which combinations of modes are allowed in the two arms of the
   load/store pair, while reducing the number of patterns required both
   in the source and in the generated code.
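To make goal 1 concrete before the detailed explanation below: the existing
non-writeback pair patterns carry two separate memory operands and tie them
together only in the insn condition, roughly as in the following sketch.
The helper name here is invented for exposition; the rtx_equal_p check
itself is the one used by the load_pair/vec_store_pair patterns deleted
later in this patch.

/* Sketch (not from the patch): the invariant the old pair patterns rely
   on is that the second mem addresses the first mem's address plus the
   size of one transfer register.  */

static bool
sketch_mems_form_pair_p (rtx mem1, rtx mem2, machine_mode mode)
{
  return rtx_equal_p (XEXP (mem2, 0),
                      plus_constant (Pmode,
                                     XEXP (mem1, 0),
                                     GET_MODE_SIZE (mode)));
}

/* Because this invariant lives only in the insn condition and not in the
   operand constraints, LRA can reload one of the two mems without touching
   the other, after which the insn no longer matches any pattern.  */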
The correctness issue (1) is due to the fact that the current patterns have
two independent memory operands tied together only by a predicate on the
insns.  Since LRA only looks at the constraints, one of the memory operands
can get reloaded without the other one being changed, leading to the insn
becoming unrecognizable after reload.

We fix this issue by changing the patterns such that they only ever have
one memory operand representing the entire pair.  For the store case, we
use an unspec to logically concatenate the register operands before storing
them.  For the load case, we use unspecs to extract the "lanes" from the
pair mem, with the second occurrence of the mem matched using a match_dup
(such that there is still really only one memory operand as far as the RA
is concerned).  (A short sketch of this representation is given after the
ChangeLog below.)

In terms of the modes used for the pair memory operands, we canonicalize
these to V2x4QImode, V2x8QImode, and V2x16QImode.  These modes have not
only the correct size but also the correct alignment requirement for a
memory operand representing an entire load/store pair.  Unlike the other
two, V2x4QImode didn't previously exist, so it had to be added with the
patch.

As with the previous patch generalizing the writeback patterns, this patch
aims to be flexible in the combinations of modes supported by the patterns
without requiring a large number of generated patterns by using distinct
mode iterators.

The new scheme means we only need a single (generated) pattern for each
load/store operation of a given operand size.  For the 4-byte and 8-byte
operand cases, we use the GPI iterator to synthesize the two patterns.  The
16-byte case is implemented as a separate pattern in the source (due to
only having a single possible alternative).

Since the UNSPEC patterns can't be interpreted by the dwarf2cfi code, we
add REG_CFA_OFFSET notes to the store pair insns emitted by
aarch64_save_callee_saves, so that correct CFI information can still be
generated.  Furthermore, we now unconditionally generate these CFA notes on
frame-related insns emitted by aarch64_save_callee_saves.  This is done in
case the load/store pair pass later forms these saves into pairs, in which
case the CFA notes would be needed.

We also adjust the ldp/stp peepholes to generate the new form.  This is
done by switching the generation to use the aarch64_gen_{load,store}_pair
interface, making it easier to change the form in the future if needed.
(Likewise, the upcoming aarch64 load/store pair pass also makes use of this
interface.)

This patch also adds an "ldpstp" attribute to the non-writeback load/store
pair patterns, which is used by the post-RA load/store pair pass to
identify existing patterns and see if they can be promoted to writeback
variants.

One potential concern with using unspecs for the patterns is that it can
block optimization by the generic RTL passes.  This patch series tries to
mitigate this in two ways:

1. The pre-RA load/store pair pass runs very late in the pre-RA pipeline.

2. A later patch in the series adjusts the aarch64 mem{cpy,set} expansion
   to emit individual loads/stores instead of ldp/stp.  These should then
   be formed back into load/store pairs much later in the RTL pipeline by
   the new load/store pair pass.

gcc/ChangeLog:

	* config/aarch64/aarch64-ldpstp.md: Abstract ldp/stp representation
	from peepholes, allowing use of new form.
	* config/aarch64/aarch64-modes.def (V2x4QImode): Define.
	* config/aarch64/aarch64-protos.h (aarch64_finish_ldpstp_peephole):
	Declare.
	(aarch64_swap_ldrstr_operands): Delete declaration.
	(aarch64_gen_load_pair): Adjust parameters.
	(aarch64_gen_store_pair): Likewise.
	* config/aarch64/aarch64-simd.md (load_pair): Delete.
	(vec_store_pair): Delete.
	(load_pair): Delete.
	(vec_store_pair): Delete.
	* config/aarch64/aarch64.cc
	(aarch64_sme_mode_switch_regs::emit_mem_128_moves): Use
	aarch64_gen_{load,store}_pair instead of emitting parallel directly.
	(aarch64_gen_store_pair): Adjust to use new unspec form of stp.
	Drop second mem from parameters.
	(aarch64_gen_load_pair): Likewise.
	(aarch64_pair_mode_for_mode): New.
	(aarch64_pair_mem_from_base): New.
	(aarch64_save_callee_saves): Emit REG_CFA_OFFSET notes for
	frame-related saves.  Adjust call to aarch64_gen_store_pair.
	(aarch64_restore_callee_saves): Adjust calls to
	aarch64_gen_load_pair to account for change in interface.
	(aarch64_process_components): Likewise.
	(aarch64_classify_address): Handle 32-byte pair mems in LDP_STP_N
	case.
	(aarch64_print_operand): Likewise.
	(aarch64_init_tpidr2_block): Use aarch64_gen_store_pair to emit stp.
	(aarch64_copy_one_block_and_progress_pointers): Adjust calls to
	account for change in aarch64_gen_{load,store}_pair interface.
	(aarch64_set_one_block_and_progress_pointer): Likewise.
	(aarch64_finish_ldpstp_peephole): New.
	(aarch64_gen_adjusted_ldpstp): Adjust to use generation helper.
	* config/aarch64/aarch64.md (ldpstp): New attribute.
	(load_pair_sw_): Delete.
	(load_pair_dw_): Delete.
	(load_pair_dw_): Delete.
	(*load_pair_): New.
	(*load_pair_16): New.
	(store_pair_sw_): Delete.
	(store_pair_dw_): Delete.
	(store_pair_dw_): Delete.
	(*store_pair_): New.
	(*store_pair_16): New.
	(*load_pair_extendsidi2_aarch64): Adjust to use new form.
	(*zero_extendsidi2_aarch64): Likewise.
	* config/aarch64/iterators.md (VPAIR): New.
	* config/aarch64/predicates.md (aarch64_mem_pair_operand): Change to
	a special predicate derived from aarch64_mem_pair_operator.
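For readers who want the shape of the new representation before reading the
diff, here is a simplified restatement (a sketch, not additional code in
the patch) of how the unspec-based store pair is built.  It mirrors the
updated aarch64_gen_store_pair further down: base_mem is the mem for the
first transfer register, and the pair mem uses one of the V2x*QI modes
described above; UNSPEC_STP is the unspec added for this purpose.

static rtx
sketch_gen_store_pair (rtx base_mem, rtx reg1, rtx reg2)
{
  /* Reinterpret the first slot's mem as a single mem covering the whole
     pair, as aarch64_pair_mem_from_base does in the patch.  */
  rtx pair_mem = aarch64_pair_mem_from_base (base_mem);

  /* (set (mem:V2x8QI ...) (unspec:V2x8QI [reg1 reg2] UNSPEC_STP))
     i.e. one store of both registers, so the RA only ever sees a single
     memory operand for the pair.  */
  return gen_rtx_SET (pair_mem,
                      gen_rtx_UNSPEC (GET_MODE (pair_mem),
                                      gen_rtvec (2, reg1, reg2),
                                      UNSPEC_STP));
}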
diff --git a/gcc/config/aarch64/aarch64-ldpstp.md b/gcc/config/aarch64/aarch64-ldpstp.md index 1ee7c73ff0c..dc39af85254 100644 --- a/gcc/config/aarch64/aarch64-ldpstp.md +++ b/gcc/config/aarch64/aarch64-ldpstp.md @@ -24,10 +24,10 @@ (define_peephole2 (set (match_operand:GPI 2 "register_operand" "") (match_operand:GPI 3 "memory_operand" ""))] "aarch64_operands_ok_for_ldpstp (operands, true, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, true); + aarch64_finish_ldpstp_peephole (operands, true); + DONE; }) (define_peephole2 @@ -36,10 +36,10 @@ (define_peephole2 (set (match_operand:GPI 2 "memory_operand" "") (match_operand:GPI 3 "aarch64_reg_or_zero" ""))] "aarch64_operands_ok_for_ldpstp (operands, false, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, false); + aarch64_finish_ldpstp_peephole (operands, false); + DONE; }) (define_peephole2 @@ -48,10 +48,10 @@ (define_peephole2 (set (match_operand:GPF 2 "register_operand" "") (match_operand:GPF 3 "memory_operand" ""))] "aarch64_operands_ok_for_ldpstp (operands, true, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, true); + aarch64_finish_ldpstp_peephole (operands, true); + DONE; }) (define_peephole2 @@ -60,10 +60,10 @@ (define_peephole2 (set (match_operand:GPF 2 "memory_operand" "") (match_operand:GPF 3 "aarch64_reg_or_fp_zero" ""))] "aarch64_operands_ok_for_ldpstp (operands, false, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, false); + aarch64_finish_ldpstp_peephole (operands, false); + DONE; }) (define_peephole2 @@ -72,10 +72,10 @@ (define_peephole2 (set (match_operand:DREG2 2 "register_operand" "") (match_operand:DREG2 3 "memory_operand" ""))] "aarch64_operands_ok_for_ldpstp (operands, true, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, true); + aarch64_finish_ldpstp_peephole (operands, true); + DONE; }) (define_peephole2 @@ -84,10 +84,10 @@ (define_peephole2 (set (match_operand:DREG2 2 "memory_operand" "") (match_operand:DREG2 3 "register_operand" ""))] "aarch64_operands_ok_for_ldpstp (operands, false, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, false); + aarch64_finish_ldpstp_peephole (operands, false); + DONE; }) (define_peephole2 @@ -99,10 +99,10 @@ (define_peephole2 && aarch64_operands_ok_for_ldpstp (operands, true, mode) && (aarch64_tune_params.extra_tuning_flags & AARCH64_EXTRA_TUNE_NO_LDP_STP_QREGS) == 0" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, true); + aarch64_finish_ldpstp_peephole (operands, true); + DONE; }) (define_peephole2 @@ -114,10 +114,10 @@ (define_peephole2 && aarch64_operands_ok_for_ldpstp (operands, false, mode) && (aarch64_tune_params.extra_tuning_flags & AARCH64_EXTRA_TUNE_NO_LDP_STP_QREGS) == 0" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, false); + 
aarch64_finish_ldpstp_peephole (operands, false); + DONE; }) @@ -129,10 +129,10 @@ (define_peephole2 (set (match_operand:DI 2 "register_operand" "") (sign_extend:DI (match_operand:SI 3 "memory_operand" "")))] "aarch64_operands_ok_for_ldpstp (operands, true, SImode)" - [(parallel [(set (match_dup 0) (sign_extend:DI (match_dup 1))) - (set (match_dup 2) (sign_extend:DI (match_dup 3)))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, true); + aarch64_finish_ldpstp_peephole (operands, true, SIGN_EXTEND); + DONE; }) (define_peephole2 @@ -141,10 +141,10 @@ (define_peephole2 (set (match_operand:DI 2 "register_operand" "") (zero_extend:DI (match_operand:SI 3 "memory_operand" "")))] "aarch64_operands_ok_for_ldpstp (operands, true, SImode)" - [(parallel [(set (match_dup 0) (zero_extend:DI (match_dup 1))) - (set (match_dup 2) (zero_extend:DI (match_dup 3)))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, true); + aarch64_finish_ldpstp_peephole (operands, true, ZERO_EXTEND); + DONE; }) ;; Handle storing of a floating point zero with integer data. @@ -163,10 +163,10 @@ (define_peephole2 (set (match_operand: 2 "memory_operand" "") (match_operand: 3 "aarch64_reg_zero_or_fp_zero" ""))] "aarch64_operands_ok_for_ldpstp (operands, false, mode)" - [(parallel [(set (match_dup 0) (match_dup 1)) - (set (match_dup 2) (match_dup 3))])] + [(const_int 0)] { - aarch64_swap_ldrstr_operands (operands, false); + aarch64_finish_ldpstp_peephole (operands, false); + DONE; }) ;; Handle consecutive load/store whose offset is out of the range diff --git a/gcc/config/aarch64/aarch64-modes.def b/gcc/config/aarch64/aarch64-modes.def index ffca5517dec..ecab660e867 100644 --- a/gcc/config/aarch64/aarch64-modes.def +++ b/gcc/config/aarch64/aarch64-modes.def @@ -96,9 +96,13 @@ INT_MODE (XI, 64); /* V8DI mode. */ VECTOR_MODE_WITH_PREFIX (V, INT, DI, 8, 5); - ADJUST_ALIGNMENT (V8DI, 8); +/* V2x4QImode. Used in load/store pair patterns. */ +VECTOR_MODE_WITH_PREFIX (V2x, INT, QI, 4, 5); +ADJUST_NUNITS (V2x4QI, 8); +ADJUST_ALIGNMENT (V2x4QI, 4); + /* Define Advanced SIMD modes for structures of 2, 3 and 4 d-registers. 
*/ #define ADV_SIMD_D_REG_STRUCT_MODES(NVECS, VB, VH, VS, VD) \ VECTOR_MODES_WITH_PREFIX (V##NVECS##x, INT, 8, 3); \ diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h index ee0f0a18541..eb3dff22bf0 100644 --- a/gcc/config/aarch64/aarch64-protos.h +++ b/gcc/config/aarch64/aarch64-protos.h @@ -980,6 +980,8 @@ void aarch64_split_compare_and_swap (rtx op[]); void aarch64_split_atomic_op (enum rtx_code, rtx, rtx, rtx, rtx, rtx, rtx); bool aarch64_gen_adjusted_ldpstp (rtx *, bool, machine_mode, RTX_CODE); +void aarch64_finish_ldpstp_peephole (rtx *, bool, + enum rtx_code = (enum rtx_code)0); void aarch64_expand_sve_vec_cmp_int (rtx, rtx_code, rtx, rtx); bool aarch64_expand_sve_vec_cmp_float (rtx, rtx_code, rtx, rtx, bool); @@ -1040,8 +1042,9 @@ bool aarch64_mergeable_load_pair_p (machine_mode, rtx, rtx); bool aarch64_operands_ok_for_ldpstp (rtx *, bool, machine_mode); bool aarch64_operands_adjust_ok_for_ldpstp (rtx *, bool, machine_mode); bool aarch64_mem_ok_with_ldpstp_policy_model (rtx, bool, machine_mode); -void aarch64_swap_ldrstr_operands (rtx *, bool); bool aarch64_ldpstp_operand_mode_p (machine_mode); +rtx aarch64_gen_load_pair (rtx, rtx, rtx, enum rtx_code = (enum rtx_code)0); +rtx aarch64_gen_store_pair (rtx, rtx, rtx); extern void aarch64_asm_output_pool_epilogue (FILE *, const char *, tree, HOST_WIDE_INT); diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md index 50b68552fe4..af877ccd5a3 100644 --- a/gcc/config/aarch64/aarch64-simd.md +++ b/gcc/config/aarch64/aarch64-simd.md @@ -232,38 +232,6 @@ (define_insn "aarch64_store_lane0" [(set_attr "type" "neon_store1_1reg")] ) -(define_insn "load_pair" - [(set (match_operand:DREG 0 "register_operand") - (match_operand:DREG 1 "aarch64_mem_pair_operand")) - (set (match_operand:DREG2 2 "register_operand") - (match_operand:DREG2 3 "memory_operand"))] - "TARGET_FLOAT - && rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (mode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type ] - [ w , Ump , w , m ; neon_ldp ] ldp\t%d0, %d2, %z1 - [ r , Ump , r , m ; load_16 ] ldp\t%x0, %x2, %z1 - } -) - -(define_insn "vec_store_pair" - [(set (match_operand:DREG 0 "aarch64_mem_pair_operand") - (match_operand:DREG 1 "register_operand")) - (set (match_operand:DREG2 2 "memory_operand") - (match_operand:DREG2 3 "register_operand"))] - "TARGET_FLOAT - && rtx_equal_p (XEXP (operands[2], 0), - plus_constant (Pmode, - XEXP (operands[0], 0), - GET_MODE_SIZE (mode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type ] - [ Ump , w , m , w ; neon_stp ] stp\t%d1, %d3, %z0 - [ Ump , r , m , r ; store_16 ] stp\t%x1, %x3, %z0 - } -) - (define_insn "aarch64_simd_stp" [(set (match_operand:VP_2E 0 "aarch64_mem_pair_lanes_operand") (vec_duplicate:VP_2E (match_operand: 1 "register_operand")))] @@ -274,34 +242,6 @@ (define_insn "aarch64_simd_stp" } ) -(define_insn "load_pair" - [(set (match_operand:VQ 0 "register_operand" "=w") - (match_operand:VQ 1 "aarch64_mem_pair_operand" "Ump")) - (set (match_operand:VQ2 2 "register_operand" "=w") - (match_operand:VQ2 3 "memory_operand" "m"))] - "TARGET_FLOAT - && rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (mode)))" - "ldp\\t%q0, %q2, %z1" - [(set_attr "type" "neon_ldp_q")] -) - -(define_insn "vec_store_pair" - [(set (match_operand:VQ 0 "aarch64_mem_pair_operand" "=Ump") - (match_operand:VQ 1 "register_operand" "w")) - (set (match_operand:VQ2 2 "memory_operand" "=m") - 
(match_operand:VQ2 3 "register_operand" "w"))] - "TARGET_FLOAT - && rtx_equal_p (XEXP (operands[2], 0), - plus_constant (Pmode, - XEXP (operands[0], 0), - GET_MODE_SIZE (mode)))" - "stp\\t%q1, %q3, %z0" - [(set_attr "type" "neon_stp_q")] -) - (define_expand "@aarch64_split_simd_mov" [(set (match_operand:VQMOV 0) (match_operand:VQMOV 1))] diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc index baa2b6ca3f7..119da218529 100644 --- a/gcc/config/aarch64/aarch64.cc +++ b/gcc/config/aarch64/aarch64.cc @@ -5214,15 +5214,17 @@ aarch64_sme_mode_switch_regs::emit_mem_128_moves (sequence seq) rtx set2 = gen_rtx_SET (ops[lhs + 2], ops[3 - lhs]); /* Combine the sets with any stack allocation/deallocation. */ - rtvec vec; + rtx pat; if (prev_loc->index == 0) { rtx plus_sp = plus_constant (Pmode, sp, sp_adjust); - vec = gen_rtvec (3, gen_rtx_SET (sp, plus_sp), set1, set2); + rtvec vec = gen_rtvec (3, gen_rtx_SET (sp, plus_sp), set1, set2); + pat = gen_rtx_PARALLEL (VOIDmode, vec); } + else if (seq == PROLOGUE) + pat = aarch64_gen_store_pair (ops[1], ops[0], ops[2]); else - vec = gen_rtvec (2, set1, set2); - rtx pat = gen_rtx_PARALLEL (VOIDmode, vec); + pat = aarch64_gen_load_pair (ops[0], ops[2], ops[1]); /* Queue a deallocation to the end, otherwise emit the instruction now. */ @@ -8176,59 +8178,87 @@ aarch64_pop_regs (unsigned regno1, unsigned regno2, HOST_WIDE_INT adjustment, } } -/* Generate and return a store pair instruction of mode MODE to store - register REG1 to MEM1 and register REG2 to MEM2. */ +/* Given an ldp/stp register operand mode MODE, return a suitable mode to use + for a mem rtx representing the entire pair. */ -static rtx -aarch64_gen_store_pair (machine_mode mode, rtx mem1, rtx reg1, rtx mem2, - rtx reg2) -{ - switch (mode) - { - case E_DImode: - return gen_store_pair_dw_didi (mem1, reg1, mem2, reg2); +static machine_mode +aarch64_pair_mode_for_mode (machine_mode mode) +{ + if (known_eq (GET_MODE_SIZE (mode), 4)) + return V2x4QImode; + else if (known_eq (GET_MODE_SIZE (mode), 8)) + return V2x8QImode; + else if (known_eq (GET_MODE_SIZE (mode), 16)) + return V2x16QImode; + else + gcc_unreachable (); +} - case E_DFmode: - return gen_store_pair_dw_dfdf (mem1, reg1, mem2, reg2); +/* Given a base mem MEM with a mode suitable for an ldp/stp register operand, + return an rtx like MEM which instead represents the entire pair. */ - case E_TFmode: - return gen_store_pair_dw_tftf (mem1, reg1, mem2, reg2); +static rtx +aarch64_pair_mem_from_base (rtx mem) +{ + auto pair_mode = aarch64_pair_mode_for_mode (GET_MODE (mem)); + mem = adjust_bitfield_address_nv (mem, pair_mode, 0); + gcc_assert (aarch64_mem_pair_lanes_operand (mem, pair_mode)); + return mem; +} - case E_V4SImode: - return gen_vec_store_pairv4siv4si (mem1, reg1, mem2, reg2); +/* Generate and return a store pair instruction to store REG1 and REG2 + into memory starting at BASE_MEM. All three rtxes should have modes of the + same size. */ - case E_V16QImode: - return gen_vec_store_pairv16qiv16qi (mem1, reg1, mem2, reg2); +rtx +aarch64_gen_store_pair (rtx base_mem, rtx reg1, rtx reg2) +{ + rtx pair_mem = aarch64_pair_mem_from_base (base_mem); - default: - gcc_unreachable (); - } + return gen_rtx_SET (pair_mem, + gen_rtx_UNSPEC (GET_MODE (pair_mem), + gen_rtvec (2, reg1, reg2), + UNSPEC_STP)); } -/* Generate and regurn a load pair isntruction of mode MODE to load register - REG1 from MEM1 and register REG2 from MEM2. 
*/ +/* Generate and return a load pair instruction to load a pair of + registers starting at BASE_MEM into REG1 and REG2. If CODE is + UNKNOWN, all three rtxes should have modes of the same size. + Otherwise, CODE is {SIGN,ZERO}_EXTEND, base_mem should be in SImode, + and REG{1,2} should be in DImode. */ -static rtx -aarch64_gen_load_pair (machine_mode mode, rtx reg1, rtx mem1, rtx reg2, - rtx mem2) +rtx +aarch64_gen_load_pair (rtx reg1, rtx reg2, rtx base_mem, enum rtx_code code) { - switch (mode) - { - case E_DImode: - return gen_load_pair_dw_didi (reg1, mem1, reg2, mem2); + rtx pair_mem = aarch64_pair_mem_from_base (base_mem); - case E_DFmode: - return gen_load_pair_dw_dfdf (reg1, mem1, reg2, mem2); - - case E_TFmode: - return gen_load_pair_dw_tftf (reg1, mem1, reg2, mem2); + const bool any_extend_p = (code == ZERO_EXTEND || code == SIGN_EXTEND); + if (any_extend_p) + { + gcc_checking_assert (GET_MODE (base_mem) == SImode + && GET_MODE (reg1) == DImode + && GET_MODE (reg2) == DImode); + } + else + gcc_assert (code == UNKNOWN); + + rtx unspecs[2] = { + gen_rtx_UNSPEC (any_extend_p ? SImode : GET_MODE (reg1), + gen_rtvec (1, pair_mem), + UNSPEC_LDP_FST), + gen_rtx_UNSPEC (any_extend_p ? SImode : GET_MODE (reg2), + gen_rtvec (1, copy_rtx (pair_mem)), + UNSPEC_LDP_SND) + }; - case E_V4SImode: - return gen_load_pairv4siv4si (reg1, mem1, reg2, mem2); + if (any_extend_p) + for (int i = 0; i < 2; i++) + unspecs[i] = gen_rtx_fmt_e (code, DImode, unspecs[i]); - default: - gcc_unreachable (); - } + return gen_rtx_PARALLEL (VOIDmode, + gen_rtvec (2, + gen_rtx_SET (reg1, unspecs[0]), + gen_rtx_SET (reg2, unspecs[1]))); } /* Return TRUE if return address signing should be enabled for the current @@ -8411,7 +8441,7 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp, emit_move_insn (move_src, gen_int_mode (aarch64_sve_vg, DImode)); } rtx base_rtx = stack_pointer_rtx; - poly_int64 sp_offset = offset; + poly_int64 cfa_offset = offset; HOST_WIDE_INT const_offset; if (mode == VNx2DImode && BYTES_BIG_ENDIAN) @@ -8436,8 +8466,17 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp, offset -= fp_offset; } rtx mem = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset)); - bool need_cfa_note_p = (base_rtx != stack_pointer_rtx); + rtx cfa_base = stack_pointer_rtx; + if (hard_fp_valid_p && frame_pointer_needed) + { + cfa_base = hard_frame_pointer_rtx; + cfa_offset += (bytes_below_sp - frame.bytes_below_hard_fp); + } + + rtx cfa_mem = gen_frame_mem (mode, + plus_constant (Pmode, + cfa_base, cfa_offset)); unsigned int regno2; if (!aarch64_sve_mode_p (mode) && reg == move_src @@ -8447,12 +8486,9 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp, frame.reg_offset[regno2] - frame.reg_offset[regno])) { rtx reg2 = gen_rtx_REG (mode, regno2); - rtx mem2; offset += GET_MODE_SIZE (mode); - mem2 = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset)); - insn = emit_insn (aarch64_gen_store_pair (mode, mem, reg, mem2, - reg2)); + insn = emit_insn (aarch64_gen_store_pair (mem, reg, reg2)); /* The first part of a frame-related parallel insn is always assumed to be relevant to the frame @@ -8460,31 +8496,28 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp, frame-related if explicitly marked. 
*/ if (aarch64_emit_cfi_for_reg_p (regno2)) { - if (need_cfa_note_p) - aarch64_add_cfa_expression (insn, reg2, stack_pointer_rtx, - sp_offset + GET_MODE_SIZE (mode)); - else - RTX_FRAME_RELATED_P (XVECEXP (PATTERN (insn), 0, 1)) = 1; + const auto off = cfa_offset + GET_MODE_SIZE (mode); + rtx cfa_mem2 = gen_frame_mem (mode, + plus_constant (Pmode, + cfa_base, + off)); + add_reg_note (insn, REG_CFA_OFFSET, + gen_rtx_SET (cfa_mem2, reg2)); } regno = regno2; ++i; } else if (mode == VNx2DImode && BYTES_BIG_ENDIAN) - { - insn = emit_insn (gen_aarch64_pred_mov (mode, mem, ptrue, move_src)); - need_cfa_note_p = true; - } + insn = emit_insn (gen_aarch64_pred_mov (mode, mem, ptrue, move_src)); else if (aarch64_sve_mode_p (mode)) insn = emit_insn (gen_rtx_SET (mem, move_src)); else insn = emit_move_insn (mem, move_src); RTX_FRAME_RELATED_P (insn) = frame_related_p; - if (frame_related_p && need_cfa_note_p) - aarch64_add_cfa_expression (insn, reg, stack_pointer_rtx, sp_offset); - else if (frame_related_p && move_src != reg) - add_reg_note (insn, REG_FRAME_RELATED_EXPR, gen_rtx_SET (mem, reg)); + if (frame_related_p) + add_reg_note (insn, REG_CFA_OFFSET, gen_rtx_SET (cfa_mem, reg)); /* Emit a fake instruction to indicate that the VG save slot has been initialized. */ @@ -8548,11 +8581,9 @@ aarch64_restore_callee_saves (poly_int64 bytes_below_sp, frame.reg_offset[regno2] - frame.reg_offset[regno])) { rtx reg2 = gen_rtx_REG (mode, regno2); - rtx mem2; offset += GET_MODE_SIZE (mode); - mem2 = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset)); - emit_insn (aarch64_gen_load_pair (mode, reg, mem, reg2, mem2)); + emit_insn (aarch64_gen_load_pair (reg, reg2, mem)); *cfi_ops = alloc_reg_note (REG_CFA_RESTORE, reg2, *cfi_ops); regno = regno2; @@ -8896,9 +8927,9 @@ aarch64_process_components (sbitmap components, bool prologue_p) : gen_rtx_SET (reg2, mem2); if (prologue_p) - insn = emit_insn (aarch64_gen_store_pair (mode, mem, reg, mem2, reg2)); + insn = emit_insn (aarch64_gen_store_pair (mem, reg, reg2)); else - insn = emit_insn (aarch64_gen_load_pair (mode, reg, mem, reg2, mem2)); + insn = emit_insn (aarch64_gen_load_pair (reg, reg2, mem)); if (frame_related_p || frame_related2_p) { @@ -10294,12 +10325,18 @@ aarch64_classify_address (struct aarch64_address_info *info, mode of the corresponding addressing mode is half of that. */ if (type == ADDR_QUERY_LDP_STP_N) { - if (known_eq (GET_MODE_SIZE (mode), 16)) + if (known_eq (GET_MODE_SIZE (mode), 32)) + mode = V16QImode; + else if (known_eq (GET_MODE_SIZE (mode), 16)) mode = DFmode; else if (known_eq (GET_MODE_SIZE (mode), 8)) mode = SFmode; else return false; + + /* This isn't really an Advanced SIMD struct mode, but a mode + used to represent the complete mem in a load/store pair. */ + advsimd_struct_p = false; } bool allow_reg_index_p = (!load_store_pair_p @@ -10917,9 +10954,7 @@ aarch64_init_tpidr2_block () /* The first word of the block points to the save buffer and the second word is the number of ZA slices to save. 
*/ rtx block_0 = adjust_address (block, DImode, 0); - rtx block_8 = adjust_address (block, DImode, 8); - emit_insn (gen_store_pair_dw_didi (block_0, za_save_buffer, - block_8, svl_bytes_reg)); + emit_insn (aarch64_gen_store_pair (block_0, za_save_buffer, svl_bytes_reg)); if (!memory_operand (block, V16QImode)) block = replace_equiv_address (block, force_reg (Pmode, XEXP (block, 0))); @@ -12268,7 +12303,8 @@ aarch64_print_operand (FILE *f, rtx x, int code) if (!MEM_P (x) || (code == 'y' && maybe_ne (GET_MODE_SIZE (mode), 8) - && maybe_ne (GET_MODE_SIZE (mode), 16))) + && maybe_ne (GET_MODE_SIZE (mode), 16) + && maybe_ne (GET_MODE_SIZE (mode), 32))) { output_operand_lossage ("invalid operand for '%%%c'", code); return; @@ -25432,10 +25468,8 @@ aarch64_copy_one_block_and_progress_pointers (rtx *src, rtx *dst, *src = adjust_address (*src, mode, 0); *dst = adjust_address (*dst, mode, 0); /* Emit the memcpy. */ - emit_insn (aarch64_gen_load_pair (mode, reg1, *src, reg2, - aarch64_progress_pointer (*src))); - emit_insn (aarch64_gen_store_pair (mode, *dst, reg1, - aarch64_progress_pointer (*dst), reg2)); + emit_insn (aarch64_gen_load_pair (reg1, reg2, *src)); + emit_insn (aarch64_gen_store_pair (*dst, reg1, reg2)); /* Move the pointers forward. */ *src = aarch64_move_pointer (*src, 32); *dst = aarch64_move_pointer (*dst, 32); @@ -25614,8 +25648,7 @@ aarch64_set_one_block_and_progress_pointer (rtx src, rtx *dst, /* "Cast" the *dst to the correct mode. */ *dst = adjust_address (*dst, mode, 0); /* Emit the memset. */ - emit_insn (aarch64_gen_store_pair (mode, *dst, src, - aarch64_progress_pointer (*dst), src)); + emit_insn (aarch64_gen_store_pair (*dst, src, src)); /* Move the pointers forward. */ *dst = aarch64_move_pointer (*dst, 32); @@ -26812,6 +26845,29 @@ aarch64_swap_ldrstr_operands (rtx* operands, bool load) } } +/* Helper function used for generation of load/store pair instructions, called + from peepholes in aarch64-ldpstp.md. OPERANDS is an array of + operands as matched by the peepholes in that file. LOAD_P is true if we're + generating a load pair, otherwise we're generating a store pair. CODE is + either {ZERO,SIGN}_EXTEND for extending loads or UNKNOWN if we're generating a + standard load/store pair. */ + +void +aarch64_finish_ldpstp_peephole (rtx *operands, bool load_p, enum rtx_code code) +{ + aarch64_swap_ldrstr_operands (operands, load_p); + + if (load_p) + emit_insn (aarch64_gen_load_pair (operands[0], operands[2], + operands[1], code)); + else + { + gcc_assert (code == UNKNOWN); + emit_insn (aarch64_gen_store_pair (operands[0], operands[1], + operands[3])); + } +} + /* Taking X and Y to be HOST_WIDE_INT pointers, return the result of a comparison between the two. */ int @@ -26993,10 +27049,10 @@ bool aarch64_gen_adjusted_ldpstp (rtx *operands, bool load, machine_mode mode, RTX_CODE code) { - rtx base, offset_1, offset_3, t1, t2; - rtx mem_1, mem_2, mem_3, mem_4; + rtx base, offset_1, offset_2; + rtx mem_1, mem_2; rtx temp_operands[8]; - HOST_WIDE_INT off_val_1, off_val_3, base_off, new_off_1, new_off_3, + HOST_WIDE_INT off_val_1, off_val_2, base_off, new_off_1, new_off_2, stp_off_upper_limit, stp_off_lower_limit, msize; /* We make changes on a copy as we may still bail out. 
*/ @@ -27019,23 +27075,19 @@ aarch64_gen_adjusted_ldpstp (rtx *operands, bool load, if (load) { mem_1 = copy_rtx (temp_operands[1]); - mem_2 = copy_rtx (temp_operands[3]); - mem_3 = copy_rtx (temp_operands[5]); - mem_4 = copy_rtx (temp_operands[7]); + mem_2 = copy_rtx (temp_operands[5]); } else { mem_1 = copy_rtx (temp_operands[0]); - mem_2 = copy_rtx (temp_operands[2]); - mem_3 = copy_rtx (temp_operands[4]); - mem_4 = copy_rtx (temp_operands[6]); + mem_2 = copy_rtx (temp_operands[4]); gcc_assert (code == UNKNOWN); } extract_base_offset_in_addr (mem_1, &base, &offset_1); - extract_base_offset_in_addr (mem_3, &base, &offset_3); + extract_base_offset_in_addr (mem_2, &base, &offset_2); gcc_assert (base != NULL_RTX && offset_1 != NULL_RTX - && offset_3 != NULL_RTX); + && offset_2 != NULL_RTX); /* Adjust offset so it can fit in LDP/STP instruction. */ msize = GET_MODE_SIZE (mode).to_constant(); @@ -27043,11 +27095,11 @@ aarch64_gen_adjusted_ldpstp (rtx *operands, bool load, stp_off_lower_limit = - msize * 0x40; off_val_1 = INTVAL (offset_1); - off_val_3 = INTVAL (offset_3); + off_val_2 = INTVAL (offset_2); /* The base offset is optimally half way between the two STP/LDP offsets. */ if (msize <= 4) - base_off = (off_val_1 + off_val_3) / 2; + base_off = (off_val_1 + off_val_2) / 2; else /* However, due to issues with negative LDP/STP offset generation for larger modes, for DF, DD, DI and vector modes. we must not use negative @@ -27087,73 +27139,58 @@ aarch64_gen_adjusted_ldpstp (rtx *operands, bool load, new_off_1 = off_val_1 - base_off; /* Offset of the second STP/LDP. */ - new_off_3 = off_val_3 - base_off; + new_off_2 = off_val_2 - base_off; /* The offsets must be within the range of the LDP/STP instructions. */ if (new_off_1 > stp_off_upper_limit || new_off_1 < stp_off_lower_limit - || new_off_3 > stp_off_upper_limit || new_off_3 < stp_off_lower_limit) + || new_off_2 > stp_off_upper_limit || new_off_2 < stp_off_lower_limit) return false; replace_equiv_address_nv (mem_1, plus_constant (Pmode, operands[8], new_off_1), true); replace_equiv_address_nv (mem_2, plus_constant (Pmode, operands[8], - new_off_1 + msize), true); - replace_equiv_address_nv (mem_3, plus_constant (Pmode, operands[8], - new_off_3), true); - replace_equiv_address_nv (mem_4, plus_constant (Pmode, operands[8], - new_off_3 + msize), true); + new_off_2), true); if (!aarch64_mem_pair_operand (mem_1, mode) - || !aarch64_mem_pair_operand (mem_3, mode)) + || !aarch64_mem_pair_operand (mem_2, mode)) return false; - if (code == ZERO_EXTEND) - { - mem_1 = gen_rtx_ZERO_EXTEND (DImode, mem_1); - mem_2 = gen_rtx_ZERO_EXTEND (DImode, mem_2); - mem_3 = gen_rtx_ZERO_EXTEND (DImode, mem_3); - mem_4 = gen_rtx_ZERO_EXTEND (DImode, mem_4); - } - else if (code == SIGN_EXTEND) - { - mem_1 = gen_rtx_SIGN_EXTEND (DImode, mem_1); - mem_2 = gen_rtx_SIGN_EXTEND (DImode, mem_2); - mem_3 = gen_rtx_SIGN_EXTEND (DImode, mem_3); - mem_4 = gen_rtx_SIGN_EXTEND (DImode, mem_4); - } - if (load) { operands[0] = temp_operands[0]; operands[1] = mem_1; operands[2] = temp_operands[2]; - operands[3] = mem_2; operands[4] = temp_operands[4]; - operands[5] = mem_3; + operands[5] = mem_2; operands[6] = temp_operands[6]; - operands[7] = mem_4; } else { operands[0] = mem_1; operands[1] = temp_operands[1]; - operands[2] = mem_2; operands[3] = temp_operands[3]; - operands[4] = mem_3; + operands[4] = mem_2; operands[5] = temp_operands[5]; - operands[6] = mem_4; operands[7] = temp_operands[7]; } /* Emit adjusting instruction. 
*/ emit_insn (gen_rtx_SET (operands[8], plus_constant (DImode, base, base_off))); /* Emit ldp/stp instructions. */ - t1 = gen_rtx_SET (operands[0], operands[1]); - t2 = gen_rtx_SET (operands[2], operands[3]); - emit_insn (gen_rtx_PARALLEL (VOIDmode, gen_rtvec (2, t1, t2))); - t1 = gen_rtx_SET (operands[4], operands[5]); - t2 = gen_rtx_SET (operands[6], operands[7]); - emit_insn (gen_rtx_PARALLEL (VOIDmode, gen_rtvec (2, t1, t2))); + if (load) + { + emit_insn (aarch64_gen_load_pair (operands[0], operands[2], + operands[1], code)); + emit_insn (aarch64_gen_load_pair (operands[4], operands[6], + operands[5], code)); + } + else + { + emit_insn (aarch64_gen_store_pair (operands[0], operands[1], + operands[3])); + emit_insn (aarch64_gen_store_pair (operands[4], operands[5], + operands[7])); + } return true; } diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index f87cddf8f4b..a402729e3dc 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -228,6 +228,9 @@ (define_c_enum "unspec" [ UNSPEC_GOTSMALLTLS UNSPEC_GOTTINYPIC UNSPEC_GOTTINYTLS + UNSPEC_STP + UNSPEC_LDP_FST + UNSPEC_LDP_SND UNSPEC_LD1 UNSPEC_LD2 UNSPEC_LD2_DREG @@ -527,6 +530,11 @@ (define_attr "predicated" "yes,no" (const_string "no")) ;; may chose to hold the tracking state encoded in SP. (define_attr "speculation_barrier" "true,false" (const_string "false")) +;; Attribute use to identify load pair and store pair instructions. +;; Currently the attribute is only applied to the non-writeback ldp/stp +;; patterns. +(define_attr "ldpstp" "ldp,stp,none" (const_string "none")) + ;; ------------------------------------------------------------------- ;; Pipeline descriptions and scheduling ;; ------------------------------------------------------------------- @@ -1823,100 +1831,62 @@ (define_expand "setmemdi" FAIL; }) -;; Operands 1 and 3 are tied together by the final condition; so we allow -;; fairly lax checking on the second memory operation. 
-(define_insn "load_pair_sw_" - [(set (match_operand:SX 0 "register_operand") - (match_operand:SX 1 "aarch64_mem_pair_operand")) - (set (match_operand:SX2 2 "register_operand") - (match_operand:SX2 3 "memory_operand"))] - "rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (mode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] - [ r , Ump , r , m ; load_8 , * ] ldp\t%w0, %w2, %z1 - [ w , Ump , w , m ; neon_load1_2reg , fp ] ldp\t%s0, %s2, %z1 - } -) - -;; Storing different modes that can still be merged -(define_insn "load_pair_dw_" - [(set (match_operand:DX 0 "register_operand") - (match_operand:DX 1 "aarch64_mem_pair_operand")) - (set (match_operand:DX2 2 "register_operand") - (match_operand:DX2 3 "memory_operand"))] - "rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (mode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] - [ r , Ump , r , m ; load_16 , * ] ldp\t%x0, %x2, %z1 - [ w , Ump , w , m ; neon_load1_2reg , fp ] ldp\t%d0, %d2, %z1 - } -) - -(define_insn "load_pair_dw_" - [(set (match_operand:TX 0 "register_operand" "=w") - (match_operand:TX 1 "aarch64_mem_pair_operand" "Ump")) - (set (match_operand:TX2 2 "register_operand" "=w") - (match_operand:TX2 3 "memory_operand" "m"))] - "TARGET_BASE_SIMD - && rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (mode)))" - "ldp\\t%q0, %q2, %z1" +(define_insn "*load_pair_" + [(set (match_operand:GPI 0 "aarch64_ldp_reg_operand") + (unspec [ + (match_operand: 1 "aarch64_mem_pair_lanes_operand") + ] UNSPEC_LDP_FST)) + (set (match_operand:GPI 2 "aarch64_ldp_reg_operand") + (unspec [ + (match_dup 1) + ] UNSPEC_LDP_SND))] + "" + {@ [cons: =0, 1, =2; attrs: type, arch] + [ r, Umn, r; load_, * ] ldp\t%0, %2, %y1 + [ w, Umn, w; neon_load1_2reg, fp ] ldp\t%0, %2, %y1 + } + [(set_attr "ldpstp" "ldp")] +) + +(define_insn "*load_pair_16" + [(set (match_operand:TI 0 "aarch64_ldp_reg_operand" "=w") + (unspec [ + (match_operand:V2x16QI 1 "aarch64_mem_pair_lanes_operand" "Umn") + ] UNSPEC_LDP_FST)) + (set (match_operand:TI 2 "aarch64_ldp_reg_operand" "=w") + (unspec [ + (match_dup 1) + ] UNSPEC_LDP_SND))] + "TARGET_FLOAT" + "ldp\\t%q0, %q2, %y1" [(set_attr "type" "neon_ldp_q") - (set_attr "fp" "yes")] -) - -;; Operands 0 and 2 are tied together by the final condition; so we allow -;; fairly lax checking on the second memory operation. 
-(define_insn "store_pair_sw_" - [(set (match_operand:SX 0 "aarch64_mem_pair_operand") - (match_operand:SX 1 "aarch64_reg_zero_or_fp_zero")) - (set (match_operand:SX2 2 "memory_operand") - (match_operand:SX2 3 "aarch64_reg_zero_or_fp_zero"))] - "rtx_equal_p (XEXP (operands[2], 0), - plus_constant (Pmode, - XEXP (operands[0], 0), - GET_MODE_SIZE (mode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] - [ Ump , rYZ , m , rYZ ; store_8 , * ] stp\t%w1, %w3, %z0 - [ Ump , w , m , w ; neon_store1_2reg , fp ] stp\t%s1, %s3, %z0 - } -) - -;; Storing different modes that can still be merged -(define_insn "store_pair_dw_" - [(set (match_operand:DX 0 "aarch64_mem_pair_operand") - (match_operand:DX 1 "aarch64_reg_zero_or_fp_zero")) - (set (match_operand:DX2 2 "memory_operand") - (match_operand:DX2 3 "aarch64_reg_zero_or_fp_zero"))] - "rtx_equal_p (XEXP (operands[2], 0), - plus_constant (Pmode, - XEXP (operands[0], 0), - GET_MODE_SIZE (mode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] - [ Ump , rYZ , m , rYZ ; store_16 , * ] stp\t%x1, %x3, %z0 - [ Ump , w , m , w ; neon_store1_2reg , fp ] stp\t%d1, %d3, %z0 - } -) - -(define_insn "store_pair_dw_" - [(set (match_operand:TX 0 "aarch64_mem_pair_operand" "=Ump") - (match_operand:TX 1 "register_operand" "w")) - (set (match_operand:TX2 2 "memory_operand" "=m") - (match_operand:TX2 3 "register_operand" "w"))] - "TARGET_BASE_SIMD - && rtx_equal_p (XEXP (operands[2], 0), - plus_constant (Pmode, - XEXP (operands[0], 0), - GET_MODE_SIZE (TFmode)))" - "stp\\t%q1, %q3, %z0" + (set_attr "fp" "yes") + (set_attr "ldpstp" "ldp")] +) + +(define_insn "*store_pair_" + [(set (match_operand: 0 "aarch64_mem_pair_lanes_operand") + (unspec: + [(match_operand:GPI 1 "aarch64_stp_reg_operand") + (match_operand:GPI 2 "aarch64_stp_reg_operand")] UNSPEC_STP))] + "" + {@ [cons: =0, 1, 2; attrs: type , arch] + [ Umn, rYZ, rYZ; store_, * ] stp\t%1, %2, %y0 + [ Umn, w, w; neon_store1_2reg , fp ] stp\t%1, %2, %y0 + } + [(set_attr "ldpstp" "stp")] +) + +(define_insn "*store_pair_16" + [(set (match_operand:V2x16QI 0 "aarch64_mem_pair_lanes_operand" "=Umn") + (unspec:V2x16QI + [(match_operand:TI 1 "aarch64_ldp_reg_operand" "w") + (match_operand:TI 2 "aarch64_ldp_reg_operand" "w")] UNSPEC_STP))] + "TARGET_FLOAT" + "stp\t%q1, %q2, %y0" [(set_attr "type" "neon_stp_q") - (set_attr "fp" "yes")] + (set_attr "fp" "yes") + (set_attr "ldpstp" "stp")] ) ;; Writeback load/store pair patterns. 
@@ -2146,14 +2116,15 @@ (define_insn "*extendsidi2_aarch64" (define_insn "*load_pair_extendsidi2_aarch64" [(set (match_operand:DI 0 "register_operand" "=r") - (sign_extend:DI (match_operand:SI 1 "aarch64_mem_pair_operand" "Ump"))) + (sign_extend:DI (unspec:SI [ + (match_operand:V2x4QI 1 "aarch64_mem_pair_lanes_operand" "Umn") + ] UNSPEC_LDP_FST))) (set (match_operand:DI 2 "register_operand" "=r") - (sign_extend:DI (match_operand:SI 3 "memory_operand" "m")))] - "rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (SImode)))" - "ldpsw\\t%0, %2, %z1" + (sign_extend:DI (unspec:SI [ + (match_dup 1) + ] UNSPEC_LDP_SND)))] + "" + "ldpsw\\t%0, %2, %y1" [(set_attr "type" "load_8")] ) @@ -2173,16 +2144,17 @@ (define_insn "*zero_extendsidi2_aarch64" (define_insn "*load_pair_zero_extendsidi2_aarch64" [(set (match_operand:DI 0 "register_operand") - (zero_extend:DI (match_operand:SI 1 "aarch64_mem_pair_operand"))) + (zero_extend:DI (unspec:SI [ + (match_operand:V2x4QI 1 "aarch64_mem_pair_lanes_operand") + ] UNSPEC_LDP_FST))) (set (match_operand:DI 2 "register_operand") - (zero_extend:DI (match_operand:SI 3 "memory_operand")))] - "rtx_equal_p (XEXP (operands[3], 0), - plus_constant (Pmode, - XEXP (operands[1], 0), - GET_MODE_SIZE (SImode)))" - {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] - [ r , Ump , r , m ; load_8 , * ] ldp\t%w0, %w2, %z1 - [ w , Ump , w , m ; neon_load1_2reg , fp ] ldp\t%s0, %s2, %z1 + (zero_extend:DI (unspec:SI [ + (match_dup 1) + ] UNSPEC_LDP_SND)))] + "" + {@ [ cons: =0 , 1 , =2; attrs: type , arch] + [ r , Umn , r ; load_8 , * ] ldp\t%w0, %w2, %y1 + [ w , Umn , w ; neon_load1_2reg, fp ] ldp\t%s0, %s2, %y1 } ) diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md index f204850850c..65c5cadccde 100644 --- a/gcc/config/aarch64/iterators.md +++ b/gcc/config/aarch64/iterators.md @@ -1604,6 +1604,9 @@ (define_mode_attr VDBL [(V8QI "V16QI") (V4HI "V8HI") (SI "V2SI") (SF "V2SF") (DI "V2DI") (DF "V2DF")]) +;; Load/store pair mode. +(define_mode_attr VPAIR [(SI "V2x4QI") (DI "V2x8QI")]) + ;; Register suffix for double-length mode. (define_mode_attr Vdtype [(V4HF "8h") (V2SF "4s")]) diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md index 698a68a6311..9e6231691c0 100644 --- a/gcc/config/aarch64/predicates.md +++ b/gcc/config/aarch64/predicates.md @@ -300,10 +300,12 @@ (define_special_predicate "aarch64_mem_pair_operator" (match_test "known_eq (GET_MODE_SIZE (mode), GET_MODE_SIZE (GET_MODE (op)))")))) -(define_predicate "aarch64_mem_pair_operand" - (and (match_code "mem") - (match_test "aarch64_legitimate_address_p (mode, XEXP (op, 0), false, - ADDR_QUERY_LDP_STP)"))) +;; Like aarch64_mem_pair_operator, but additionally check the +;; address is suitable. 
+(define_special_predicate "aarch64_mem_pair_operand" + (and (match_operand 0 "aarch64_mem_pair_operator") + (match_test "aarch64_legitimate_address_p (GET_MODE (op), XEXP (op, 0), + false, ADDR_QUERY_LDP_STP)"))) (define_predicate "pmode_plus_operator" (and (match_code "plus")

From patchwork Thu Dec 7 14:48:48 2023
X-Patchwork-Submitter: Alex Coplan
X-Patchwork-Id: 81675
X-Patchwork-Delegate: rsandifo@gcc.gnu.org
Date: Thu, 7 Dec 2023 14:48:48 +0000
From: Alex Coplan
To: gcc-patches@gcc.gnu.org
Cc: Richard Sandiford , Kyrylo Tkachov
Subject: [PATCH v3 10/11] aarch64: Add new load/store pair fusion pass
Message-ID:
Hi,

This is a v5 of the aarch64 load/store pair fusion pass, rebased on top
of the SME changes.  v4 is here:

https://gcc.gnu.org/pipermail/gcc-patches/2023-December/639404.html

There are no changes to the pass itself since v4, this is just a rebase.

Bootstrapped/regtested as a series on aarch64-linux-gnu, OK for trunk?

Thanks,
Alex

-- >8 --

This adds a new aarch64-specific RTL-SSA pass dedicated to forming load
and store pairs (LDPs and STPs).
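For readers less familiar with these instructions, the core rewrite the
pass performs is to replace two scalar accesses to adjacent memory with a
single paired access.  Purely as an illustrative sketch (not output taken
from the pass), a pair of loads such as:

        ldr     x0, [x2]        // load the doubleword at [x2]
        ldr     x1, [x2, 8]     // load the adjacent doubleword at [x2 + 8]

can be fused into:

        ldp     x0, x1, [x2]    // one load pair covering both accesses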
As a motivating example for the kind of thing this improves, take the
following testcase:

extern double c[20];

double f(double x)
{
  double y = x*x;
  y += c[16];
  y += c[17];
  y += c[18];
  y += c[19];
  return y;
}

for which we currently generate (at -O2):

f:
        adrp    x0, c
        add     x0, x0, :lo12:c
        ldp     d31, d29, [x0, 128]
        ldr     d30, [x0, 144]
        fmadd   d0, d0, d0, d31
        ldr     d31, [x0, 152]
        fadd    d0, d0, d29
        fadd    d0, d0, d30
        fadd    d0, d0, d31
        ret

but with the pass, we generate:

f:
.LFB0:
        adrp    x0, c
        add     x0, x0, :lo12:c
        ldp     d31, d29, [x0, 128]
        fmadd   d0, d0, d0, d31
        ldp     d30, d31, [x0, 144]
        fadd    d0, d0, d29
        fadd    d0, d0, d30
        fadd    d0, d0, d31
        ret

The pass is local (only considers a BB at a time).  In theory, it should
be possible to extend it to run over EBBs, at least in the case of pure
(MEM_READONLY_P) loads, but this is left for future work.

The pass works by identifying two kinds of bases: tree decls obtained via
MEM_EXPR, and RTL register bases in the form of RTL-SSA def_infos.  If a
candidate memory access has a MEM_EXPR base, then we track it via this
base, and otherwise if it is of a simple reg + offset form, we track it
via the RTL-SSA def_info for the register.

For each BB, for a given kind of base, we build up a hash table mapping
the base to an access_group.  The access_group data structure holds a
list of accesses at each offset relative to the same base.  It uses a
splay tree to support efficient insertion (while walking the bb), and the
nodes are chained using a linked list to support efficient iteration
(while doing the transformation).

For each base, we then iterate over the access_group to identify adjacent
accesses, and try to form load/store pairs for those insns that access
adjacent memory.

The pass is currently run twice, both before and after register
allocation.  The first copy of the pass is run late in the pre-RA RTL
pipeline, immediately after sched1, since it was found that sched1 was
increasing register pressure when the pass was run before.  The second
copy of the pass runs immediately before peephole2, so as to get any
opportunities that the existing ldp/stp peepholes can handle.

There are some cases that we punt on before RA, e.g. accesses relative to
eliminable regs (such as the soft frame pointer).  We do this since we
can't know the elimination offset before RA, and we want to avoid the RA
reloading the offset (due to being out of ldp/stp immediate range) as
this can generate worse code.

The post-RA copy of the pass is there to pick up the crumbs that were
left behind / things we punted on in the pre-RA pass.  Among other
things, it's needed to handle accesses relative to the stack pointer (see
the previous patch in the series for an example).  It can also handle
code that didn't exist at the time the pre-RA pass was run (spill code,
prologue/epilogue code).

This is an initial implementation, and there are (among other possible
improvements) the following notable caveats / missing features that are
left for future work, but could give further improvements:

 - Moving accesses between BBs within an EBB, see above.
 - Out-of-range opportunities: currently the pass refuses to form pairs
   if there isn't a suitable base register with an immediate in range for
   ldp/stp, but it can be profitable to emit anchor addresses in the case
   that there are four or more out-of-range nearby accesses that can be
   formed into pairs.  This is handled by the current ldp/stp peepholes,
   so it would be good to support this in the future.
 - Discovery: currently we prioritize MEM_EXPR bases over RTL bases,
   which can lead to us missing opportunities in the case that two
   accesses have distinct MEM_EXPR bases (i.e. different DECLs) but they
   are still adjacent in memory (e.g. adjacent variables on the stack).

I hope to address this for GCC 15, hopefully getting to the point where
we can remove the ldp/stp peepholes and scheduling hooks.  Furthermore it
would be nice to make the pass aware of section anchors (adding these as
a third kind of base) allowing merging accesses to adjacent variables
within the same section.

gcc/ChangeLog:

        * config.gcc: Add aarch64-ldp-fusion.o to extra_objs for aarch64.
        * config/aarch64/aarch64-passes.def: Add copies of pass_ldp_fusion
        before and after RA.
        * config/aarch64/aarch64-protos.h (make_pass_ldp_fusion): Declare.
        * config/aarch64/aarch64.opt (-mearly-ldp-fusion): New.
        (-mlate-ldp-fusion): New.
        (--param=aarch64-ldp-alias-check-limit): New.
        (--param=aarch64-ldp-writeback): New.
        * config/aarch64/t-aarch64: Add rule for aarch64-ldp-fusion.o.
        * config/aarch64/aarch64-ldp-fusion.cc: New file.

diff --git a/gcc/config.gcc b/gcc/config.gcc index 6450448f2f0..6901ef6e5c0 100644 --- a/gcc/config.gcc +++ b/gcc/config.gcc @@ -349,7 +349,7 @@ aarch64*-*-*) c_target_objs="aarch64-c.o" cxx_target_objs="aarch64-c.o" d_target_objs="aarch64-d.o" - extra_objs="aarch64-builtins.o aarch-common.o aarch64-sve-builtins.o aarch64-sve-builtins-shapes.o aarch64-sve-builtins-base.o aarch64-sve-builtins-sve2.o aarch64-sve-builtins-sme.o cortex-a57-fma-steering.o aarch64-speculation.o falkor-tag-collision-avoidance.o aarch-bti-insert.o aarch64-cc-fusion.o" + extra_objs="aarch64-builtins.o aarch-common.o aarch64-sve-builtins.o aarch64-sve-builtins-shapes.o aarch64-sve-builtins-base.o aarch64-sve-builtins-sve2.o aarch64-sve-builtins-sme.o cortex-a57-fma-steering.o aarch64-speculation.o falkor-tag-collision-avoidance.o aarch-bti-insert.o aarch64-cc-fusion.o aarch64-ldp-fusion.o" target_gtfiles="\$(srcdir)/config/aarch64/aarch64-builtins.cc \$(srcdir)/config/aarch64/aarch64-sve-builtins.h \$(srcdir)/config/aarch64/aarch64-sve-builtins.cc" target_has_targetm_common=yes ;; diff --git a/gcc/config/aarch64/aarch64-ldp-fusion.cc b/gcc/config/aarch64/aarch64-ldp-fusion.cc new file mode 100644 index 00000000000..ea59e8976d4 --- /dev/null +++ b/gcc/config/aarch64/aarch64-ldp-fusion.cc @@ -0,0 +1,2765 @@ +// LoadPair fusion optimization pass for AArch64. +// Copyright (C) 2023 Free Software Foundation, Inc. +// +// This file is part of GCC. +// +// GCC is free software; you can redistribute it and/or modify it +// under the terms of the GNU General Public License as published by +// the Free Software Foundation; either version 3, or (at your option) +// any later version. +// +// GCC is distributed in the hope that it will be useful, but +// WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +// General Public License for more details. +// +// You should have received a copy of the GNU General Public License +// along with GCC; see the file COPYING3. If not see +// .
+ +#define INCLUDE_ALGORITHM +#define INCLUDE_FUNCTIONAL +#define INCLUDE_LIST +#define INCLUDE_TYPE_TRAITS +#include "config.h" +#include "system.h" +#include "coretypes.h" +#include "backend.h" +#include "rtl.h" +#include "df.h" +#include "rtl-ssa.h" +#include "cfgcleanup.h" +#include "tree-pass.h" +#include "ordered-hash-map.h" +#include "tree-dfa.h" +#include "fold-const.h" +#include "tree-hash-traits.h" +#include "print-tree.h" +#include "insn-attr.h" + +using namespace rtl_ssa; + +static constexpr HOST_WIDE_INT LDP_IMM_BITS = 7; +static constexpr HOST_WIDE_INT LDP_IMM_SIGN_BIT = (1 << (LDP_IMM_BITS - 1)); +static constexpr HOST_WIDE_INT LDP_MAX_IMM = LDP_IMM_SIGN_BIT - 1; +static constexpr HOST_WIDE_INT LDP_MIN_IMM = -LDP_MAX_IMM - 1; + +// We pack these fields (load_p, fpsimd_p, and size) into an integer +// (LFS) which we use as part of the key into the main hash tables. +// +// The idea is that we group candidates together only if they agree on +// the fields below. Candidates that disagree on any of these +// properties shouldn't be merged together. +struct lfs_fields +{ + bool load_p; + bool fpsimd_p; + unsigned size; +}; + +using insn_list_t = std::list; +using insn_iter_t = insn_list_t::iterator; + +// Information about the accesses at a given offset from a particular +// base. Stored in an access_group, see below. +struct access_record +{ + poly_int64 offset; + std::list cand_insns; + std::list::iterator place; + + access_record (poly_int64 off) : offset (off) {} +}; + +// A group of accesses where adjacent accesses could be ldp/stp +// candidates. The splay tree supports efficient insertion, +// while the list supports efficient iteration. +struct access_group +{ + splay_tree tree; + std::list list; + + template + inline void track (Alloc node_alloc, poly_int64 offset, insn_info *insn); +}; + +// Information about a potential base candidate, used in try_fuse_pair. +// There may be zero, one, or two viable RTL bases for a given pair. +struct base_cand +{ + // DEF is the def of the base register to be used by the pair. + def_info *def; + + // FROM_INSN is -1 if the base candidate is already shared by both + // candidate insns. Otherwise it holds the index of the insn from + // which the base originated. + // + // In the case that the base is shared, either DEF is already used + // by both candidate accesses, or both accesses see different versions + // of the same regno, in which case DEF is the def consumed by the + // first candidate access. + int from_insn; + + // To form a pair, we do so by moving the first access down and the second + // access up. To determine where to form the pair, and whether or not + // it is safe to form the pair, we track instructions which cannot be + // re-ordered past due to either dataflow or alias hazards. + // + // Since we allow changing the base used by an access, the choice of + // base can change which instructions act as re-ordering hazards for + // this pair (due to different dataflow). We store the initial + // dataflow hazards for this choice of base candidate in HAZARDS. + // + // These hazards act as re-ordering barriers to each candidate insn + // respectively, in program order. + // + // Later on, when we take alias analysis into account, we narrow + // HAZARDS accordingly. + insn_info *hazards[2]; + + base_cand (def_info *def, int insn) + : def (def), from_insn (insn), hazards {nullptr, nullptr} {} + + base_cand (def_info *def) : base_cand (def, -1) {} + + // Test if this base candidate is viable according to HAZARDS. 
+ bool viable () const + { + return !hazards[0] || !hazards[1] || (*hazards[0] > *hazards[1]); + } +}; + +// Information about an alternate base. For a def_info D, it may +// instead be expressed as D = BASE + OFFSET. +struct alt_base +{ + def_info *base; + poly_int64 offset; +}; + +// State used by the pass for a given basic block. +struct ldp_bb_info +{ + using def_hash = nofree_ptr_hash; + using expr_key_t = pair_hash>; + using def_key_t = pair_hash>; + + // Map of -> access_group. + ordered_hash_map expr_map; + + // Map of -> access_group. + ordered_hash_map def_map; + + // Given the def_info for an RTL base register, express it as an offset from + // some canonical base instead. + // + // Canonicalizing bases in this way allows us to identify adjacent accesses + // even if they see different base register defs. + hash_map canon_base_map; + + static const size_t obstack_alignment = sizeof (void *); + bb_info *m_bb; + + ldp_bb_info (bb_info *bb) : m_bb (bb), m_emitted_tombstone (false) + { + obstack_specify_allocation (&m_obstack, OBSTACK_CHUNK_SIZE, + obstack_alignment, obstack_chunk_alloc, + obstack_chunk_free); + } + ~ldp_bb_info () + { + obstack_free (&m_obstack, nullptr); + + if (m_emitted_tombstone) + { + bitmap_release (&m_tombstone_bitmap); + bitmap_obstack_release (&m_bitmap_obstack); + } + } + + inline void track_access (insn_info *, bool load, rtx mem); + inline void transform (); + inline void cleanup_tombstones (); + +private: + obstack m_obstack; + + // State for keeping track of tombstone insns emitted for this BB. + bitmap_obstack m_bitmap_obstack; + bitmap_head m_tombstone_bitmap; + bool m_emitted_tombstone; + + inline splay_tree_node *node_alloc (access_record *); + + template + inline void traverse_base_map (Map &map); + inline void transform_for_base (int load_size, access_group &group); + + inline bool try_form_pairs (insn_list_t *, insn_list_t *, + bool load_p, unsigned access_size); + + inline void merge_pairs (insn_list_t &, insn_list_t &, + hash_set &to_delete, + bool load_p, + unsigned access_size); + + inline bool try_fuse_pair (bool load_p, unsigned access_size, + insn_info *i1, insn_info *i2); + + inline bool fuse_pair (bool load_p, unsigned access_size, + int writeback, + insn_info *i1, insn_info *i2, + base_cand &base, + const insn_range_info &move_range); + + inline void track_tombstone (int uid); + + inline bool track_via_mem_expr (insn_info *, rtx mem, lfs_fields lfs); +}; + +splay_tree_node * +ldp_bb_info::node_alloc (access_record *access) +{ + using T = splay_tree_node; + void *addr = obstack_alloc (&m_obstack, sizeof (T)); + return new (addr) T (access); +} + +// Given a mem MEM, if the address has side effects, return a MEM that accesses +// the same address but without the side effects. Otherwise, return +// MEM unchanged. +static rtx +drop_writeback (rtx mem) +{ + rtx addr = XEXP (mem, 0); + + if (!side_effects_p (addr)) + return mem; + + switch (GET_CODE (addr)) + { + case PRE_MODIFY: + addr = XEXP (addr, 1); + break; + case POST_MODIFY: + case POST_INC: + case POST_DEC: + addr = XEXP (addr, 0); + break; + case PRE_INC: + case PRE_DEC: + { + poly_int64 adjustment = GET_MODE_SIZE (GET_MODE (mem)); + if (GET_CODE (addr) == PRE_DEC) + adjustment *= -1; + addr = plus_constant (GET_MODE (addr), XEXP (addr, 0), adjustment); + break; + } + default: + gcc_unreachable (); + } + + return change_address (mem, GET_MODE (mem), addr); +} + +// Convenience wrapper around strip_offset that can also look through +// RTX_AUTOINC addresses. 
The interface is like strip_offset except we take a +// MEM so that we know the mode of the access. +static rtx ldp_strip_offset (rtx mem, poly_int64 *offset) +{ + rtx addr = XEXP (mem, 0); + + switch (GET_CODE (addr)) + { + case PRE_MODIFY: + case POST_MODIFY: + addr = strip_offset (XEXP (addr, 1), offset); + gcc_checking_assert (REG_P (addr)); + gcc_checking_assert (rtx_equal_p (XEXP (XEXP (mem, 0), 0), addr)); + break; + case PRE_INC: + case POST_INC: + addr = XEXP (addr, 0); + *offset = GET_MODE_SIZE (GET_MODE (mem)); + gcc_checking_assert (REG_P (addr)); + break; + case PRE_DEC: + case POST_DEC: + addr = XEXP (addr, 0); + *offset = -GET_MODE_SIZE (GET_MODE (mem)); + gcc_checking_assert (REG_P (addr)); + break; + + default: + addr = strip_offset (addr, offset); + } + + return addr; +} + +// Return true if X is a PRE_{INC,DEC,MODIFY} rtx. +static bool +any_pre_modify_p (rtx x) +{ + const auto code = GET_CODE (x); + return code == PRE_INC || code == PRE_DEC || code == PRE_MODIFY; +} + +// Return true if X is a POST_{INC,DEC,MODIFY} rtx. +static bool +any_post_modify_p (rtx x) +{ + const auto code = GET_CODE (x); + return code == POST_INC || code == POST_DEC || code == POST_MODIFY; +} + +// Return true if we should consider forming ldp/stp insns from memory +// accesses with operand mode MODE at this stage in compilation. +static bool +ldp_operand_mode_ok_p (machine_mode mode) +{ + const bool allow_qregs + = !(aarch64_tune_params.extra_tuning_flags + & AARCH64_EXTRA_TUNE_NO_LDP_STP_QREGS); + + if (!aarch64_ldpstp_operand_mode_p (mode)) + return false; + + const auto size = GET_MODE_SIZE (mode).to_constant (); + if (size == 16 && !allow_qregs) + return false; + + // We don't pair up TImode accesses before RA because TImode is + // special in that it can be allocated to a pair of GPRs or a single + // FPR, and the RA is best placed to make that decision. + return reload_completed || mode != TImode; +} + +// Given LFS (load_p, fpsimd_p, size) fields in FIELDS, encode these +// into an integer for use as a hash table key. +static int +encode_lfs (lfs_fields fields) +{ + int size_log2 = exact_log2 (fields.size); + gcc_checking_assert (size_log2 >= 2 && size_log2 <= 4); + return ((int)fields.load_p << 3) + | ((int)fields.fpsimd_p << 2) + | (size_log2 - 2); +} + +// Inverse of encode_lfs. +static lfs_fields +decode_lfs (int lfs) +{ + bool load_p = (lfs & (1 << 3)); + bool fpsimd_p = (lfs & (1 << 2)); + unsigned size = 1U << ((lfs & 3) + 2); + return { load_p, fpsimd_p, size }; +} + +// Track the access INSN at offset OFFSET in this access group. +// ALLOC_NODE is used to allocate splay tree nodes. +template +void +access_group::track (Alloc alloc_node, poly_int64 offset, insn_info *insn) +{ + auto insert_before = [&](std::list::iterator after) + { + auto it = list.emplace (after, offset); + it->cand_insns.push_back (insn); + it->place = it; + return &*it; + }; + + if (!list.size ()) + { + auto access = insert_before (list.end ()); + tree.insert_max_node (alloc_node (access)); + return; + } + + auto compare = [&](splay_tree_node *node) + { + return compare_sizes_for_sort (offset, node->value ()->offset); + }; + auto result = tree.lookup (compare); + splay_tree_node *node = tree.root (); + if (result == 0) + node->value ()->cand_insns.push_back (insn); + else + { + auto it = node->value ()->place; + auto after = (result > 0) ? 
std::next (it) : it; + auto access = insert_before (after); + tree.insert_child (node, result > 0, alloc_node (access)); + } +} + +// Given a candidate access INSN (with mem MEM), see if it has a suitable +// MEM_EXPR base (i.e. a tree decl) relative to which we can track the access. +// LFS is used as part of the key to the hash table, see track_access. +bool +ldp_bb_info::track_via_mem_expr (insn_info *insn, rtx mem, lfs_fields lfs) +{ + if (!MEM_EXPR (mem) || !MEM_OFFSET_KNOWN_P (mem)) + return false; + + poly_int64 offset; + tree base_expr = get_addr_base_and_unit_offset (MEM_EXPR (mem), + &offset); + if (!base_expr || !DECL_P (base_expr)) + return false; + + offset += MEM_OFFSET (mem); + + const machine_mode mem_mode = GET_MODE (mem); + const HOST_WIDE_INT mem_size = GET_MODE_SIZE (mem_mode).to_constant (); + + // Punt on misaligned offsets. LDP/STP instructions require offsets to be a + // multiple of the access size, and we believe that misaligned offsets on + // MEM_EXPR bases are likely to lead to misaligned offsets w.r.t. RTL bases. + if (!multiple_p (offset, mem_size)) + return false; + + const auto key = std::make_pair (base_expr, encode_lfs (lfs)); + access_group &group = expr_map.get_or_insert (key, NULL); + auto alloc = [&](access_record *access) { return node_alloc (access); }; + group.track (alloc, offset, insn); + + if (dump_file) + { + fprintf (dump_file, "[bb %u] tracking insn %d via ", + m_bb->index (), insn->uid ()); + print_node_brief (dump_file, "mem expr", base_expr, 0); + fprintf (dump_file, " [L=%d FP=%d, %smode, off=", + lfs.load_p, lfs.fpsimd_p, mode_name[mem_mode]); + print_dec (offset, dump_file); + fprintf (dump_file, "]\n"); + } + + return true; +} + +// Main function to begin pair discovery. Given a memory access INSN, +// determine whether it could be a candidate for fusing into an ldp/stp, +// and if so, track it in the appropriate data structure for this basic +// block. LOAD_P is true if the access is a load, and MEM is the mem +// rtx that occurs in INSN. +void +ldp_bb_info::track_access (insn_info *insn, bool load_p, rtx mem) +{ + // We can't combine volatile MEMs, so punt on these. + if (MEM_VOLATILE_P (mem)) + return; + + // Ignore writeback accesses if the param says to do so. + if (!aarch64_ldp_writeback + && GET_RTX_CLASS (GET_CODE (XEXP (mem, 0))) == RTX_AUTOINC) + return; + + const machine_mode mem_mode = GET_MODE (mem); + if (!ldp_operand_mode_ok_p (mem_mode)) + return; + + // Note ldp_operand_mode_ok_p already rejected VL modes. + const HOST_WIDE_INT mem_size = GET_MODE_SIZE (mem_mode).to_constant (); + + rtx reg_op = XEXP (PATTERN (insn->rtl ()), !load_p); + + // We want to segregate FP/SIMD accesses from GPR accesses. + // + // Before RA, we use the modes, noting that stores of constant zero + // operands use GPRs (even in non-integer modes). After RA, we use + // the hard register numbers. + const bool fpsimd_op_p + = reload_completed + ? (REG_P (reg_op) && FP_REGNUM_P (REGNO (reg_op))) + : (GET_MODE_CLASS (mem_mode) != MODE_INT + && (load_p || !aarch64_const_zero_rtx_p (reg_op))); + + const lfs_fields lfs = { load_p, fpsimd_op_p, mem_size }; + + if (track_via_mem_expr (insn, mem, lfs)) + return; + + poly_int64 mem_off; + rtx addr = XEXP (mem, 0); + const bool autoinc_p = GET_RTX_CLASS (GET_CODE (addr)) == RTX_AUTOINC; + rtx base = ldp_strip_offset (mem, &mem_off); + if (!REG_P (base)) + return; + + // Need to calculate two (possibly different) offsets: + // - Offset at which the access occurs. + // - Offset of the new base def. 
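+  //
+  // These differ for auto-increment accesses: e.g. for a post-increment
+  // access such as "ldr x0, [x1], #8" the access itself is at offset 0
+  // from the incoming base, while the new def of x1 is at offset 8.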
+ poly_int64 access_off; + if (autoinc_p && any_post_modify_p (addr)) + access_off = 0; + else + access_off = mem_off; + + poly_int64 new_def_off = mem_off; + + // Punt on accesses relative to the eliminable regs: since we don't + // know the elimination offset pre-RA, we should postpone forming + // pairs on such accesses until after RA. + if (!reload_completed + && (REGNO (base) == FRAME_POINTER_REGNUM + || REGNO (base) == ARG_POINTER_REGNUM)) + return; + + // Now need to find def of base register. + def_info *base_def; + use_info *base_use = find_access (insn->uses (), REGNO (base)); + gcc_assert (base_use); + base_def = base_use->def (); + if (!base_def) + { + if (dump_file) + fprintf (dump_file, + "base register (regno %d) of insn %d is undefined", + REGNO (base), insn->uid ()); + return; + } + + alt_base *canon_base = canon_base_map.get (base_def); + if (canon_base) + { + // Express this as the combined offset from the canonical base. + base_def = canon_base->base; + new_def_off += canon_base->offset; + access_off += canon_base->offset; + } + + if (autoinc_p) + { + auto def = find_access (insn->defs (), REGNO (base)); + gcc_assert (def); + + // Record that DEF = BASE_DEF + MEM_OFF. + if (dump_file) + { + pretty_printer pp; + pp_access (&pp, def, 0); + pp_string (&pp, " = "); + pp_access (&pp, base_def, 0); + fprintf (dump_file, "[bb %u] recording %s + ", + m_bb->index (), pp_formatted_text (&pp)); + print_dec (new_def_off, dump_file); + fprintf (dump_file, "\n"); + } + + alt_base base_rec { base_def, new_def_off }; + if (canon_base_map.put (def, base_rec)) + gcc_unreachable (); // Base defs should be unique. + } + + // Punt on misaligned offsets. LDP/STP require offsets to be a multiple of + // the access size. + if (!multiple_p (mem_off, mem_size)) + return; + + const auto key = std::make_pair (base_def, encode_lfs (lfs)); + access_group &group = def_map.get_or_insert (key, NULL); + auto alloc = [&](access_record *access) { return node_alloc (access); }; + group.track (alloc, access_off, insn); + + if (dump_file) + { + pretty_printer pp; + pp_access (&pp, base_def, 0); + + fprintf (dump_file, "[bb %u] tracking insn %d via %s", + m_bb->index (), insn->uid (), pp_formatted_text (&pp)); + fprintf (dump_file, + " [L=%d, WB=%d, FP=%d, %smode, off=", + lfs.load_p, autoinc_p, lfs.fpsimd_p, mode_name[mem_mode]); + print_dec (access_off, dump_file); + fprintf (dump_file, "]\n"); + } +} + +// Dummy predicate that never ignores any insns. +static bool no_ignore (insn_info *) { return false; } + +// Return the latest dataflow hazard before INSN. +// +// If IGNORE is non-NULL, this points to a sub-rtx which we should ignore for +// dataflow purposes. This is needed when considering changing the RTL base of +// an access discovered through a MEM_EXPR base. +// +// If IGNORE_INSN is non-NULL, we should further ignore any hazards arising +// from that insn. +// +// N.B. we ignore any defs/uses of memory here as we deal with that separately, +// making use of alias disambiguation. +static insn_info * +latest_hazard_before (insn_info *insn, rtx *ignore, + insn_info *ignore_insn = nullptr) +{ + insn_info *result = nullptr; + + // Return true if we registered the hazard. 
+ auto hazard = [&](insn_info *h) -> bool + { + gcc_checking_assert (*h < *insn); + if (h == ignore_insn) + return false; + + if (!result || *h > *result) + result = h; + + return true; + }; + + rtx pat = PATTERN (insn->rtl ()); + auto ignore_use = [&](use_info *u) + { + if (u->is_mem ()) + return true; + + return !refers_to_regno_p (u->regno (), u->regno () + 1, pat, ignore); + }; + + // Find defs of uses in INSN (RaW). + for (auto use : insn->uses ()) + if (!ignore_use (use) && use->def ()) + hazard (use->def ()->insn ()); + + // Find previous defs (WaW) or previous uses (WaR) of defs in INSN. + for (auto def : insn->defs ()) + { + if (def->is_mem ()) + continue; + + if (def->prev_def ()) + { + hazard (def->prev_def ()->insn ()); // WaW + + auto set = dyn_cast (def->prev_def ()); + if (set && set->has_nondebug_insn_uses ()) + for (auto use : set->reverse_nondebug_insn_uses ()) + if (use->insn () != insn && hazard (use->insn ())) // WaR + break; + } + + if (!HARD_REGISTER_NUM_P (def->regno ())) + continue; + + // Also need to check backwards for call clobbers (WaW). + for (auto call_group : def->ebb ()->call_clobbers ()) + { + if (!call_group->clobbers (def->resource ())) + continue; + + auto clobber_insn = prev_call_clobbers_ignoring (*call_group, + def->insn (), + no_ignore); + if (clobber_insn) + hazard (clobber_insn); + } + + } + + return result; +} + +// Return the first dataflow hazard after INSN. +// +// If IGNORE is non-NULL, this points to a sub-rtx which we should ignore for +// dataflow purposes. This is needed when considering changing the RTL base of +// an access discovered through a MEM_EXPR base. +// +// N.B. we ignore any defs/uses of memory here as we deal with that separately, +// making use of alias disambiguation. +static insn_info * +first_hazard_after (insn_info *insn, rtx *ignore) +{ + insn_info *result = nullptr; + auto hazard = [insn, &result](insn_info *h) + { + gcc_checking_assert (*h > *insn); + if (!result || *h < *result) + result = h; + }; + + rtx pat = PATTERN (insn->rtl ()); + auto ignore_use = [&](use_info *u) + { + if (u->is_mem ()) + return true; + + return !refers_to_regno_p (u->regno (), u->regno () + 1, pat, ignore); + }; + + for (auto def : insn->defs ()) + { + if (def->is_mem ()) + continue; + + if (def->next_def ()) + hazard (def->next_def ()->insn ()); // WaW + + auto set = dyn_cast (def); + if (set && set->has_nondebug_insn_uses ()) + hazard (set->first_nondebug_insn_use ()->insn ()); // RaW + + if (!HARD_REGISTER_NUM_P (def->regno ())) + continue; + + // Also check for call clobbers of this def (WaW). + for (auto call_group : def->ebb ()->call_clobbers ()) + { + if (!call_group->clobbers (def->resource ())) + continue; + + auto clobber_insn = next_call_clobbers_ignoring (*call_group, + def->insn (), + no_ignore); + if (clobber_insn) + hazard (clobber_insn); + } + } + + // Find any subsequent defs of uses in INSN (WaR). + for (auto use : insn->uses ()) + { + if (ignore_use (use)) + continue; + + if (use->def ()) + { + auto def = use->def ()->next_def (); + if (def && def->insn () == insn) + def = def->next_def (); + + if (def) + hazard (def->insn ()); + } + + if (!HARD_REGISTER_NUM_P (use->regno ())) + continue; + + // Also need to handle call clobbers of our uses (again WaR). + // + // See restrict_movement_for_uses_ignoring for why we don't + // need to check backwards for call clobbers. 
+ for (auto call_group : use->ebb ()->call_clobbers ()) + { + if (!call_group->clobbers (use->resource ())) + continue; + + auto clobber_insn = next_call_clobbers_ignoring (*call_group, + use->insn (), + no_ignore); + if (clobber_insn) + hazard (clobber_insn); + } + } + + return result; +} + +// Return true iff R1 and R2 overlap. +static bool +ranges_overlap_p (const insn_range_info &r1, const insn_range_info &r2) +{ + // If either range is empty, then their intersection is empty. + if (!r1 || !r2) + return false; + + // When do they not overlap? When one range finishes before the other + // starts, i.e. (*r1.last < *r2.first || *r2.last < *r1.first). + // Inverting this, we get the below. + return *r1.last >= *r2.first && *r2.last >= *r1.first; +} + +// Get the range of insns that def feeds. +static insn_range_info get_def_range (def_info *def) +{ + insn_info *last = def->next_def ()->insn ()->prev_nondebug_insn (); + return { def->insn (), last }; +} + +// Given a def (of memory), return the downwards range within which we +// can safely move this def. +static insn_range_info +def_downwards_move_range (def_info *def) +{ + auto range = get_def_range (def); + + auto set = dyn_cast (def); + if (!set || !set->has_any_uses ()) + return range; + + auto use = set->first_nondebug_insn_use (); + if (use) + range = move_earlier_than (range, use->insn ()); + + return range; +} + +// Given a def (of memory), return the upwards range within which we can +// safely move this def. +static insn_range_info +def_upwards_move_range (def_info *def) +{ + def_info *prev = def->prev_def (); + insn_range_info range { prev->insn (), def->insn () }; + + auto set = dyn_cast (prev); + if (!set || !set->has_any_uses ()) + return range; + + auto use = set->last_nondebug_insn_use (); + if (use) + range = move_later_than (range, use->insn ()); + + return range; +} + +// Given candidate store insns FIRST and SECOND, see if we can re-purpose one +// of them (together with its def of memory) for the stp insn. If so, return +// that insn. Otherwise, return null. +static insn_info * +decide_stp_strategy (insn_info *first, + insn_info *second, + const insn_range_info &move_range) +{ + def_info * const defs[2] = { + memory_access (first->defs ()), + memory_access (second->defs ()) + }; + + if (move_range.includes (first) + || ranges_overlap_p (move_range, def_downwards_move_range (defs[0]))) + return first; + + if (move_range.includes (second) + || ranges_overlap_p (move_range, def_upwards_move_range (defs[1]))) + return second; + + return nullptr; +} + +// Generate the RTL pattern for a "tombstone"; used temporarily during this pass +// to replace stores that are marked for deletion where we can't immediately +// delete the store (since there are uses of mem hanging off the store). +// +// These are deleted at the end of the pass and uses re-parented appropriately +// at this point. +static rtx +gen_tombstone (void) +{ + return gen_rtx_CLOBBER (VOIDmode, + gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode))); +} + +// Given a pair mode MODE, return a canonical mode to be used for a single +// operand of such a pair. Currently we only use this when promoting a +// non-writeback pair into a writeback pair, as it isn't otherwise clear +// which mode to use when storing a modeless CONST_INT. 
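+//
+// For example, a V2x8QI pair mode describes two 8-byte transfers, so each
+// operand gets DImode; a V2x16QI pair mode gives V16QImode operands.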
+static machine_mode +aarch64_operand_mode_for_pair_mode (machine_mode mode) +{ + switch (mode) + { + case E_V2x4QImode: + return SImode; + case E_V2x8QImode: + return DImode; + case E_V2x16QImode: + return V16QImode; + default: + gcc_unreachable (); + } +} + +// Go through the reg notes rooted at NOTE, dropping those that we should drop, +// and preserving those that we want to keep by prepending them to (and +// returning) RESULT. EH_REGION is used to make sure we have at most one +// REG_EH_REGION note in the resulting list. +static rtx +filter_notes (rtx note, rtx result, bool *eh_region) +{ + for (; note; note = XEXP (note, 1)) + { + switch (REG_NOTE_KIND (note)) + { + case REG_DEAD: + // REG_DEAD notes aren't required to be maintained. + case REG_EQUAL: + case REG_EQUIV: + case REG_UNUSED: + case REG_NOALIAS: + // These can all be dropped. For REG_EQU{AL,IV} they cannot apply to + // non-single_set insns, and REG_UNUSED is re-computed by RTl-SSA, see + // rtl-ssa/changes.cc:update_notes. + // + // Similarly, REG_NOALIAS cannot apply to a parallel. + case REG_INC: + // When we form the pair insn, the reg update is implemented + // as just another SET in the parallel, so isn't really an + // auto-increment in the RTL sense, hence we drop the note. + break; + case REG_EH_REGION: + gcc_assert (!*eh_region); + *eh_region = true; + result = alloc_reg_note (REG_EH_REGION, XEXP (note, 0), result); + break; + case REG_CFA_DEF_CFA: + case REG_CFA_OFFSET: + case REG_CFA_RESTORE: + result = alloc_reg_note (REG_NOTE_KIND (note), + copy_rtx (XEXP (note, 0)), + result); + break; + default: + // Unexpected REG_NOTE kind. + gcc_unreachable (); + } + } + + return result; +} + +// Ensure we have a sensible scheme for combining REG_NOTEs +// given two candidate insns I1 and I2 where *I1 < *I2. +static rtx +combine_reg_notes (insn_info *i1, insn_info *i2) +{ + bool found_eh_region = false; + rtx result = NULL_RTX; + result = filter_notes (REG_NOTES (i2->rtl ()), result, &found_eh_region); + return filter_notes (REG_NOTES (i1->rtl ()), result, &found_eh_region); +} + +// Given two memory accesses in PATS, at least one of which is of a +// writeback form, extract two non-writeback memory accesses addressed +// relative to the initial value of the base register, and output these +// in PATS. Return an rtx that represents the overall change to the +// base register. +static rtx +extract_writebacks (bool load_p, rtx pats[2], int changed) +{ + rtx base_reg = NULL_RTX; + poly_int64 current_offset = 0; + + poly_int64 offsets[2]; + + for (int i = 0; i < 2; i++) + { + rtx mem = XEXP (pats[i], load_p); + rtx reg = XEXP (pats[i], !load_p); + + rtx addr = XEXP (mem, 0); + const bool autoinc_p = GET_RTX_CLASS (GET_CODE (addr)) == RTX_AUTOINC; + + poly_int64 offset; + rtx this_base = ldp_strip_offset (mem, &offset); + gcc_assert (REG_P (this_base)); + if (base_reg) + gcc_assert (rtx_equal_p (base_reg, this_base)); + else + base_reg = this_base; + + // If we changed base for the current insn, then we already + // derived the correct mem for this insn from the effective + // address of the other access. + if (i == changed) + { + gcc_checking_assert (!autoinc_p); + offsets[i] = offset; + continue; + } + + if (autoinc_p && any_pre_modify_p (addr)) + current_offset += offset; + + poly_int64 this_off = current_offset; + if (!autoinc_p) + this_off += offset; + + offsets[i] = this_off; + rtx new_mem = change_address (mem, GET_MODE (mem), + plus_constant (GET_MODE (base_reg), + base_reg, this_off)); + pats[i] = load_p + ? 
gen_rtx_SET (reg, new_mem) + : gen_rtx_SET (new_mem, reg); + + if (autoinc_p && any_post_modify_p (addr)) + current_offset += offset; + } + + if (known_eq (current_offset, 0)) + return NULL_RTX; + + return gen_rtx_SET (base_reg, plus_constant (GET_MODE (base_reg), + base_reg, current_offset)); +} + +// INSNS contains either {nullptr, pair insn} (when promoting an existing +// non-writeback pair) or contains the candidate insns used to form the pair +// (when fusing a new pair). +// +// PAIR_RANGE specifies where we want to form the final pair. +// INITIAL_OFFSET gives the current base offset for the pair, +// INITIAL_WRITEBACK says whether either of the initial accesses had +// writeback. +// ACCESS_SIZE gives the access size for a single arm of the pair. +// BASE_DEF gives the initial def of the base register consumed by the pair. +// +// Given the above, this function looks for a trailing destructive update of the +// base register. If there is one, we choose the first such update after +// INSNS[1] that is still in the same BB as our pair. We return the +// new def in *ADD_DEF and the resulting writeback effect in +// *WRITEBACK_EFFECT. +static insn_info * +find_trailing_add (insn_info *insns[2], + const insn_range_info &pair_range, + int initial_writeback, + rtx *writeback_effect, + def_info **add_def, + def_info *base_def, + poly_int64 initial_offset, + unsigned access_size) +{ + insn_info *pair_dst = pair_range.singleton (); + gcc_assert (pair_dst); + + def_info *def = base_def->next_def (); + + // In the case that either of the initial pair insns had writeback, + // then there will be intervening defs of the base register. + // Skip over these. + for (int i = 0; i < 2; i++) + if (initial_writeback & (1 << i)) + { + gcc_assert (def->insn () == insns[i]); + def = def->next_def (); + } + + if (!def || def->bb () != pair_dst->bb ()) + return nullptr; + + // DEF should now be the first def of the base register after PAIR_DST. + insn_info *cand = def->insn (); + gcc_assert (*cand > *pair_dst); + + const auto base_regno = base_def->regno (); + + // If CAND doesn't also use our base register, + // it can't destructively update it. + if (!find_access (cand->uses (), base_regno)) + return nullptr; + + auto rti = cand->rtl (); + + if (!INSN_P (rti)) + return nullptr; + + auto pat = PATTERN (rti); + if (GET_CODE (pat) != SET) + return nullptr; + + auto dest = XEXP (pat, 0); + if (!REG_P (dest) || REGNO (dest) != base_regno) + return nullptr; + + poly_int64 offset; + rtx rhs_base = strip_offset (XEXP (pat, 1), &offset); + if (!REG_P (rhs_base) + || REGNO (rhs_base) != base_regno + || !offset.is_constant ()) + return nullptr; + + // If the initial base offset is zero, we can handle any add offset + // (post-inc). Otherwise, we require the offsets to match (pre-inc). 
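+  //
+  // For example:
+  //   stp w1, w2, [x0]     ; add x0, x0, #8  -->  stp w1, w2, [x0], #8
+  //   stp w1, w2, [x0, #8] ; add x0, x0, #8  -->  stp w1, w2, [x0, #8]!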
+ if (!known_eq (initial_offset, 0) && !known_eq (offset, initial_offset)) + return nullptr; + + auto off_hwi = offset.to_constant (); + + if (off_hwi % access_size != 0) + return nullptr; + + off_hwi /= access_size; + + if (off_hwi < LDP_MIN_IMM || off_hwi > LDP_MAX_IMM) + return nullptr; + + auto dump_prefix = [&]() + { + if (!insns[0]) + fprintf (dump_file, "existing pair i%d: ", insns[1]->uid ()); + else + fprintf (dump_file, " (%d,%d)", + insns[0]->uid (), insns[1]->uid ()); + }; + + insn_info *hazard = latest_hazard_before (cand, nullptr, insns[1]); + if (!hazard || *hazard <= *pair_dst) + { + if (dump_file) + { + dump_prefix (); + fprintf (dump_file, + "folding in trailing add (%d) to use writeback form\n", + cand->uid ()); + } + + *add_def = def; + *writeback_effect = copy_rtx (pat); + return cand; + } + + if (dump_file) + { + dump_prefix (); + fprintf (dump_file, + "can't fold in trailing add (%d), hazard = %d\n", + cand->uid (), hazard->uid ()); + } + + return nullptr; +} + +// We just emitted a tombstone with uid UID, track it in a bitmap for +// this BB so we can easily identify it later when cleaning up tombstones. +void +ldp_bb_info::track_tombstone (int uid) +{ + if (!m_emitted_tombstone) + { + // Lazily initialize the bitmap for tracking tombstone insns. + bitmap_obstack_initialize (&m_bitmap_obstack); + bitmap_initialize (&m_tombstone_bitmap, &m_bitmap_obstack); + m_emitted_tombstone = true; + } + + if (!bitmap_set_bit (&m_tombstone_bitmap, uid)) + gcc_unreachable (); // Bit should have changed. +} + +// Try and actually fuse the pair given by insns I1 and I2. +bool +ldp_bb_info::fuse_pair (bool load_p, + unsigned access_size, + int writeback, + insn_info *i1, insn_info *i2, + base_cand &base, + const insn_range_info &move_range) +{ + auto attempt = crtl->ssa->new_change_attempt (); + + auto make_change = [&attempt](insn_info *insn) + { + return crtl->ssa->change_alloc (attempt, insn); + }; + auto make_delete = [&attempt](insn_info *insn) + { + return crtl->ssa->change_alloc (attempt, + insn, + insn_change::DELETE); + }; + + insn_info *first = (*i1 < *i2) ? i1 : i2; + insn_info *second = (first == i1) ? i2 : i1; + + insn_info *insns[2] = { first, second }; + + auto_vec changes (4); + auto_vec tombstone_uids (2); + + rtx pats[2] = { + PATTERN (first->rtl ()), + PATTERN (second->rtl ()) + }; + + use_array input_uses[2] = { first->uses (), second->uses () }; + def_array input_defs[2] = { first->defs (), second->defs () }; + + int changed_insn = -1; + if (base.from_insn != -1) + { + // If we're not already using a shared base, we need + // to re-write one of the accesses to use the base from + // the other insn. + gcc_checking_assert (base.from_insn == 0 || base.from_insn == 1); + changed_insn = !base.from_insn; + + rtx base_pat = pats[base.from_insn]; + rtx change_pat = pats[changed_insn]; + rtx base_mem = XEXP (base_pat, load_p); + rtx change_mem = XEXP (change_pat, load_p); + + const bool lower_base_p = (insns[base.from_insn] == i1); + HOST_WIDE_INT adjust_amt = access_size; + if (!lower_base_p) + adjust_amt *= -1; + + rtx change_reg = XEXP (change_pat, !load_p); + machine_mode mode_for_mem = GET_MODE (change_mem); + rtx effective_base = drop_writeback (base_mem); + rtx new_mem = adjust_address_nv (effective_base, + mode_for_mem, + adjust_amt); + rtx new_set = load_p + ? 
gen_rtx_SET (change_reg, new_mem) + : gen_rtx_SET (new_mem, change_reg); + + pats[changed_insn] = new_set; + + auto keep_use = [&](use_info *u) + { + return refers_to_regno_p (u->regno (), u->regno () + 1, + change_pat, &XEXP (change_pat, load_p)); + }; + + // Drop any uses that only occur in the old address. + input_uses[changed_insn] = filter_accesses (attempt, + input_uses[changed_insn], + keep_use); + } + + rtx writeback_effect = NULL_RTX; + if (writeback) + writeback_effect = extract_writebacks (load_p, pats, changed_insn); + + const auto base_regno = base.def->regno (); + + if (base.from_insn == -1 && (writeback & 1)) + { + // If the first of the candidate insns had a writeback form, we'll need to + // drop the use of the updated base register from the second insn's uses. + // + // N.B. we needn't worry about the base register occurring as a store + // operand, as we checked that there was no non-address true dependence + // between the insns in try_fuse_pair. + gcc_checking_assert (find_access (input_uses[1], base_regno)); + input_uses[1] = check_remove_regno_access (attempt, + input_uses[1], + base_regno); + } + + // Go through and drop uses that only occur in register notes, + // as we won't be preserving those. + for (int i = 0; i < 2; i++) + { + auto rti = insns[i]->rtl (); + if (!REG_NOTES (rti)) + continue; + + input_uses[i] = remove_note_accesses (attempt, input_uses[i]); + } + + // Edge case: if the first insn is a writeback load and the + // second insn is a non-writeback load which transfers into the base + // register, then we should drop the writeback altogether as the + // update of the base register from the second load should prevail. + // + // For example: + // ldr x2, [x1], #8 + // ldr x1, [x1] + // --> + // ldp x2, x1, [x1] + if (writeback == 1 + && load_p + && find_access (input_defs[1], base_regno)) + { + if (dump_file) + fprintf (dump_file, + " ldp: i%d has wb but subsequent i%d has non-wb " + "update of base (r%d), dropping wb\n", + insns[0]->uid (), insns[1]->uid (), base_regno); + gcc_assert (writeback_effect); + writeback_effect = NULL_RTX; + } + + // So far the patterns have been in instruction order, + // now we want them in offset order. + if (i1 != first) + std::swap (pats[0], pats[1]); + + poly_int64 offsets[2]; + for (int i = 0; i < 2; i++) + { + rtx mem = XEXP (pats[i], load_p); + gcc_checking_assert (MEM_P (mem)); + rtx base = strip_offset (XEXP (mem, 0), offsets + i); + gcc_checking_assert (REG_P (base)); + gcc_checking_assert (base_regno == REGNO (base)); + } + + // If either of the original insns had writeback, but the resulting pair insn + // does not (can happen e.g. in the ldp edge case above, or if the writeback + // effects cancel out), then drop the def(s) of the base register as + // appropriate. + // + // Also drop the first def in the case that both of the original insns had + // writeback. The second def could well have uses, but the first def should + // only be used by the second insn (and we dropped that use above). + for (int i = 0; i < 2; i++) + if ((!writeback_effect && (writeback & (1 << i))) + || (i == 0 && writeback == 3)) + input_defs[i] = check_remove_regno_access (attempt, + input_defs[i], + base_regno); + + // If we don't currently have a writeback pair, and we don't have + // a load that clobbers the base register, look for a trailing destructive + // update of the base register and try and fold it in to make this into a + // writeback pair. 
+ insn_info *trailing_add = nullptr; + if (aarch64_ldp_writeback > 1 + && !writeback_effect + && (!load_p || (!refers_to_regno_p (base_regno, base_regno + 1, + XEXP (pats[0], 0), nullptr) + && !refers_to_regno_p (base_regno, base_regno + 1, + XEXP (pats[1], 0), nullptr)))) + { + def_info *add_def; + trailing_add = find_trailing_add (insns, move_range, writeback, + &writeback_effect, + &add_def, base.def, offsets[0], + access_size); + if (trailing_add) + { + // The def of the base register from the trailing add should prevail. + input_defs[0] = insert_access (attempt, add_def, input_defs[0]); + gcc_assert (input_defs[0].is_valid ()); + } + } + + // Now that we know what base mem we're going to use, check if it's OK + // with the ldp/stp policy. + rtx first_mem = XEXP (pats[0], load_p); + if (!aarch64_mem_ok_with_ldpstp_policy_model (first_mem, + load_p, + GET_MODE (first_mem))) + { + if (dump_file) + fprintf (dump_file, "punting on pair (%d,%d), ldp/stp policy says no\n", + i1->uid (), i2->uid ()); + return false; + } + + rtx reg_notes = combine_reg_notes (first, second); + + rtx pair_pat; + if (writeback_effect) + { + auto patvec = gen_rtvec (3, writeback_effect, pats[0], pats[1]); + pair_pat = gen_rtx_PARALLEL (VOIDmode, patvec); + } + else if (load_p) + pair_pat = aarch64_gen_load_pair (XEXP (pats[0], 0), + XEXP (pats[1], 0), + XEXP (pats[0], 1)); + else + pair_pat = aarch64_gen_store_pair (XEXP (pats[0], 0), + XEXP (pats[0], 1), + XEXP (pats[1], 1)); + + insn_change *pair_change = nullptr; + auto set_pair_pat = [pair_pat,reg_notes](insn_change *change) { + rtx_insn *rti = change->insn ()->rtl (); + gcc_assert (validate_unshare_change (rti, &PATTERN (rti), pair_pat, + true)); + gcc_assert (validate_change (rti, ®_NOTES (rti), + reg_notes, true)); + }; + + if (load_p) + { + changes.quick_push (make_delete (first)); + pair_change = make_change (second); + changes.quick_push (pair_change); + + pair_change->move_range = move_range; + pair_change->new_defs = merge_access_arrays (attempt, + input_defs[0], + input_defs[1]); + gcc_assert (pair_change->new_defs.is_valid ()); + + pair_change->new_uses + = merge_access_arrays (attempt, + drop_memory_access (input_uses[0]), + drop_memory_access (input_uses[1])); + gcc_assert (pair_change->new_uses.is_valid ()); + set_pair_pat (pair_change); + } + else + { + insn_info *store_to_change = decide_stp_strategy (first, second, + move_range); + + if (store_to_change && dump_file) + fprintf (dump_file, " stp: re-purposing store %d\n", + store_to_change->uid ()); + + insn_change *change; + for (int i = 0; i < 2; i++) + { + change = make_change (insns[i]); + if (insns[i] == store_to_change) + { + set_pair_pat (change); + change->new_uses = merge_access_arrays (attempt, + input_uses[0], + input_uses[1]); + auto d1 = drop_memory_access (input_defs[0]); + auto d2 = drop_memory_access (input_defs[1]); + change->new_defs = merge_access_arrays (attempt, d1, d2); + gcc_assert (change->new_defs.is_valid ()); + def_info *stp_def = memory_access (store_to_change->defs ()); + change->new_defs = insert_access (attempt, + stp_def, + change->new_defs); + gcc_assert (change->new_defs.is_valid ()); + change->move_range = move_range; + pair_change = change; + } + else + { + // Note that we are turning this insn into a tombstone, + // we need to keep track of these if we go ahead with the + // change. 
+ tombstone_uids.quick_push (insns[i]->uid ()); + rtx_insn *rti = insns[i]->rtl (); + gcc_assert (validate_change (rti, &PATTERN (rti), + gen_tombstone (), true)); + gcc_assert (validate_change (rti, ®_NOTES (rti), + NULL_RTX, true)); + change->new_uses = use_array (nullptr, 0); + } + gcc_assert (change->new_uses.is_valid ()); + changes.quick_push (change); + } + + if (!store_to_change) + { + // Tricky case. Cannot re-purpose existing insns for stp. + // Need to insert new insn. + if (dump_file) + fprintf (dump_file, + " stp fusion: cannot re-purpose candidate stores\n"); + + auto new_insn = crtl->ssa->create_insn (attempt, INSN, pair_pat); + change = make_change (new_insn); + change->move_range = move_range; + change->new_uses = merge_access_arrays (attempt, + input_uses[0], + input_uses[1]); + gcc_assert (change->new_uses.is_valid ()); + + auto d1 = drop_memory_access (input_defs[0]); + auto d2 = drop_memory_access (input_defs[1]); + change->new_defs = merge_access_arrays (attempt, d1, d2); + gcc_assert (change->new_defs.is_valid ()); + + auto new_set = crtl->ssa->create_set (attempt, new_insn, memory); + change->new_defs = insert_access (attempt, new_set, + change->new_defs); + gcc_assert (change->new_defs.is_valid ()); + changes.safe_insert (1, change); + pair_change = change; + } + } + + if (trailing_add) + changes.quick_push (make_delete (trailing_add)); + + auto n_changes = changes.length (); + gcc_checking_assert (n_changes >= 2 && n_changes <= 4); + + + auto is_changing = insn_is_changing (changes); + for (unsigned i = 0; i < n_changes; i++) + gcc_assert (rtl_ssa::restrict_movement_ignoring (*changes[i], is_changing)); + + // Check the pair pattern is recog'd. + if (!rtl_ssa::recog_ignoring (attempt, *pair_change, is_changing)) + { + if (dump_file) + fprintf (dump_file, " failed to form pair, recog failed\n"); + + // Free any reg notes we allocated. + while (reg_notes) + { + rtx next = XEXP (reg_notes, 1); + free_EXPR_LIST_node (reg_notes); + reg_notes = next; + } + cancel_changes (0); + return false; + } + + gcc_assert (crtl->ssa->verify_insn_changes (changes)); + + confirm_change_group (); + crtl->ssa->change_insns (changes); + + gcc_checking_assert (tombstone_uids.length () <= 2); + for (auto uid : tombstone_uids) + track_tombstone (uid); + + return true; +} + +// Return true if STORE_INSN may modify mem rtx MEM. Make sure we keep +// within our BUDGET for alias analysis. +static bool +store_modifies_mem_p (rtx mem, insn_info *store_insn, int &budget) +{ + if (!budget) + { + if (dump_file) + { + fprintf (dump_file, + "exceeded budget, assuming store %d aliases with mem ", + store_insn->uid ()); + print_simple_rtl (dump_file, mem); + fprintf (dump_file, "\n"); + } + + return true; + } + + budget--; + return memory_modified_in_insn_p (mem, store_insn->rtl ()); +} + +// Return true if LOAD may be modified by STORE. Make sure we keep +// within our BUDGET for alias analysis. +static bool +load_modified_by_store_p (insn_info *load, + insn_info *store, + int &budget) +{ + gcc_checking_assert (budget >= 0); + + if (!budget) + { + if (dump_file) + { + fprintf (dump_file, + "exceeded budget, assuming load %d aliases with store %d\n", + load->uid (), store->uid ()); + } + return true; + } + + // It isn't safe to re-order stores over calls. + if (CALL_P (load->rtl ())) + return true; + + budget--; + return modified_in_p (PATTERN (load->rtl ()), store->rtl ()); +} + +// Virtual base class for load/store walkers used in alias analysis. 
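+//
+// conflict_p reports whether the walker's current insn may alias the
+// candidate access (spending from the remaining alias-check budget),
+// insn gives that insn, valid says whether the walker still has insns
+// to consider before its limit, and advance moves on to the next one.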
+struct alias_walker +{ + virtual bool conflict_p (int &budget) const = 0; + virtual insn_info *insn () const = 0; + virtual bool valid () const = 0; + virtual void advance () = 0; +}; + +// Implement some common functionality used by both store_walker +// and load_walker. +template +class def_walker : public alias_walker +{ +protected: + using def_iter_t = typename std::conditional::type; + + static use_info *start_use_chain (def_iter_t &def_iter) + { + set_info *set = nullptr; + for (; *def_iter; def_iter++) + { + set = dyn_cast (*def_iter); + if (!set) + continue; + + use_info *use = reverse + ? set->last_nondebug_insn_use () + : set->first_nondebug_insn_use (); + + if (use) + return use; + } + + return nullptr; + } + + def_iter_t def_iter; + insn_info *limit; + def_walker (def_info *def, insn_info *limit) : + def_iter (def), limit (limit) {} + + virtual bool iter_valid () const { return *def_iter; } + +public: + insn_info *insn () const override { return (*def_iter)->insn (); } + void advance () override { def_iter++; } + bool valid () const override final + { + if (!iter_valid ()) + return false; + + if (reverse) + return *(insn ()) > *limit; + else + return *(insn ()) < *limit; + } +}; + +// alias_walker that iterates over stores. +template +class store_walker : public def_walker +{ + rtx cand_mem; + InsnPredicate tombstone_p; + +public: + store_walker (def_info *mem_def, rtx mem, insn_info *limit_insn, + InsnPredicate tombstone_fn) : + def_walker (mem_def, limit_insn), + cand_mem (mem), tombstone_p (tombstone_fn) {} + + bool conflict_p (int &budget) const override final + { + if (tombstone_p (this->insn ())) + return false; + + return store_modifies_mem_p (cand_mem, this->insn (), budget); + } +}; + +// alias_walker that iterates over loads. +template +class load_walker : public def_walker +{ + using Base = def_walker; + using use_iter_t = typename std::conditional::type; + + use_iter_t use_iter; + insn_info *cand_store; + + bool iter_valid () const override final { return *use_iter; } + +public: + void advance () override final + { + use_iter++; + if (*use_iter) + return; + this->def_iter++; + use_iter = Base::start_use_chain (this->def_iter); + } + + insn_info *insn () const override final + { + return (*use_iter)->insn (); + } + + bool conflict_p (int &budget) const override final + { + return load_modified_by_store_p (insn (), cand_store, budget); + } + + load_walker (def_info *def, insn_info *store, insn_info *limit_insn) + : Base (def, limit_insn), + use_iter (Base::start_use_chain (this->def_iter)), + cand_store (store) {} +}; + +// Process our alias_walkers in a round-robin fashion, proceeding until +// nothing more can be learned from alias analysis. +// +// We try to maintain the invariant that if a walker becomes invalid, we +// set its pointer to null. 
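+//
+// For a load pair we only need the two store walkers, since loads can
+// only conflict with intervening stores.  For a store pair we also walk
+// the loads hanging off the intervening memory defs, so all four walkers
+// are used.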
+static void +do_alias_analysis (insn_info *alias_hazards[4], + alias_walker *walkers[4], + bool load_p) +{ + const int n_walkers = 2 + (2 * !load_p); + int budget = aarch64_ldp_alias_check_limit; + + auto next_walker = [walkers,n_walkers](int current) -> int { + for (int j = 1; j <= n_walkers; j++) + { + int idx = (current + j) % n_walkers; + if (walkers[idx]) + return idx; + } + return -1; + }; + + int i = -1; + for (int j = 0; j < n_walkers; j++) + { + alias_hazards[j] = nullptr; + if (!walkers[j]) + continue; + + if (!walkers[j]->valid ()) + walkers[j] = nullptr; + else if (i == -1) + i = j; + } + + while (i >= 0) + { + int insn_i = i % 2; + int paired_i = (i & 2) + !insn_i; + int pair_fst = (i & 2); + int pair_snd = (i & 2) + 1; + + if (walkers[i]->conflict_p (budget)) + { + alias_hazards[i] = walkers[i]->insn (); + + // We got an aliasing conflict for this {load,store} walker, + // so we don't need to walk any further. + walkers[i] = nullptr; + + // If we have a pair of alias conflicts that prevent + // forming the pair, stop. There's no need to do further + // analysis. + if (alias_hazards[paired_i] + && (*alias_hazards[pair_fst] <= *alias_hazards[pair_snd])) + return; + + if (!load_p) + { + int other_pair_fst = (pair_fst ? 0 : 2); + int other_paired_i = other_pair_fst + !insn_i; + + int x_pair_fst = (i == pair_fst) ? i : other_paired_i; + int x_pair_snd = (i == pair_fst) ? other_paired_i : i; + + // Similarly, handle the case where we have a {load,store} + // or {store,load} alias hazard pair that prevents forming + // the pair. + if (alias_hazards[other_paired_i] + && *alias_hazards[x_pair_fst] <= *alias_hazards[x_pair_snd]) + return; + } + } + + if (walkers[i]) + { + walkers[i]->advance (); + + if (!walkers[i]->valid ()) + walkers[i] = nullptr; + } + + i = next_walker (i); + } +} + +// Given INSNS (in program order) which are known to be adjacent, look +// to see if either insn has a suitable RTL (register) base that we can +// use to form a pair. Push these to BASE_CANDS if we find any. CAND_MEMs +// gives the relevant mems from the candidate insns, ACCESS_SIZE gives the +// size of a single candidate access, and REVERSED says whether the accesses +// are inverted in offset order. +// +// Returns an integer where bit (1 << i) is set if INSNS[i] uses writeback +// addressing. +static int +get_viable_bases (insn_info *insns[2], + vec &base_cands, + rtx cand_mems[2], + unsigned access_size, + bool reversed) +{ + // We discovered this pair through a common base. Need to ensure that + // we have a common base register that is live at both locations. + def_info *base_defs[2] = {}; + int writeback = 0; + for (int i = 0; i < 2; i++) + { + const bool is_lower = (i == reversed); + poly_int64 poly_off; + rtx base = ldp_strip_offset (cand_mems[i], &poly_off); + if (GET_RTX_CLASS (GET_CODE (XEXP (cand_mems[i], 0))) == RTX_AUTOINC) + writeback |= (1 << i); + + if (!REG_P (base) || !poly_off.is_constant ()) + continue; + + // Punt on accesses relative to eliminable regs. Since we don't know the + // elimination offset pre-RA, we should postpone forming pairs on such + // accesses until after RA. + if (!reload_completed + && (REGNO (base) == FRAME_POINTER_REGNUM + || REGNO (base) == ARG_POINTER_REGNUM)) + continue; + + HOST_WIDE_INT base_off = poly_off.to_constant (); + + // It should be unlikely that we ever punt here, since MEM_EXPR offset + // alignment should be a good proxy for register offset alignment. 
+ if (base_off % access_size != 0) + { + if (dump_file) + fprintf (dump_file, + "base not viable, offset misaligned (insn %d)\n", + insns[i]->uid ()); + continue; + } + + base_off /= access_size; + + if (!is_lower) + base_off--; + + if (base_off < LDP_MIN_IMM || base_off > LDP_MAX_IMM) + continue; + + for (auto use : insns[i]->uses ()) + if (use->is_reg () && use->regno () == REGNO (base)) + { + base_defs[i] = use->def (); + break; + } + } + + if (!base_defs[0] && !base_defs[1]) + { + if (dump_file) + fprintf (dump_file, "no viable base register for pair (%d,%d)\n", + insns[0]->uid (), insns[1]->uid ()); + return writeback; + } + + for (int i = 0; i < 2; i++) + if ((writeback & (1 << i)) && !base_defs[i]) + { + if (dump_file) + fprintf (dump_file, "insn %d has writeback but base isn't viable\n", + insns[i]->uid ()); + return writeback; + } + + if (writeback == 3 + && base_defs[0]->regno () != base_defs[1]->regno ()) + { + if (dump_file) + fprintf (dump_file, + "pair (%d,%d): double writeback with distinct regs (%d,%d): " + "punting\n", + insns[0]->uid (), insns[1]->uid (), + base_defs[0]->regno (), base_defs[1]->regno ()); + return writeback; + } + + if (base_defs[0] && base_defs[1] + && base_defs[0]->regno () == base_defs[1]->regno ()) + { + // Easy case: insns already share the same base reg. + base_cands.quick_push (base_defs[0]); + return writeback; + } + + // Otherwise, we know that one of the bases must change. + // + // Note that if there is writeback we must use the writeback base + // (we know now there is exactly one). + for (int i = 0; i < 2; i++) + if (base_defs[i] && (!writeback || (writeback & (1 << i)))) + base_cands.quick_push (base_cand { base_defs[i], i }); + + return writeback; +} + +// Given two adjacent memory accesses of the same size, I1 and I2, try +// and see if we can merge them into a ldp or stp. +// +// ACCESS_SIZE gives the (common) size of a single access, LOAD_P is true +// if the accesses are both loads, otherwise they are both stores. 
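+//
+// The overall approach is to pick a viable base register (rewriting one
+// access to use the other's base if necessary), compute the dataflow and
+// alias hazards that limit how far each access can move, and then, if a
+// legal placement exists, call fuse_pair to perform the transformation.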
+bool +ldp_bb_info::try_fuse_pair (bool load_p, unsigned access_size, + insn_info *i1, insn_info *i2) +{ + if (dump_file) + fprintf (dump_file, "analyzing pair (load=%d): (%d,%d)\n", + load_p, i1->uid (), i2->uid ()); + + insn_info *insns[2]; + bool reversed = false; + if (*i1 < *i2) + { + insns[0] = i1; + insns[1] = i2; + } + else + { + insns[0] = i2; + insns[1] = i1; + reversed = true; + } + + rtx cand_mems[2]; + rtx reg_ops[2]; + rtx pats[2]; + for (int i = 0; i < 2; i++) + { + pats[i] = PATTERN (insns[i]->rtl ()); + cand_mems[i] = XEXP (pats[i], load_p); + reg_ops[i] = XEXP (pats[i], !load_p); + } + + if (load_p && reg_overlap_mentioned_p (reg_ops[0], reg_ops[1])) + { + if (dump_file) + fprintf (dump_file, + "punting on ldp due to reg conflcits (%d,%d)\n", + insns[0]->uid (), insns[1]->uid ()); + return false; + } + + if (cfun->can_throw_non_call_exceptions + && (find_reg_note (insns[0]->rtl (), REG_EH_REGION, NULL_RTX) + || find_reg_note (insns[1]->rtl (), REG_EH_REGION, NULL_RTX)) + && insn_could_throw_p (insns[0]->rtl ()) + && insn_could_throw_p (insns[1]->rtl ())) + { + if (dump_file) + fprintf (dump_file, + "can't combine insns with EH side effects (%d,%d)\n", + insns[0]->uid (), insns[1]->uid ()); + return false; + } + + auto_vec base_cands (2); + + int writeback = get_viable_bases (insns, base_cands, cand_mems, + access_size, reversed); + if (base_cands.is_empty ()) + { + if (dump_file) + fprintf (dump_file, "no viable base for pair (%d,%d)\n", + insns[0]->uid (), insns[1]->uid ()); + return false; + } + + rtx *ignore = &XEXP (pats[1], load_p); + for (auto use : insns[1]->uses ()) + if (!use->is_mem () + && refers_to_regno_p (use->regno (), use->regno () + 1, pats[1], ignore) + && use->def () && use->def ()->insn () == insns[0]) + { + // N.B. we allow a true dependence on the base address, as this + // happens in the case of auto-inc accesses. Consider a post-increment + // load followed by a regular indexed load, for example. + if (dump_file) + fprintf (dump_file, + "%d has non-address true dependence on %d, rejecting pair\n", + insns[1]->uid (), insns[0]->uid ()); + return false; + } + + unsigned i = 0; + while (i < base_cands.length ()) + { + base_cand &cand = base_cands[i]; + + rtx *ignore[2] = {}; + for (int j = 0; j < 2; j++) + if (cand.from_insn == !j) + ignore[j] = &XEXP (cand_mems[j], 0); + + insn_info *h = first_hazard_after (insns[0], ignore[0]); + if (h && *h <= *insns[1]) + cand.hazards[0] = h; + + h = latest_hazard_before (insns[1], ignore[1]); + if (h && *h >= *insns[0]) + cand.hazards[1] = h; + + if (!cand.viable ()) + { + if (dump_file) + fprintf (dump_file, + "pair (%d,%d): rejecting base %d due to dataflow " + "hazards (%d,%d)\n", + insns[0]->uid (), + insns[1]->uid (), + cand.def->regno (), + cand.hazards[0]->uid (), + cand.hazards[1]->uid ()); + + base_cands.ordered_remove (i); + } + else + i++; + } + + if (base_cands.is_empty ()) + { + if (dump_file) + fprintf (dump_file, + "can't form pair (%d,%d) due to dataflow hazards\n", + insns[0]->uid (), insns[1]->uid ()); + return false; + } + + insn_info *alias_hazards[4] = {}; + + // First def of memory after the first insn, and last def of memory + // before the second insn, respectively. 
+ def_info *mem_defs[2] = {}; + if (load_p) + { + if (!MEM_READONLY_P (cand_mems[0])) + { + mem_defs[0] = memory_access (insns[0]->uses ())->def (); + gcc_checking_assert (mem_defs[0]); + mem_defs[0] = mem_defs[0]->next_def (); + } + if (!MEM_READONLY_P (cand_mems[1])) + { + mem_defs[1] = memory_access (insns[1]->uses ())->def (); + gcc_checking_assert (mem_defs[1]); + } + } + else + { + mem_defs[0] = memory_access (insns[0]->defs ())->next_def (); + mem_defs[1] = memory_access (insns[1]->defs ())->prev_def (); + gcc_checking_assert (mem_defs[0]); + gcc_checking_assert (mem_defs[1]); + } + + auto tombstone_p = [&](insn_info *insn) -> bool { + return m_emitted_tombstone + && bitmap_bit_p (&m_tombstone_bitmap, insn->uid ()); + }; + + store_walker + forward_store_walker (mem_defs[0], cand_mems[0], insns[1], tombstone_p); + + store_walker + backward_store_walker (mem_defs[1], cand_mems[1], insns[0], tombstone_p); + + alias_walker *walkers[4] = {}; + if (mem_defs[0]) + walkers[0] = &forward_store_walker; + if (mem_defs[1]) + walkers[1] = &backward_store_walker; + + if (load_p && (mem_defs[0] || mem_defs[1])) + do_alias_analysis (alias_hazards, walkers, load_p); + else + { + // We want to find any loads hanging off the first store. + mem_defs[0] = memory_access (insns[0]->defs ()); + load_walker forward_load_walker (mem_defs[0], insns[0], insns[1]); + load_walker backward_load_walker (mem_defs[1], insns[1], insns[0]); + walkers[2] = &forward_load_walker; + walkers[3] = &backward_load_walker; + do_alias_analysis (alias_hazards, walkers, load_p); + // Now consolidate hazards back down. + if (alias_hazards[2] + && (!alias_hazards[0] || (*alias_hazards[2] < *alias_hazards[0]))) + alias_hazards[0] = alias_hazards[2]; + + if (alias_hazards[3] + && (!alias_hazards[1] || (*alias_hazards[3] > *alias_hazards[1]))) + alias_hazards[1] = alias_hazards[3]; + } + + if (alias_hazards[0] && alias_hazards[1] + && *alias_hazards[0] <= *alias_hazards[1]) + { + if (dump_file) + fprintf (dump_file, + "cannot form pair (%d,%d) due to alias conflicts (%d,%d)\n", + i1->uid (), i2->uid (), + alias_hazards[0]->uid (), alias_hazards[1]->uid ()); + return false; + } + + // Now narrow the hazards on each base candidate using + // the alias hazards. + i = 0; + while (i < base_cands.length ()) + { + base_cand &cand = base_cands[i]; + if (alias_hazards[0] && (!cand.hazards[0] + || *alias_hazards[0] < *cand.hazards[0])) + cand.hazards[0] = alias_hazards[0]; + if (alias_hazards[1] && (!cand.hazards[1] + || *alias_hazards[1] > *cand.hazards[1])) + cand.hazards[1] = alias_hazards[1]; + + if (cand.viable ()) + i++; + else + { + if (dump_file) + fprintf (dump_file, "pair (%d,%d): rejecting base %d due to " + "alias/dataflow hazards (%d,%d)", + insns[0]->uid (), insns[1]->uid (), + cand.def->regno (), + cand.hazards[0]->uid (), + cand.hazards[1]->uid ()); + + base_cands.ordered_remove (i); + } + } + + if (base_cands.is_empty ()) + { + if (dump_file) + fprintf (dump_file, + "cannot form pair (%d,%d) due to alias/dataflow hazards", + insns[0]->uid (), insns[1]->uid ()); + + return false; + } + + base_cand *base = &base_cands[0]; + if (base_cands.length () > 1) + { + // If there are still multiple viable bases, it makes sense + // to choose one that allows us to reduce register pressure, + // for loads this means moving further down, for stores this + // means moving further up. 
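+      //
+      // Concretely, prefer the candidate whose remaining hazard in the
+      // direction of motion is absent or furthest away, as that leaves
+      // the most room in which to place the pair.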
+ gcc_checking_assert (base_cands.length () == 2); + const int hazard_i = !load_p; + if (base->hazards[hazard_i]) + { + if (!base_cands[1].hazards[hazard_i]) + base = &base_cands[1]; + else if (load_p + && *base_cands[1].hazards[hazard_i] + > *(base->hazards[hazard_i])) + base = &base_cands[1]; + else if (!load_p + && *base_cands[1].hazards[hazard_i] + < *(base->hazards[hazard_i])) + base = &base_cands[1]; + } + } + + // Otherwise, hazards[0] > hazards[1]. + // Pair can be formed anywhere in (hazards[1], hazards[0]). + insn_range_info range (insns[0], insns[1]); + if (base->hazards[1]) + range.first = base->hazards[1]; + if (base->hazards[0]) + range.last = base->hazards[0]->prev_nondebug_insn (); + + // Placement strategy: push loads down and pull stores up, this should + // help register pressure by reducing live ranges. + if (load_p) + range.first = range.last; + else + range.last = range.first; + + if (dump_file) + { + auto print_hazard = [](insn_info *i) + { + if (i) + fprintf (dump_file, "%d", i->uid ()); + else + fprintf (dump_file, "-"); + }; + auto print_pair = [print_hazard](insn_info **i) + { + print_hazard (i[0]); + fprintf (dump_file, ","); + print_hazard (i[1]); + }; + + fprintf (dump_file, "fusing pair [L=%d] (%d,%d), base=%d, hazards: (", + load_p, insns[0]->uid (), insns[1]->uid (), + base->def->regno ()); + print_pair (base->hazards); + fprintf (dump_file, "), move_range: (%d,%d)\n", + range.first->uid (), range.last->uid ()); + } + + return fuse_pair (load_p, access_size, writeback, + i1, i2, *base, range); +} + +// Erase [l.begin (), i] inclusive, return the new value of l.begin (). +static insn_iter_t +erase_prefix (insn_list_t &l, insn_iter_t i) +{ + l.erase (l.begin (), std::next (i)); + return l.begin (); +} + +// Remove the insn at iterator I from the list. If it was the first insn +// in the list, return the next one. Otherwise, return the previous one. +static insn_iter_t +erase_one (insn_list_t &l, insn_iter_t i) +{ + auto prev_or_next = (i == l.begin ()) ? std::next (i) : std::prev (i); + l.erase (i); + return prev_or_next; +} + +static void +dump_insn_list (FILE *f, const insn_list_t &l) +{ + fprintf (f, "("); + + auto i = l.begin (); + auto end = l.end (); + + if (i != end) + fprintf (f, "%d", (*i)->uid ()); + i++; + + for (; i != end; i++) + fprintf (f, ", %d", (*i)->uid ()); + + fprintf (f, ")"); +} + +DEBUG_FUNCTION void +debug (const insn_list_t &l) +{ + dump_insn_list (stderr, l); + fprintf (stderr, "\n"); +} + +// LEFT_LIST and RIGHT_LIST are lists of candidate instructions +// where all insns in LEFT_LIST are known to be adjacent to those +// in RIGHT_LIST. +// +// This function traverses the resulting 2D matrix of possible pair +// candidates and attempts to merge them into pairs. +// +// The algorithm is straightforward: if we consider a combined list +// of candidates X obtained by merging LEFT_LIST and RIGHT_LIST in +// program order, then we advance through X until we +// reach a crossing point (where X[i] and X[i+1] come from different +// source lists). +// +// At this point we know X[i] and X[i+1] are adjacent accesses, and +// we try to fuse them into a pair. If this succeeds, we remove X[i] +// and X[i+1] from their original lists and continue as above. We +// queue the access that came from RIGHT_LIST for deletion by adding +// it to TO_DELETE, so that we don't try and merge it in subsequent +// iterations of transform_for_base. See below for a description of the +// handling in the failure case. 
+void +ldp_bb_info::merge_pairs (insn_list_t &left_list, + insn_list_t &right_list, + hash_set &to_delete, + bool load_p, + unsigned access_size) +{ + auto iter_l = left_list.begin (); + auto iter_r = right_list.begin (); + + while (!left_list.empty () && !right_list.empty ()) + { + auto next_l = std::next (iter_l); + auto next_r = std::next (iter_r); + if (**iter_l < **iter_r + && next_l != left_list.end () + && **next_l < **iter_r) + { + iter_l = next_l; + continue; + } + else if (**iter_r < **iter_l + && next_r != right_list.end () + && **next_r < **iter_l) + { + iter_r = next_r; + continue; + } + + if (try_fuse_pair (load_p, access_size, *iter_l, *iter_r)) + { + if (to_delete.add (*iter_r)) + gcc_unreachable (); // Shouldn't get added twice. + + iter_l = erase_one (left_list, iter_l); + iter_r = erase_one (right_list, iter_r); + } + else + { + // If we failed to merge the pair, then we delete the entire + // prefix of insns that originated from the same source list. + // The rationale for this is as follows. + // + // In the store case, the insns in the prefix can't be + // re-ordered over each other as they are guaranteed to store + // to the same location, so we're guaranteed not to lose + // opportunities by doing this. + // + // In the load case, subsequent loads from the same location + // are either redundant (in which case they should have been + // cleaned up by an earlier optimization pass) or there is an + // intervening aliasing hazard, in which case we can't + // re-order them anyway, so provided earlier passes have + // cleaned up redundant loads, we shouldn't miss opportunities + // by doing this. + if (**iter_l < **iter_r) + // Delete everything from l_begin to iter_l, inclusive. + iter_l = erase_prefix (left_list, iter_l); + else + // Delete everything from r_begin to iter_r, inclusive. + iter_r = erase_prefix (right_list, iter_r); + } + } +} + +// Given a list of insns LEFT_ORIG with all accesses adjacent to +// those in RIGHT_ORIG, try and form them into pairs. +// +// Return true iff we formed all the RIGHT_ORIG candidates into +// pairs. +bool +ldp_bb_info::try_form_pairs (insn_list_t *left_orig, + insn_list_t *right_orig, + bool load_p, unsigned access_size) +{ + // Make a copy of the right list which we can modify to + // exclude candidates locally for this invocation. + insn_list_t right_copy (*right_orig); + + if (dump_file) + { + fprintf (dump_file, "try_form_pairs [L=%d], cand vecs ", load_p); + dump_insn_list (dump_file, *left_orig); + fprintf (dump_file, " x "); + dump_insn_list (dump_file, right_copy); + fprintf (dump_file, "\n"); + } + + // List of candidate insns to delete from the original right_list + // (because they were formed into a pair). + hash_set to_delete; + + // Now we have a 2D matrix of candidates, traverse it to try and + // find a pair of insns that are already adjacent (within the + // merged list of accesses). + merge_pairs (*left_orig, right_copy, to_delete, load_p, access_size); + + // If we formed all right candidates into pairs, + // then we can skip the next iteration. + if (to_delete.elements () == right_orig->size ()) + return true; + + // Delete items from to_delete. 
+  auto right_iter = right_orig->begin ();
+  auto right_end = right_orig->end ();
+  while (right_iter != right_end)
+    {
+      auto right_next = std::next (right_iter);
+
+      if (to_delete.contains (*right_iter))
+        {
+          right_orig->erase (right_iter);
+          right_end = right_orig->end ();
+        }
+
+      right_iter = right_next;
+    }
+
+  return false;
+}
+
+// Iterate over the accesses in GROUP, looking for adjacent sets
+// of accesses.  If we find two sets of adjacent accesses, call
+// try_form_pairs.
+void
+ldp_bb_info::transform_for_base (int encoded_lfs,
+                                 access_group &group)
+{
+  const auto lfs = decode_lfs (encoded_lfs);
+  const unsigned access_size = lfs.size;
+
+  bool skip_next = true;
+  access_record *prev_access = nullptr;
+
+  for (auto &access : group.list)
+    {
+      if (skip_next)
+        skip_next = false;
+      else if (known_eq (access.offset, prev_access->offset + access_size))
+        skip_next = try_form_pairs (&prev_access->cand_insns,
+                                    &access.cand_insns,
+                                    lfs.load_p, access_size);
+
+      prev_access = &access;
+    }
+}
+
+// If we emitted tombstone insns for this BB, iterate through the BB
+// and remove all the tombstone insns, being sure to reparent any uses
+// of mem to previous defs when we do this.
+void
+ldp_bb_info::cleanup_tombstones ()
+{
+  // No need to do anything if we didn't emit a tombstone insn for this BB.
+  if (!m_emitted_tombstone)
+    return;
+
+  insn_info *insn = m_bb->head_insn ();
+  while (insn)
+    {
+      insn_info *next = insn->next_nondebug_insn ();
+      if (!insn->is_real ()
+          || !bitmap_bit_p (&m_tombstone_bitmap, insn->uid ()))
+        {
+          insn = next;
+          continue;
+        }
+
+      auto def = memory_access (insn->defs ());
+      auto set = dyn_cast<set_info *> (def);
+      if (set && set->has_any_uses ())
+        {
+          def_info *prev_def = def->prev_def ();
+          auto prev_set = dyn_cast<set_info *> (prev_def);
+          if (!prev_set)
+            gcc_unreachable ();
+
+          while (set->first_use ())
+            crtl->ssa->reparent_use (set->first_use (), prev_set);
+        }
+
+      // Now set has no uses; we can delete it.
+      insn_change change (insn, insn_change::DELETE);
+      crtl->ssa->change_insn (change);
+      insn = next;
+    }
+}
+
+template<typename Map>
+void
+ldp_bb_info::traverse_base_map (Map &map)
+{
+  for (auto kv : map)
+    {
+      const auto &key = kv.first;
+      auto &value = kv.second;
+      transform_for_base (key.second, value);
+    }
+}
+
+void
+ldp_bb_info::transform ()
+{
+  traverse_base_map (expr_map);
+  traverse_base_map (def_map);
+}
+
+static void
+ldp_fusion_init ()
+{
+  calculate_dominance_info (CDI_DOMINATORS);
+  df_analyze ();
+  crtl->ssa = new rtl_ssa::function_info (cfun);
+}
+
+static void
+ldp_fusion_destroy ()
+{
+  if (crtl->ssa->perform_pending_updates ())
+    cleanup_cfg (0);
+
+  free_dominance_info (CDI_DOMINATORS);
+
+  delete crtl->ssa;
+  crtl->ssa = nullptr;
+}
+
+// Given a load pair insn in PATTERN, unpack the insn, storing
+// the registers in REGS and returning the mem.
+static rtx
+aarch64_destructure_load_pair (rtx regs[2], rtx pattern)
+{
+  rtx mem = NULL_RTX;
+
+  for (int i = 0; i < 2; i++)
+    {
+      rtx pat = XVECEXP (pattern, 0, i);
+      regs[i] = XEXP (pat, 0);
+      rtx unspec = XEXP (pat, 1);
+      gcc_checking_assert (GET_CODE (unspec) == UNSPEC);
+      rtx this_mem = XVECEXP (unspec, 0, 0);
+      if (mem)
+        gcc_checking_assert (rtx_equal_p (mem, this_mem));
+      else
+        {
+          gcc_checking_assert (MEM_P (this_mem));
+          mem = this_mem;
+        }
+    }
+
+  return mem;
+}
+
+// Given a store pair insn in PATTERN, unpack the insn, storing
+// the register operands in REGS, and returning the mem.
+static rtx +aarch64_destructure_store_pair (rtx regs[2], rtx pattern) +{ + rtx mem = XEXP (pattern, 0); + rtx unspec = XEXP (pattern, 1); + gcc_checking_assert (GET_CODE (unspec) == UNSPEC); + for (int i = 0; i < 2; i++) + regs[i] = XVECEXP (unspec, 0, i); + return mem; +} + +// Given a pair mem in PAIR_MEM, register operands in REGS, and an rtx +// representing the effect of writeback on the base register in WB_EFFECT, +// return an insn representing a writeback variant of this pair. +// LOAD_P is true iff the pair is a load. +// +// This is used when promoting existing non-writeback pairs to writeback +// variants. +static rtx +aarch64_gen_writeback_pair (rtx wb_effect, rtx pair_mem, rtx regs[2], + bool load_p) +{ + auto op_mode = aarch64_operand_mode_for_pair_mode (GET_MODE (pair_mem)); + + machine_mode modes[2]; + for (int i = 0; i < 2; i++) + { + machine_mode mode = GET_MODE (regs[i]); + if (load_p) + gcc_checking_assert (mode != VOIDmode); + else if (mode == VOIDmode) + mode = op_mode; + + modes[i] = mode; + } + + const auto op_size = GET_MODE_SIZE (modes[0]); + gcc_checking_assert (known_eq (op_size, GET_MODE_SIZE (modes[1]))); + + rtx pats[2]; + for (int i = 0; i < 2; i++) + { + rtx mem = adjust_address_nv (pair_mem, modes[i], op_size * i); + pats[i] = load_p + ? gen_rtx_SET (regs[i], mem) + : gen_rtx_SET (mem, regs[i]); + } + + return gen_rtx_PARALLEL (VOIDmode, + gen_rtvec (3, wb_effect, pats[0], pats[1])); +} + +// Given an existing pair insn INSN, look for a trailing update of +// the base register which we can fold in to make this pair use +// a writeback addressing mode. +static void +try_promote_writeback (insn_info *insn) +{ + auto rti = insn->rtl (); + const auto attr = get_attr_ldpstp (rti); + if (attr == LDPSTP_NONE) + return; + + bool load_p = (attr == LDPSTP_LDP); + gcc_checking_assert (load_p || attr == LDPSTP_STP); + + rtx regs[2]; + rtx mem = NULL_RTX; + if (load_p) + mem = aarch64_destructure_load_pair (regs, PATTERN (rti)); + else + mem = aarch64_destructure_store_pair (regs, PATTERN (rti)); + gcc_checking_assert (MEM_P (mem)); + + poly_int64 offset; + rtx base = strip_offset (XEXP (mem, 0), &offset); + gcc_assert (REG_P (base)); + + const auto access_size = GET_MODE_SIZE (GET_MODE (mem)).to_constant () / 2; + + if (find_access (insn->defs (), REGNO (base))) + { + gcc_assert (load_p); + if (dump_file) + fprintf (dump_file, + "ldp %d clobbers base r%d, can't promote to writeback\n", + insn->uid (), REGNO (base)); + return; + } + + auto base_use = find_access (insn->uses (), REGNO (base)); + gcc_assert (base_use); + + if (!base_use->def ()) + { + if (dump_file) + fprintf (dump_file, + "found pair (i%d, L=%d): but base r%d is upwards exposed\n", + insn->uid (), load_p, REGNO (base)); + return; + } + + auto base_def = base_use->def (); + + rtx wb_effect = NULL_RTX; + def_info *add_def; + const insn_range_info pair_range (insn->prev_nondebug_insn ()); + insn_info *insns[2] = { nullptr, insn }; + insn_info *trailing_add = find_trailing_add (insns, pair_range, 0, &wb_effect, + &add_def, base_def, offset, + access_size); + if (!trailing_add) + return; + + auto attempt = crtl->ssa->new_change_attempt (); + + insn_change pair_change (insn); + insn_change del_change (trailing_add, insn_change::DELETE); + insn_change *changes[] = { &pair_change, &del_change }; + + rtx pair_pat = aarch64_gen_writeback_pair (wb_effect, mem, regs, load_p); + gcc_assert (validate_unshare_change (rti, &PATTERN (rti), pair_pat, true)); + + // The pair must gain the def of the base register from 
the add. + pair_change.new_defs = insert_access (attempt, + add_def, + pair_change.new_defs); + gcc_assert (pair_change.new_defs.is_valid ()); + + pair_change.move_range = insn_range_info (insn->prev_nondebug_insn ()); + + auto is_changing = insn_is_changing (changes); + for (unsigned i = 0; i < ARRAY_SIZE (changes); i++) + gcc_assert (rtl_ssa::restrict_movement_ignoring (*changes[i], is_changing)); + + gcc_assert (rtl_ssa::recog_ignoring (attempt, pair_change, is_changing)); + gcc_assert (crtl->ssa->verify_insn_changes (changes)); + confirm_change_group (); + crtl->ssa->change_insns (changes); +} + +// Main function for the pass. Iterate over the insns in BB looking +// for load/store candidates. If running after RA, also try and promote +// non-writeback pairs to use writeback addressing. Then try to fuse +// candidates into pairs. +void ldp_fusion_bb (bb_info *bb) +{ + const bool track_loads + = aarch64_tune_params.ldp_policy_model != AARCH64_LDP_STP_POLICY_NEVER; + const bool track_stores + = aarch64_tune_params.stp_policy_model != AARCH64_LDP_STP_POLICY_NEVER; + + ldp_bb_info bb_state (bb); + + for (auto insn : bb->nondebug_insns ()) + { + rtx_insn *rti = insn->rtl (); + + if (!rti || !INSN_P (rti)) + continue; + + rtx pat = PATTERN (rti); + if (reload_completed + && aarch64_ldp_writeback > 1 + && GET_CODE (pat) == PARALLEL + && XVECLEN (pat, 0) == 2) + try_promote_writeback (insn); + + if (GET_CODE (pat) != SET) + continue; + + if (track_stores && MEM_P (XEXP (pat, 0))) + bb_state.track_access (insn, false, XEXP (pat, 0)); + else if (track_loads && MEM_P (XEXP (pat, 1))) + bb_state.track_access (insn, true, XEXP (pat, 1)); + } + + bb_state.transform (); + bb_state.cleanup_tombstones (); +} + +void ldp_fusion () +{ + ldp_fusion_init (); + + for (auto bb : crtl->ssa->bbs ()) + ldp_fusion_bb (bb); + + ldp_fusion_destroy (); +} + +namespace { + +const pass_data pass_data_ldp_fusion = +{ + RTL_PASS, /* type */ + "ldp_fusion", /* name */ + OPTGROUP_NONE, /* optinfo_flags */ + TV_NONE, /* tv_id */ + 0, /* properties_required */ + 0, /* properties_provided */ + 0, /* properties_destroyed */ + 0, /* todo_flags_start */ + TODO_df_finish, /* todo_flags_finish */ +}; + +class pass_ldp_fusion : public rtl_opt_pass +{ +public: + pass_ldp_fusion (gcc::context *ctx) + : rtl_opt_pass (pass_data_ldp_fusion, ctx) + {} + + opt_pass *clone () override { return new pass_ldp_fusion (m_ctxt); } + + bool gate (function *) final override + { + if (!optimize || optimize_debug) + return false; + + // If the tuning policy says never to form ldps or stps, don't run + // the pass. 
+ if ((aarch64_tune_params.ldp_policy_model + == AARCH64_LDP_STP_POLICY_NEVER) + && (aarch64_tune_params.stp_policy_model + == AARCH64_LDP_STP_POLICY_NEVER)) + return false; + + if (reload_completed) + return flag_aarch64_late_ldp_fusion; + else + return flag_aarch64_early_ldp_fusion; + } + + unsigned execute (function *) final override + { + ldp_fusion (); + return 0; + } +}; + +} // anon namespace + +rtl_opt_pass * +make_pass_ldp_fusion (gcc::context *ctx) +{ + return new pass_ldp_fusion (ctx); +} diff --git a/gcc/config/aarch64/aarch64-passes.def b/gcc/config/aarch64/aarch64-passes.def index 662a13fd5e6..59be2464f1a 100644 --- a/gcc/config/aarch64/aarch64-passes.def +++ b/gcc/config/aarch64/aarch64-passes.def @@ -24,3 +24,5 @@ INSERT_PASS_BEFORE (pass_late_thread_prologue_and_epilogue, 1, pass_switch_pstat INSERT_PASS_AFTER (pass_machine_reorg, 1, pass_tag_collision_avoidance); INSERT_PASS_BEFORE (pass_shorten_branches, 1, pass_insert_bti); INSERT_PASS_AFTER (pass_if_after_combine, 1, pass_cc_fusion); +INSERT_PASS_BEFORE (pass_early_remat, 1, pass_ldp_fusion); +INSERT_PASS_BEFORE (pass_peephole2, 1, pass_ldp_fusion); diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h index eb3dff22bf0..38a66383916 100644 --- a/gcc/config/aarch64/aarch64-protos.h +++ b/gcc/config/aarch64/aarch64-protos.h @@ -1074,6 +1074,7 @@ rtl_opt_pass *make_pass_tag_collision_avoidance (gcc::context *); rtl_opt_pass *make_pass_insert_bti (gcc::context *ctxt); rtl_opt_pass *make_pass_cc_fusion (gcc::context *ctxt); rtl_opt_pass *make_pass_switch_pstate_sm (gcc::context *ctxt); +rtl_opt_pass *make_pass_ldp_fusion (gcc::context *); poly_uint64 aarch64_regmode_natural_size (machine_mode); diff --git a/gcc/config/aarch64/aarch64.opt b/gcc/config/aarch64/aarch64.opt index f5a518202a1..116ec1892dc 100644 --- a/gcc/config/aarch64/aarch64.opt +++ b/gcc/config/aarch64/aarch64.opt @@ -271,6 +271,16 @@ mtrack-speculation Target Var(aarch64_track_speculation) Generate code to track when the CPU might be speculating incorrectly. +mearly-ldp-fusion +Target Var(flag_aarch64_early_ldp_fusion) Optimization Init(1) +Enable the pre-RA AArch64-specific pass to fuse loads and stores into +ldp and stp instructions. + +mlate-ldp-fusion +Target Var(flag_aarch64_late_ldp_fusion) Optimization Init(1) +Enable the post-RA AArch64-specific pass to fuse loads and stores into +ldp and stp instructions. + mstack-protector-guard= Target RejectNegative Joined Enum(stack_protector_guard) Var(aarch64_stack_protector_guard) Init(SSP_GLOBAL) Use given stack-protector guard. @@ -360,3 +370,16 @@ Enum(aarch64_ldp_stp_policy) String(never) Value(AARCH64_LDP_STP_POLICY_NEVER) EnumValue Enum(aarch64_ldp_stp_policy) String(aligned) Value(AARCH64_LDP_STP_POLICY_ALIGNED) + +-param=aarch64-ldp-alias-check-limit= +Target Joined UInteger Var(aarch64_ldp_alias_check_limit) Init(8) IntegerRange(0, 65536) Param +Limit on number of alias checks performed when attempting to form an ldp/stp. + +-param=aarch64-ldp-writeback= +Target Joined UInteger Var(aarch64_ldp_writeback) Init(2) IntegerRange(0,2) Param +Param to control which writeback opportunities we try to handle in the +load/store pair fusion pass. A value of zero disables writeback +handling. One means we try to form pairs involving one or more existing +individual writeback accesses where possible. A value of two means we +also try to opportunistically form writeback opportunities by folding in +trailing destructive updates of the base register used by a pair. 
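(As a rough illustration of what the new options and the aarch64-ldp-writeback
param aim at, and not part of the patch itself: the function below is invented,
and whether fusion and writeback promotion actually fire depends on the tuning
policy, register allocation and the alias checks.  Two adjacent stores followed
by a destructive update of the shared base register are the sort of sequence
the pass aims to turn into a single post-index stp when
--param=aarch64-ldp-writeback=2.)

  // Hypothetical example, compiled at -O2 for aarch64: the two stores can be
  // fused into an stp, and the trailing update of the base register (p + 2,
  // i.e. +16 bytes) could then be folded in to form a writeback stp.
  long *
  store_pair_then_advance (long *p, long a, long b)
  {
    p[0] = a;       // store at [p]
    p[1] = b;       // adjacent store 8 bytes above the same base
    return p + 2;   // base-register update a writeback stp can absorb
  }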
diff --git a/gcc/config/aarch64/t-aarch64 b/gcc/config/aarch64/t-aarch64 index 0d96ae3d0b2..f7b24256b4d 100644 --- a/gcc/config/aarch64/t-aarch64 +++ b/gcc/config/aarch64/t-aarch64 @@ -194,6 +194,13 @@ aarch64-cc-fusion.o: $(srcdir)/config/aarch64/aarch64-cc-fusion.cc \ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \ $(srcdir)/config/aarch64/aarch64-cc-fusion.cc +aarch64-ldp-fusion.o: $(srcdir)/config/aarch64/aarch64-ldp-fusion.cc \ + $(CONFIG_H) $(SYSTEM_H) $(CORETYPES_H) $(BACKEND_H) $(RTL_H) $(DF_H) \ + $(RTL_SSA_H) cfgcleanup.h tree-pass.h ordered-hash-map.h tree-dfa.h \ + fold-const.h tree-hash-traits.h print-tree.h + $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \ + $(srcdir)/config/aarch64/aarch64-ldp-fusion.cc + comma=, MULTILIB_OPTIONS = $(subst $(comma),/, $(patsubst %, mabi=%, $(subst $(comma),$(comma)mabi=,$(TM_MULTILIB_CONFIG)))) MULTILIB_DIRNAMES = $(subst $(comma), ,$(TM_MULTILIB_CONFIG)) diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi index 32f535e1ed4..29b4a337549 100644 --- a/gcc/doc/invoke.texi +++ b/gcc/doc/invoke.texi @@ -801,7 +801,7 @@ Objective-C and Objective-C++ Dialects}. -moverride=@var{string} -mverbose-cost-dump -mstack-protector-guard=@var{guard} -mstack-protector-guard-reg=@var{sysreg} -mstack-protector-guard-offset=@var{offset} -mtrack-speculation --moutline-atomics } +-moutline-atomics -mearly-ldp-fusion -mlate-ldp-fusion} @emph{Adapteva Epiphany Options} @gccoptlist{-mhalf-reg-file -mprefer-short-insn-regs @@ -16774,6 +16774,20 @@ With @option{--param=aarch64-stp-policy=never}, do not emit stp. With @option{--param=aarch64-stp-policy=aligned}, emit stp only if the source pointer is aligned to at least double the alignment of the type. +@item aarch64-ldp-alias-check-limit +Limit on the number of alias checks performed by the AArch64 load/store pair +fusion pass when attempting to form an ldp/stp. Higher values make the pass +more aggressive at re-ordering loads over stores, at the expense of increased +compile time. + +@item aarch64-ldp-writeback +Param to control which writeback opportunities we try to handle in the AArch64 +load/store pair fusion pass. A value of zero disables writeback handling. One +means we try to form pairs involving one or more existing individual writeback +accesses where possible. A value of two means we also try to opportunistically +form writeback opportunities by folding in trailing destructive updates of the +base register used by a pair. + @item aarch64-loop-vect-issue-rate-niters The tuning for some AArch64 CPUs tries to take both latencies and issue rates into account when deciding whether a loop should be vectorized @@ -21190,6 +21204,16 @@ Enable compiler hardening against straight line speculation (SLS). In addition, @samp{-mharden-sls=all} enables all SLS hardening while @samp{-mharden-sls=none} disables all SLS hardening. +@opindex mearly-ldp-fusion +@item -mearly-ldp-fusion +Enable the copy of the AArch64 load/store pair fusion pass that runs before +register allocation. Enabled by default at @samp{-O} and above. + +@opindex mlate-ldp-fusion +@item -mlate-ldp-fusion +Enable the copy of the AArch64 load/store pair fusion pass that runs after +register allocation. Enabled by default at @samp{-O} and above. + @opindex msve-vector-bits @item -msve-vector-bits=@var{bits} Specify the number of bits in an SVE vector register. This option only has