From patchwork Wed Sep 6 17:19:12 2023
X-Patchwork-Submitter: Stamatis Markianos-Wright
X-Patchwork-Id: 75382
Message-ID: <977be071-0361-1868-26cb-532e06dc25f9@arm.com>
Date: Wed, 6 Sep 2023 18:19:12 +0100
Subject: [PING][PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns
To: "gcc-patches@gcc.gnu.org"
In-Reply-To: <3e2cd7fe-8fed-e793-a62f-0f33b9c12e88@arm.com>
References: <3e2cd7fe-8fed-e793-a62f-0f33b9c12e88@arm.com>
From: Stamatis Markianos-Wright
Reply-To: Stamatis Markianos-Wright
Cc: Richard Earnshaw

Hi all,

I'd like to submit two patches that add support for Arm's MVE Tail
Predicated Low Overhead Loop feature.

--- Introduction ---

The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature we are
adding support for with this patch (although we only add codegen for
DLSTP/LETP instruction loops).

Previously with commit d2ed233cb94 we'd added support for non-MVE
DLS/LE loops through the loop-doloop pass, which, given a standard
MVE loop like:

```
void __attribute__ ((noinline))
test (int16_t *a, int16_t *b, int16_t *c, int n)
{
  while (n > 0)
    {
      mve_pred16_t p = vctp16q (n);
      int16x8_t va = vldrhq_z_s16 (a, p);
      int16x8_t vb = vldrhq_z_s16 (b, p);
      int16x8_t vc = vaddq_x_s16 (va, vb, p);
      vstrhq_p_s16 (c, vc, p);
      c += 8;
      a += 8;
      b += 8;
      n -= 8;
    }
}
```
would output:

```
        dls     lr, lr
.L3:
        vctp.16 r3
        vmrs    ip, P0  @ movhi
        sxth    ip, ip
        vmsr    P0, ip  @ movhi
        mov     r4, r0
        vpst
        vldrht.16       q2, [r4]
        mov     r4, r1
        vmov    q3, q0
        vpst
        vldrht.16       q1, [r4]
        mov     r4, r2
        vpst
        vaddt.i16       q3, q2, q1
        subs    r3, r3, #8
        vpst
        vstrht.16       q3, [r4]
        adds    r0, r0, #16
        adds    r1, r1, #16
        adds    r2, r2, #16
        le      lr, .L3
```

where the LE instruction will decrement LR by 1, then compare and
branch if needed. (There are also other inefficiencies in the above
code, like the pointless vmrs/sxth/vmsr sequence on the VPR, the adds
not being merged into the vldrht/vstrht as #16 offsets, and some
random movs! But those are separate problems...)

The MVE version is similar, except that:
* Instead of DLS/LE, the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the loop, we
  place the number of elements to be processed by the loop into LR.
* Instead of decrementing LR by one, LETP will decrement it by
  FPSCR.LTPSIZE, which is the number of elements being processed in
  each iteration: 16 for 8-bit elements, 8 for 16-bit elements, etc.
* On the final iteration, automatic Loop Tail Predication is
  performed, as if the instructions within the loop had been VPT
  predicated with a VCTP generating the VPR predicate in every loop
  iteration.
The dlstp/letp loop now looks like:

```
        dlstp.16        lr, r3
.L14:
        mov     r3, r0
        vldrh.16        q3, [r3]
        mov     r3, r1
        vldrh.16        q2, [r3]
        mov     r3, r2
        vadd.i16        q3, q3, q2
        adds    r0, r0, #16
        vstrh.16        q3, [r3]
        adds    r1, r1, #16
        adds    r2, r2, #16
        letp    lr, .L14
```

Since the loop tail predication is automatic, we have eliminated the
VCTP that had been specified by the user in the intrinsic and
converted the VPT-predicated instructions into their unpredicated
equivalents (which also saves us from the VPST insns). The LETP
instruction here decrements LR by 8 in each iteration.

--- This 1/2 patch ---

This first patch lays some groundwork by adding an attribute to md
patterns; the second patch contains the functional changes.

One major difficulty in implementing MVE Tail-Predicated Low Overhead
Loops was the need to transform VPT-predicated insns in the insn chain
into their unpredicated equivalents, like:
`mve_vldrbq_z_ -> mve_vldrbq_`.

This requires a deterministic link between two different patterns in
mve.md. This _could_ be done by re-ordering the entirety of mve.md so
that each pair of patterns sits at some constant icode proximity
(e.g. having the _z variant immediately after the unpredicated version
would mean that mapping from the former to the latter is just
icode - 1), but that is a very messy solution that would introduce
complex, implicit dependencies on the ordering of patterns. This patch
provides an alternative way of doing that: using an insn attribute to
encode the icode of the unpredicated instruction.

No regressions on arm-none-eabi with an MVE target.

Thank you,
Stam Markianos-Wright

gcc/ChangeLog:

        * config/arm/arm.md (mve_unpredicated_insn): New attribute.
        * config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
        (MVE_VPT_UNPREDICATED_INSN_P): Likewise.
        (MVE_VPT_PREDICABLE_INSN_P): Likewise.
      * config/arm/vec-common.md (mve_vshlq_): Add attribute.         * config/arm/mve.md (arm_vcx1q_p_v16qi): Add attribute.     (arm_vcx1qv16qi): Likewise.     (arm_vcx1qav16qi): Likewise.     (arm_vcx1qv16qi): Likewise.     (arm_vcx2q_p_v16qi): Likewise.     (arm_vcx2qv16qi): Likewise.     (arm_vcx2qav16qi): Likewise.     (arm_vcx2qv16qi): Likewise.     (arm_vcx3q_p_v16qi): Likewise.     (arm_vcx3qv16qi): Likewise.     (arm_vcx3qav16qi): Likewise.     (arm_vcx3qv16qi): Likewise.     (mve_vabavq_): Likewise.     (mve_vabavq_p_): Likewise.     (mve_vabdq_): Likewise.     (mve_vabdq_f): Likewise.     (mve_vabdq_m_): Likewise.     (mve_vabdq_m_f): Likewise.     (mve_vabsq_f): Likewise.     (mve_vabsq_m_f): Likewise.     (mve_vabsq_m_s): Likewise.     (mve_vabsq_s): Likewise.     (mve_vadciq_v4si): Likewise.     (mve_vadciq_m_v4si): Likewise.     (mve_vadcq_v4si): Likewise.     (mve_vadcq_m_v4si): Likewise.     (mve_vaddlvaq_v4si): Likewise.     (mve_vaddlvaq_p_v4si): Likewise.     (mve_vaddlvq_v4si): Likewise.     (mve_vaddlvq_p_v4si): Likewise.     (mve_vaddq_f): Likewise.     (mve_vaddq_m_): Likewise.     (mve_vaddq_m_f): Likewise.     (mve_vaddq_m_n_): Likewise.     (mve_vaddq_m_n_f): Likewise.     (mve_vaddq_n_): Likewise.     (mve_vaddq_n_f): Likewise.     (mve_vaddq): Likewise.     (mve_vaddvaq_): Likewise.     (mve_vaddvaq_p_): Likewise.     (mve_vaddvq_): Likewise.     (mve_vaddvq_p_): Likewise.     (mve_vandq_): Likewise.     (mve_vandq_f): Likewise.     (mve_vandq_m_): Likewise.     (mve_vandq_m_f): Likewise.     (mve_vandq_s): Likewise.     (mve_vandq_u): Likewise.     (mve_vbicq_): Likewise.     (mve_vbicq_f): Likewise.     (mve_vbicq_m_): Likewise.     (mve_vbicq_m_f): Likewise.     (mve_vbicq_m_n_): Likewise.     (mve_vbicq_n_): Likewise.     (mve_vbicq_s): Likewise.     (mve_vbicq_u): Likewise.     (mve_vbrsrq_m_n_): Likewise.     (mve_vbrsrq_m_n_f): Likewise.     (mve_vbrsrq_n_): Likewise.     (mve_vbrsrq_n_f): Likewise.     
(mve_vcaddq_rot270_m_): Likewise.     (mve_vcaddq_rot270_m_f): Likewise.     (mve_vcaddq_rot270): Likewise.     (mve_vcaddq_rot270): Likewise.     (mve_vcaddq_rot90_m_): Likewise.     (mve_vcaddq_rot90_m_f): Likewise.     (mve_vcaddq_rot90): Likewise.     (mve_vcaddq_rot90): Likewise.     (mve_vcaddq): Likewise.     (mve_vcaddq): Likewise.     (mve_vclsq_m_s): Likewise.     (mve_vclsq_s): Likewise.     (mve_vclzq_): Likewise.     (mve_vclzq_m_): Likewise.     (mve_vclzq_s): Likewise.     (mve_vclzq_u): Likewise.     (mve_vcmlaq_m_f): Likewise.     (mve_vcmlaq_rot180_m_f): Likewise.     (mve_vcmlaq_rot180): Likewise.     (mve_vcmlaq_rot270_m_f): Likewise.     (mve_vcmlaq_rot270): Likewise.     (mve_vcmlaq_rot90_m_f): Likewise.     (mve_vcmlaq_rot90): Likewise.     (mve_vcmlaq): Likewise.     (mve_vcmlaq): Likewise.     (mve_vcmpq_): Likewise.     (mve_vcmpq_f): Likewise.     (mve_vcmpq_n_): Likewise.     (mve_vcmpq_n_f): Likewise.     (mve_vcmpcsq_): Likewise.     (mve_vcmpcsq_m_n_u): Likewise.     (mve_vcmpcsq_m_u): Likewise.     (mve_vcmpcsq_n_): Likewise.     (mve_vcmpeqq_): Likewise.     (mve_vcmpeqq_f): Likewise.     (mve_vcmpeqq_m_): Likewise.     (mve_vcmpeqq_m_f): Likewise.     (mve_vcmpeqq_m_n_): Likewise.     (mve_vcmpeqq_m_n_f): Likewise.     (mve_vcmpeqq_n_): Likewise.     (mve_vcmpeqq_n_f): Likewise.     (mve_vcmpgeq_): Likewise.     (mve_vcmpgeq_f): Likewise.     (mve_vcmpgeq_m_f): Likewise.     (mve_vcmpgeq_m_n_f): Likewise.     (mve_vcmpgeq_m_n_s): Likewise.     (mve_vcmpgeq_m_s): Likewise.     (mve_vcmpgeq_n_): Likewise.     (mve_vcmpgeq_n_f): Likewise.     (mve_vcmpgtq_): Likewise.     (mve_vcmpgtq_f): Likewise.     (mve_vcmpgtq_m_f): Likewise.     (mve_vcmpgtq_m_n_f): Likewise.     (mve_vcmpgtq_m_n_s): Likewise.     (mve_vcmpgtq_m_s): Likewise.     (mve_vcmpgtq_n_): Likewise.     (mve_vcmpgtq_n_f): Likewise.     (mve_vcmphiq_): Likewise.     (mve_vcmphiq_m_n_u): Likewise.     (mve_vcmphiq_m_u): Likewise.     (mve_vcmphiq_n_): Likewise.     
(mve_vcmpleq_): Likewise.     (mve_vcmpleq_f): Likewise.     (mve_vcmpleq_m_f): Likewise.     (mve_vcmpleq_m_n_f): Likewise.     (mve_vcmpleq_m_n_s): Likewise.     (mve_vcmpleq_m_s): Likewise.     (mve_vcmpleq_n_): Likewise.     (mve_vcmpleq_n_f): Likewise.     (mve_vcmpltq_): Likewise.     (mve_vcmpltq_f): Likewise.     (mve_vcmpltq_m_f): Likewise.     (mve_vcmpltq_m_n_f): Likewise.     (mve_vcmpltq_m_n_s): Likewise.     (mve_vcmpltq_m_s): Likewise.     (mve_vcmpltq_n_): Likewise.     (mve_vcmpltq_n_f): Likewise.     (mve_vcmpneq_): Likewise.     (mve_vcmpneq_f): Likewise.     (mve_vcmpneq_m_): Likewise.     (mve_vcmpneq_m_f): Likewise.     (mve_vcmpneq_m_n_): Likewise.     (mve_vcmpneq_m_n_f): Likewise.     (mve_vcmpneq_n_): Likewise.     (mve_vcmpneq_n_f): Likewise.     (mve_vcmulq_m_f): Likewise.     (mve_vcmulq_rot180_m_f): Likewise.     (mve_vcmulq_rot180): Likewise.     (mve_vcmulq_rot270_m_f): Likewise.     (mve_vcmulq_rot270): Likewise.     (mve_vcmulq_rot90_m_f): Likewise.     (mve_vcmulq_rot90): Likewise.     (mve_vcmulq): Likewise.     (mve_vcmulq): Likewise.     (mve_vctpq_mhi): Likewise.     (mve_vctpqhi): Likewise.     (mve_vcvtaq_): Likewise.     (mve_vcvtaq_m_): Likewise.     (mve_vcvtbq_f16_f32v8hf): Likewise.     (mve_vcvtbq_f32_f16v4sf): Likewise.     (mve_vcvtbq_m_f16_f32v8hf): Likewise.     (mve_vcvtbq_m_f32_f16v4sf): Likewise.     (mve_vcvtmq_): Likewise.     (mve_vcvtmq_m_): Likewise.     (mve_vcvtnq_): Likewise.     (mve_vcvtnq_m_): Likewise.     (mve_vcvtpq_): Likewise.     (mve_vcvtpq_m_): Likewise.     (mve_vcvtq_from_f_): Likewise.     (mve_vcvtq_m_from_f_): Likewise.     (mve_vcvtq_m_n_from_f_): Likewise.     (mve_vcvtq_m_n_to_f_): Likewise.     (mve_vcvtq_m_to_f_): Likewise.     (mve_vcvtq_n_from_f_): Likewise.     (mve_vcvtq_n_to_f_): Likewise.     (mve_vcvtq_to_f_): Likewise.     (mve_vcvttq_f16_f32v8hf): Likewise.     (mve_vcvttq_f32_f16v4sf): Likewise.     (mve_vcvttq_m_f16_f32v8hf): Likewise.     
(mve_vcvttq_m_f32_f16v4sf): Likewise.     (mve_vddupq_m_wb_u_insn): Likewise.     (mve_vddupq_u_insn): Likewise.     (mve_vdupq_m_n_): Likewise.     (mve_vdupq_m_n_f): Likewise.     (mve_vdupq_n_): Likewise.     (mve_vdupq_n_f): Likewise.     (mve_vdwdupq_m_wb_u_insn): Likewise.     (mve_vdwdupq_wb_u_insn): Likewise.     (mve_veorq_): Likewise.     (mve_veorq_f): Likewise.     (mve_veorq_m_): Likewise.     (mve_veorq_m_f): Likewise.     (mve_veorq_s): Likewise.     (mve_veorq_u): Likewise.     (mve_vfmaq_f): Likewise.     (mve_vfmaq_m_f): Likewise.     (mve_vfmaq_m_n_f): Likewise.     (mve_vfmaq_n_f): Likewise.     (mve_vfmasq_m_n_f): Likewise.     (mve_vfmasq_n_f): Likewise.     (mve_vfmsq_f): Likewise.     (mve_vfmsq_m_f): Likewise.     (mve_vhaddq_): Likewise.     (mve_vhaddq_m_): Likewise.     (mve_vhaddq_m_n_): Likewise.     (mve_vhaddq_n_): Likewise.     (mve_vhcaddq_rot270_m_s): Likewise.     (mve_vhcaddq_rot270_s): Likewise.     (mve_vhcaddq_rot90_m_s): Likewise.     (mve_vhcaddq_rot90_s): Likewise.     (mve_vhsubq_): Likewise.     (mve_vhsubq_m_): Likewise.     (mve_vhsubq_m_n_): Likewise.     (mve_vhsubq_n_): Likewise.     (mve_vidupq_m_wb_u_insn): Likewise.     (mve_vidupq_u_insn): Likewise.     (mve_viwdupq_m_wb_u_insn): Likewise.     (mve_viwdupq_wb_u_insn): Likewise.     (mve_vldrbq_): Likewise.     (mve_vldrbq_gather_offset_): Likewise.     (mve_vldrbq_gather_offset_z_): Likewise.     (mve_vldrbq_z_): Likewise.     (mve_vldrdq_gather_base_v2di): Likewise.     (mve_vldrdq_gather_base_wb_v2di_insn): Likewise.     (mve_vldrdq_gather_base_wb_z_v2di_insn): Likewise.     (mve_vldrdq_gather_base_z_v2di): Likewise.     (mve_vldrdq_gather_offset_v2di): Likewise.     (mve_vldrdq_gather_offset_z_v2di): Likewise.     (mve_vldrdq_gather_shifted_offset_v2di): Likewise.     (mve_vldrdq_gather_shifted_offset_z_v2di): Likewise.     (mve_vldrhq_): Likewise.     (mve_vldrhq_fv8hf): Likewise.     (mve_vldrhq_gather_offset_): Likewise.     
(mve_vldrhq_gather_offset_fv8hf): Likewise.     (mve_vldrhq_gather_offset_z_): Likewise.     (mve_vldrhq_gather_offset_z_fv8hf): Likewise.     (mve_vldrhq_gather_shifted_offset_): Likewise.     (mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.     (mve_vldrhq_gather_shifted_offset_z_): Likewise.     (mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.     (mve_vldrhq_z_): Likewise.     (mve_vldrhq_z_fv8hf): Likewise.     (mve_vldrwq_v4si): Likewise.     (mve_vldrwq_fv4sf): Likewise.     (mve_vldrwq_gather_base_v4si): Likewise.     (mve_vldrwq_gather_base_fv4sf): Likewise.     (mve_vldrwq_gather_base_wb_v4si_insn): Likewise.     (mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.     (mve_vldrwq_gather_base_wb_z_v4si_insn): Likewise.     (mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.     (mve_vldrwq_gather_base_z_v4si): Likewise.     (mve_vldrwq_gather_base_z_fv4sf): Likewise.     (mve_vldrwq_gather_offset_v4si): Likewise.     (mve_vldrwq_gather_offset_fv4sf): Likewise.     (mve_vldrwq_gather_offset_z_v4si): Likewise.     (mve_vldrwq_gather_offset_z_fv4sf): Likewise.     (mve_vldrwq_gather_shifted_offset_v4si): Likewise.     (mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.     (mve_vldrwq_gather_shifted_offset_z_v4si): Likewise.     (mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.     (mve_vldrwq_z_v4si): Likewise.     (mve_vldrwq_z_fv4sf): Likewise.     (mve_vmaxaq_m_s): Likewise.     (mve_vmaxaq_s): Likewise.     (mve_vmaxavq_p_s): Likewise.     (mve_vmaxavq_s): Likewise.     (mve_vmaxnmaq_f): Likewise.     (mve_vmaxnmaq_m_f): Likewise.     (mve_vmaxnmavq_f): Likewise.     (mve_vmaxnmavq_p_f): Likewise.     (mve_vmaxnmq_f): Likewise.     (mve_vmaxnmq_m_f): Likewise.     (mve_vmaxnmvq_f): Likewise.     (mve_vmaxnmvq_p_f): Likewise.     (mve_vmaxq_): Likewise.     (mve_vmaxq_m_): Likewise.     (mve_vmaxq_s): Likewise.     (mve_vmaxq_u): Likewise.     (mve_vmaxvq_): Likewise.     (mve_vmaxvq_p_): Likewise.     (mve_vminaq_m_s): Likewise.     
(mve_vminaq_s): Likewise.     (mve_vminavq_p_s): Likewise.     (mve_vminavq_s): Likewise.     (mve_vminnmaq_f): Likewise.     (mve_vminnmaq_m_f): Likewise.     (mve_vminnmavq_f): Likewise.     (mve_vminnmavq_p_f): Likewise.     (mve_vminnmq_f): Likewise.     (mve_vminnmq_m_f): Likewise.     (mve_vminnmvq_f): Likewise.     (mve_vminnmvq_p_f): Likewise.     (mve_vminq_): Likewise.     (mve_vminq_m_): Likewise.     (mve_vminq_s): Likewise.     (mve_vminq_u): Likewise.     (mve_vminvq_): Likewise.     (mve_vminvq_p_): Likewise.     (mve_vmladavaq_): Likewise.     (mve_vmladavaq_p_): Likewise.     (mve_vmladavaxq_p_s): Likewise.     (mve_vmladavaxq_s): Likewise.     (mve_vmladavq_): Likewise.     (mve_vmladavq_p_): Likewise.     (mve_vmladavxq_p_s): Likewise.     (mve_vmladavxq_s): Likewise.     (mve_vmlaldavaq_): Likewise.     (mve_vmlaldavaq_p_): Likewise.     (mve_vmlaldavaxq_): Likewise.     (mve_vmlaldavaxq_p_): Likewise.     (mve_vmlaldavaxq_s): Likewise.     (mve_vmlaldavq_): Likewise.     (mve_vmlaldavq_p_): Likewise.     (mve_vmlaldavxq_p_s): Likewise.     (mve_vmlaldavxq_s): Likewise.     (mve_vmlaq_m_n_): Likewise.     (mve_vmlaq_n_): Likewise.     (mve_vmlasq_m_n_): Likewise.     (mve_vmlasq_n_): Likewise.     (mve_vmlsdavaq_p_s): Likewise.     (mve_vmlsdavaq_s): Likewise.     (mve_vmlsdavaxq_p_s): Likewise.     (mve_vmlsdavaxq_s): Likewise.     (mve_vmlsdavq_p_s): Likewise.     (mve_vmlsdavq_s): Likewise.     (mve_vmlsdavxq_p_s): Likewise.     (mve_vmlsdavxq_s): Likewise.     (mve_vmlsldavaq_p_s): Likewise.     (mve_vmlsldavaq_s): Likewise.     (mve_vmlsldavaxq_p_s): Likewise.     (mve_vmlsldavaxq_s): Likewise.     (mve_vmlsldavq_p_s): Likewise.     (mve_vmlsldavq_s): Likewise.     (mve_vmlsldavxq_p_s): Likewise.     (mve_vmlsldavxq_s): Likewise.     (mve_vmovlbq_): Likewise.     (mve_vmovlbq_m_): Likewise.     (mve_vmovltq_): Likewise.     (mve_vmovltq_m_): Likewise.     (mve_vmovnbq_): Likewise.     (mve_vmovnbq_m_): Likewise.     
(mve_vmovntq_): Likewise.     (mve_vmovntq_m_): Likewise.     (mve_vmulhq_): Likewise.     (mve_vmulhq_m_): Likewise.     (mve_vmullbq_int_): Likewise.     (mve_vmullbq_int_m_): Likewise.     (mve_vmullbq_poly_m_p): Likewise.     (mve_vmullbq_poly_p): Likewise.     (mve_vmulltq_int_): Likewise.     (mve_vmulltq_int_m_): Likewise.     (mve_vmulltq_poly_m_p): Likewise.     (mve_vmulltq_poly_p): Likewise.     (mve_vmulq_): Likewise.     (mve_vmulq_f): Likewise.     (mve_vmulq_m_): Likewise.     (mve_vmulq_m_f): Likewise.     (mve_vmulq_m_n_): Likewise.     (mve_vmulq_m_n_f): Likewise.     (mve_vmulq_n_): Likewise.     (mve_vmulq_n_f): Likewise.     (mve_vmvnq_): Likewise.     (mve_vmvnq_m_): Likewise.     (mve_vmvnq_m_n_): Likewise.     (mve_vmvnq_n_): Likewise.     (mve_vmvnq_s): Likewise.     (mve_vmvnq_u): Likewise.     (mve_vnegq_f): Likewise.     (mve_vnegq_m_f): Likewise.     (mve_vnegq_m_s): Likewise.     (mve_vnegq_s): Likewise.     (mve_vornq_): Likewise.     (mve_vornq_f): Likewise.     (mve_vornq_m_): Likewise.     (mve_vornq_m_f): Likewise.     (mve_vornq_s): Likewise.     (mve_vornq_u): Likewise.     (mve_vorrq_): Likewise.     (mve_vorrq_f): Likewise.     (mve_vorrq_m_): Likewise.     (mve_vorrq_m_f): Likewise.     (mve_vorrq_m_n_): Likewise.     (mve_vorrq_n_): Likewise.     (mve_vorrq_s): Likewise.     (mve_vorrq_s): Likewise.     (mve_vqabsq_m_s): Likewise.     (mve_vqabsq_s): Likewise.     (mve_vqaddq_): Likewise.     (mve_vqaddq_m_): Likewise.     (mve_vqaddq_m_n_): Likewise.     (mve_vqaddq_n_): Likewise.     (mve_vqdmladhq_m_s): Likewise.     (mve_vqdmladhq_s): Likewise.     (mve_vqdmladhxq_m_s): Likewise.     (mve_vqdmladhxq_s): Likewise.     (mve_vqdmlahq_m_n_s): Likewise.     (mve_vqdmlahq_n_): Likewise.     (mve_vqdmlahq_n_s): Likewise.     (mve_vqdmlashq_m_n_s): Likewise.     (mve_vqdmlashq_n_): Likewise.     (mve_vqdmlashq_n_s): Likewise.     (mve_vqdmlsdhq_m_s): Likewise.     (mve_vqdmlsdhq_s): Likewise.     (mve_vqdmlsdhxq_m_s): Likewise.  
   (mve_vqdmlsdhxq_s): Likewise.     (mve_vqdmulhq_m_n_s): Likewise.     (mve_vqdmulhq_m_s): Likewise.     (mve_vqdmulhq_n_s): Likewise.     (mve_vqdmulhq_s): Likewise.     (mve_vqdmullbq_m_n_s): Likewise.     (mve_vqdmullbq_m_s): Likewise.     (mve_vqdmullbq_n_s): Likewise.     (mve_vqdmullbq_s): Likewise.     (mve_vqdmulltq_m_n_s): Likewise.     (mve_vqdmulltq_m_s): Likewise.     (mve_vqdmulltq_n_s): Likewise.     (mve_vqdmulltq_s): Likewise.     (mve_vqmovnbq_): Likewise.     (mve_vqmovnbq_m_): Likewise.     (mve_vqmovntq_): Likewise.     (mve_vqmovntq_m_): Likewise.     (mve_vqmovunbq_m_s): Likewise.     (mve_vqmovunbq_s): Likewise.     (mve_vqmovuntq_m_s): Likewise.     (mve_vqmovuntq_s): Likewise.     (mve_vqnegq_m_s): Likewise.     (mve_vqnegq_s): Likewise.     (mve_vqrdmladhq_m_s): Likewise.     (mve_vqrdmladhq_s): Likewise.     (mve_vqrdmladhxq_m_s): Likewise.     (mve_vqrdmladhxq_s): Likewise.     (mve_vqrdmlahq_m_n_s): Likewise.     (mve_vqrdmlahq_n_): Likewise.     (mve_vqrdmlahq_n_s): Likewise.     (mve_vqrdmlashq_m_n_s): Likewise.     (mve_vqrdmlashq_n_): Likewise.     (mve_vqrdmlashq_n_s): Likewise.     (mve_vqrdmlsdhq_m_s): Likewise.     (mve_vqrdmlsdhq_s): Likewise.     (mve_vqrdmlsdhxq_m_s): Likewise.     (mve_vqrdmlsdhxq_s): Likewise.     (mve_vqrdmulhq_m_n_s): Likewise.     (mve_vqrdmulhq_m_s): Likewise.     (mve_vqrdmulhq_n_s): Likewise.     (mve_vqrdmulhq_s): Likewise.     (mve_vqrshlq_): Likewise.     (mve_vqrshlq_m_): Likewise.     (mve_vqrshlq_m_n_): Likewise.     (mve_vqrshlq_n_): Likewise.     (mve_vqrshrnbq_m_n_): Likewise.     (mve_vqrshrnbq_n_): Likewise.     (mve_vqrshrntq_m_n_): Likewise.     (mve_vqrshrntq_n_): Likewise.     (mve_vqrshrunbq_m_n_s): Likewise.     (mve_vqrshrunbq_n_s): Likewise.     (mve_vqrshruntq_m_n_s): Likewise.     (mve_vqrshruntq_n_s): Likewise.     (mve_vqshlq_): Likewise.     (mve_vqshlq_m_): Likewise.     (mve_vqshlq_m_n_): Likewise.     (mve_vqshlq_m_r_): Likewise.     (mve_vqshlq_n_): Likewise.     
(mve_vqshlq_r_): Likewise.     (mve_vqshluq_m_n_s): Likewise.     (mve_vqshluq_n_s): Likewise.     (mve_vqshrnbq_m_n_): Likewise.     (mve_vqshrnbq_n_): Likewise.     (mve_vqshrntq_m_n_): Likewise.     (mve_vqshrntq_n_): Likewise.     (mve_vqshrunbq_m_n_s): Likewise.     (mve_vqshrunbq_n_s): Likewise.     (mve_vqshruntq_m_n_s): Likewise.     (mve_vqshruntq_n_s): Likewise.     (mve_vqsubq_): Likewise.     (mve_vqsubq_m_): Likewise.     (mve_vqsubq_m_n_): Likewise.     (mve_vqsubq_n_): Likewise.     (mve_vrev16q_v16qi): Likewise.     (mve_vrev16q_m_v16qi): Likewise.     (mve_vrev32q_): Likewise.     (mve_vrev32q_fv8hf): Likewise.     (mve_vrev32q_m_): Likewise.     (mve_vrev32q_m_fv8hf): Likewise.     (mve_vrev64q_): Likewise.     (mve_vrev64q_f): Likewise.     (mve_vrev64q_m_): Likewise.     (mve_vrev64q_m_f): Likewise.     (mve_vrhaddq_): Likewise.     (mve_vrhaddq_m_): Likewise.     (mve_vrmlaldavhaq_v4si): Likewise.     (mve_vrmlaldavhaq_p_sv4si): Likewise.     (mve_vrmlaldavhaq_p_uv4si): Likewise.     (mve_vrmlaldavhaq_sv4si): Likewise.     (mve_vrmlaldavhaq_uv4si): Likewise.     (mve_vrmlaldavhaxq_p_sv4si): Likewise.     (mve_vrmlaldavhaxq_sv4si): Likewise.     (mve_vrmlaldavhq_v4si): Likewise.     (mve_vrmlaldavhq_p_v4si): Likewise.     (mve_vrmlaldavhxq_p_sv4si): Likewise.     (mve_vrmlaldavhxq_sv4si): Likewise.     (mve_vrmlsldavhaq_p_sv4si): Likewise.     (mve_vrmlsldavhaq_sv4si): Likewise.     (mve_vrmlsldavhaxq_p_sv4si): Likewise.     (mve_vrmlsldavhaxq_sv4si): Likewise.     (mve_vrmlsldavhq_p_sv4si): Likewise.     (mve_vrmlsldavhq_sv4si): Likewise.     (mve_vrmlsldavhxq_p_sv4si): Likewise.     (mve_vrmlsldavhxq_sv4si): Likewise.     (mve_vrmulhq_): Likewise.     (mve_vrmulhq_m_): Likewise.     (mve_vrndaq_f): Likewise.     (mve_vrndaq_m_f): Likewise.     (mve_vrndmq_f): Likewise.     (mve_vrndmq_m_f): Likewise.     (mve_vrndnq_f): Likewise.     (mve_vrndnq_m_f): Likewise.     (mve_vrndpq_f): Likewise.     (mve_vrndpq_m_f): Likewise.     
(mve_vrndq_f): Likewise.     (mve_vrndq_m_f): Likewise.     (mve_vrndxq_f): Likewise.     (mve_vrndxq_m_f): Likewise.     (mve_vrshlq_): Likewise.     (mve_vrshlq_m_): Likewise.     (mve_vrshlq_m_n_): Likewise.     (mve_vrshlq_n_): Likewise.     (mve_vrshrnbq_m_n_): Likewise.     (mve_vrshrnbq_n_): Likewise.     (mve_vrshrntq_m_n_): Likewise.     (mve_vrshrntq_n_): Likewise.     (mve_vrshrq_m_n_): Likewise.     (mve_vrshrq_n_): Likewise.     (mve_vsbciq_v4si): Likewise.     (mve_vsbciq_m_v4si): Likewise.     (mve_vsbcq_v4si): Likewise.     (mve_vsbcq_m_v4si): Likewise.     (mve_vshlcq_): Likewise.     (mve_vshlcq_m_): Likewise.     (mve_vshllbq_m_n_): Likewise.     (mve_vshllbq_n_): Likewise.     (mve_vshlltq_m_n_): Likewise.     (mve_vshlltq_n_): Likewise.     (mve_vshlq_): Likewise.     (mve_vshlq_): Likewise.     (mve_vshlq_m_): Likewise.     (mve_vshlq_m_n_): Likewise.     (mve_vshlq_m_r_): Likewise.     (mve_vshlq_n_): Likewise.     (mve_vshlq_r_): Likewise.     (mve_vshrnbq_m_n_): Likewise.     (mve_vshrnbq_n_): Likewise.     (mve_vshrntq_m_n_): Likewise.     (mve_vshrntq_n_): Likewise.     (mve_vshrq_m_n_): Likewise.     (mve_vshrq_n_): Likewise.     (mve_vsliq_m_n_): Likewise.     (mve_vsliq_n_): Likewise.     (mve_vsriq_m_n_): Likewise.     (mve_vsriq_n_): Likewise.     (mve_vstrbq_): Likewise.     (mve_vstrbq_p_): Likewise.     (mve_vstrbq_scatter_offset__insn): Likewise.     (mve_vstrbq_scatter_offset_p__insn): Likewise.     (mve_vstrdq_scatter_base_v2di): Likewise.     (mve_vstrdq_scatter_base_p_v2di): Likewise.     (mve_vstrdq_scatter_base_wb_v2di): Likewise.     (mve_vstrdq_scatter_base_wb_p_v2di): Likewise.     (mve_vstrdq_scatter_offset_v2di_insn): Likewise.     (mve_vstrdq_scatter_offset_p_v2di_insn): Likewise.     (mve_vstrdq_scatter_shifted_offset_v2di_insn): Likewise.     (mve_vstrdq_scatter_shifted_offset_p_v2di_insn): Likewise.     (mve_vstrhq_): Likewise.     (mve_vstrhq_fv8hf): Likewise.     (mve_vstrhq_p_): Likewise.     
(mve_vstrhq_p_fv8hf): Likewise.     (mve_vstrhq_scatter_offset__insn): Likewise.     (mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.     (mve_vstrhq_scatter_offset_p__insn): Likewise.     (mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.  (mve_vstrhq_scatter_shifted_offset__insn): Likewise.     (mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.  (mve_vstrhq_scatter_shifted_offset_p__insn): Likewise.     (mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.     (mve_vstrwq_v4si): Likewise.     (mve_vstrwq_fv4sf): Likewise.     (mve_vstrwq_p_v4si): Likewise.     (mve_vstrwq_p_fv4sf): Likewise.     (mve_vstrwq_scatter_base_v4si): Likewise.     (mve_vstrwq_scatter_base_fv4sf): Likewise.     (mve_vstrwq_scatter_base_p_v4si): Likewise.     (mve_vstrwq_scatter_base_p_fv4sf): Likewise.     (mve_vstrwq_scatter_base_wb_v4si): Likewise.     (mve_vstrwq_scatter_base_wb_fv4sf): Likewise.     (mve_vstrwq_scatter_base_wb_p_v4si): Likewise.     (mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.     (mve_vstrwq_scatter_offset_v4si_insn): Likewise.     (mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.     (mve_vstrwq_scatter_offset_p_v4si_insn): Likewise.     (mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.     (mve_vstrwq_scatter_shifted_offset_v4si_insn): Likewise.     (mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.     (mve_vstrwq_scatter_shifted_offset_p_v4si_insn): Likewise.     (mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.     (mve_vsubq_): Likewise.     (mve_vsubq_f): Likewise.     (mve_vsubq_m_): Likewise.     (mve_vsubq_m_f): Likewise.     (mve_vsubq_m_n_): Likewise.     (mve_vsubq_m_n_f): Likewise.     (mve_vsubq_n_): Likewise.     (mve_vsubq_n_f): Likewise. 
commit 7a25d85f91d84e53e707bb36d052f8196e49e147
Author: Stam Markianos-Wright
Date:   Tue Oct 18 17:42:56 2022 +0100

    arm: Add define_attr to create a mapping between MVE predicated and
    unpredicated insns

I'd like to submit two patches that add support for Arm's MVE Tail
Predicated Low Overhead Loop feature.

--- Introduction ---

The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature we are
adding support for with this patch (although we only add codegen for
DLSTP/LETP instruction loops).

Previously, with commit d2ed233cb94 we added support for non-MVE DLS/LE
loops through the loop-doloop pass, which, given a standard MVE loop
like:

```
void __attribute__ ((noinline))
test (int16_t *a, int16_t *b, int16_t *c, int n)
{
  while (n > 0)
    {
      mve_pred16_t p = vctp16q (n);
      int16x8_t va = vldrhq_z_s16 (a, p);
      int16x8_t vb = vldrhq_z_s16 (b, p);
      int16x8_t vc = vaddq_x_s16 (va, vb, p);
      vstrhq_p_s16 (c, vc, p);
      c += 8;
      a += 8;
      b += 8;
      n -= 8;
    }
}
```

.. would output:

```
        dls     lr, lr
.L3:
        vctp.16 r3
        vmrs    ip, P0  @ movhi
        sxth    ip, ip
        vmsr    P0, ip  @ movhi
        mov     r4, r0
        vpst
        vldrht.16       q2, [r4]
        mov     r4, r1
        vmov    q3, q0
        vpst
        vldrht.16       q1, [r4]
        mov     r4, r2
        vpst
        vaddt.i16       q3, q2, q1
        subs    r3, r3, #8
        vpst
        vstrht.16       q3, [r4]
        adds    r0, r0, #16
        adds    r1, r1, #16
        adds    r2, r2, #16
        le      lr, .L3
```

where the LE instruction will decrement LR by 1, then compare and
branch if needed.

(There are also other inefficiencies in the above code, like the
pointless vmrs/sxth/vmsr on the VPR, the adds not being merged into
the vldrht/vstrht as #16 offsets, and some random movs -- but those
are separate problems.)

The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the loop, we
  place the number of elements to be processed by the loop into LR.
* Instead of decrementing LR by one, LETP will decrement it by
  FPSCR.LTPSIZE, which is the number of elements being processed in
  each iteration: 16 for 8-bit elements, 8 for 16-bit elements, etc.
* On the final iteration, automatic Loop Tail Predication is
  performed, as if the instructions within the loop had been VPT
  predicated with a VCTP generating the VPR predicate in every loop
  iteration.

The dlstp/letp loop now looks like:

```
        dlstp.16        lr, r3
.L14:
        mov     r3, r0
        vldrh.16        q3, [r3]
        mov     r3, r1
        vldrh.16        q2, [r3]
        mov     r3, r2
        vadd.i16        q3, q3, q2
        adds    r0, r0, #16
        vstrh.16        q3, [r3]
        adds    r1, r1, #16
        adds    r2, r2, #16
        letp    lr, .L14
```

Since the loop tail predication is automatic, we have eliminated the
VCTP that had been specified by the user in the intrinsic and
converted the VPT-predicated instructions into their unpredicated
equivalents (which also saves us from VPST insns).  The LETP
instruction here decrements LR by 8 in each iteration.

--- This 1/2 patch ---

This first patch lays some groundwork by adding an attribute to md
patterns; the second patch contains the functional changes.

One major difficulty in implementing MVE Tail-Predicated Low Overhead
Loops was the need to transform VPT-predicated insns in the insn
chain into their unpredicated equivalents, like:
`mve_vldrbq_z_ -> mve_vldrbq_`.
This requires us to have a deterministic link between two different
patterns in mve.md -- this _could_ be done by re-ordering the
entirety of mve.md such that the patterns are at some constant icode
proximity (e.g. having the _z variant immediately after the
unpredicated version would mean that to map from the former to the
latter you could use icode-1), but that is a very messy solution that
would lead to complex unknown dependencies between the ordering of
patterns.  This patch provides an alternative: using an insn
attribute to encode the icode of the unpredicated instruction.

No regressions on arm-none-eabi with an MVE target.
Thank you, Stam Markianos-Wright gcc/ChangeLog: * config/arm/arm.md (mve_unpredicated_insn): New attribute. * config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define. (MVE_VPT_UNPREDICATED_INSN_P): Likewise. (MVE_VPT_PREDICABLE_INSN_P): Likewise. * config/arm/vec-common.md (mve_vshlq_): Add attribute. * config/arm/mve.md (arm_vcx1q_p_v16qi): Add attribute. (arm_vcx1qv16qi): Likewise. (arm_vcx1qav16qi): Likewise. (arm_vcx1qv16qi): Likewise. (arm_vcx2q_p_v16qi): Likewise. (arm_vcx2qv16qi): Likewise. (arm_vcx2qav16qi): Likewise. (arm_vcx2qv16qi): Likewise. (arm_vcx3q_p_v16qi): Likewise. (arm_vcx3qv16qi): Likewise. (arm_vcx3qav16qi): Likewise. (arm_vcx3qv16qi): Likewise. (mve_vabavq_): Likewise. (mve_vabavq_p_): Likewise. (mve_vabdq_): Likewise. (mve_vabdq_f): Likewise. (mve_vabdq_m_): Likewise. (mve_vabdq_m_f): Likewise. (mve_vabsq_f): Likewise. (mve_vabsq_m_f): Likewise. (mve_vabsq_m_s): Likewise. (mve_vabsq_s): Likewise. (mve_vadciq_v4si): Likewise. (mve_vadciq_m_v4si): Likewise. (mve_vadcq_v4si): Likewise. (mve_vadcq_m_v4si): Likewise. (mve_vaddlvaq_v4si): Likewise. (mve_vaddlvaq_p_v4si): Likewise. (mve_vaddlvq_v4si): Likewise. (mve_vaddlvq_p_v4si): Likewise. (mve_vaddq_f): Likewise. (mve_vaddq_m_): Likewise. (mve_vaddq_m_f): Likewise. (mve_vaddq_m_n_): Likewise. (mve_vaddq_m_n_f): Likewise. (mve_vaddq_n_): Likewise. (mve_vaddq_n_f): Likewise. (mve_vaddq): Likewise. (mve_vaddvaq_): Likewise. (mve_vaddvaq_p_): Likewise. (mve_vaddvq_): Likewise. (mve_vaddvq_p_): Likewise. (mve_vandq_): Likewise. (mve_vandq_f): Likewise. (mve_vandq_m_): Likewise. (mve_vandq_m_f): Likewise. (mve_vandq_s): Likewise. (mve_vandq_u): Likewise. (mve_vbicq_): Likewise. (mve_vbicq_f): Likewise. (mve_vbicq_m_): Likewise. (mve_vbicq_m_f): Likewise. (mve_vbicq_m_n_): Likewise. (mve_vbicq_n_): Likewise. (mve_vbicq_s): Likewise. (mve_vbicq_u): Likewise. (mve_vbrsrq_m_n_): Likewise. (mve_vbrsrq_m_n_f): Likewise. (mve_vbrsrq_n_): Likewise. (mve_vbrsrq_n_f): Likewise. 
(mve_vcaddq_rot270_m_): Likewise. (mve_vcaddq_rot270_m_f): Likewise. (mve_vcaddq_rot270): Likewise. (mve_vcaddq_rot270): Likewise. (mve_vcaddq_rot90_m_): Likewise. (mve_vcaddq_rot90_m_f): Likewise. (mve_vcaddq_rot90): Likewise. (mve_vcaddq_rot90): Likewise. (mve_vcaddq): Likewise. (mve_vcaddq): Likewise. (mve_vclsq_m_s): Likewise. (mve_vclsq_s): Likewise. (mve_vclzq_): Likewise. (mve_vclzq_m_): Likewise. (mve_vclzq_s): Likewise. (mve_vclzq_u): Likewise. (mve_vcmlaq_m_f): Likewise. (mve_vcmlaq_rot180_m_f): Likewise. (mve_vcmlaq_rot180): Likewise. (mve_vcmlaq_rot270_m_f): Likewise. (mve_vcmlaq_rot270): Likewise. (mve_vcmlaq_rot90_m_f): Likewise. (mve_vcmlaq_rot90): Likewise. (mve_vcmlaq): Likewise. (mve_vcmlaq): Likewise. (mve_vcmpq_): Likewise. (mve_vcmpq_f): Likewise. (mve_vcmpq_n_): Likewise. (mve_vcmpq_n_f): Likewise. (mve_vcmpcsq_): Likewise. (mve_vcmpcsq_m_n_u): Likewise. (mve_vcmpcsq_m_u): Likewise. (mve_vcmpcsq_n_): Likewise. (mve_vcmpeqq_): Likewise. (mve_vcmpeqq_f): Likewise. (mve_vcmpeqq_m_): Likewise. (mve_vcmpeqq_m_f): Likewise. (mve_vcmpeqq_m_n_): Likewise. (mve_vcmpeqq_m_n_f): Likewise. (mve_vcmpeqq_n_): Likewise. (mve_vcmpeqq_n_f): Likewise. (mve_vcmpgeq_): Likewise. (mve_vcmpgeq_f): Likewise. (mve_vcmpgeq_m_f): Likewise. (mve_vcmpgeq_m_n_f): Likewise. (mve_vcmpgeq_m_n_s): Likewise. (mve_vcmpgeq_m_s): Likewise. (mve_vcmpgeq_n_): Likewise. (mve_vcmpgeq_n_f): Likewise. (mve_vcmpgtq_): Likewise. (mve_vcmpgtq_f): Likewise. (mve_vcmpgtq_m_f): Likewise. (mve_vcmpgtq_m_n_f): Likewise. (mve_vcmpgtq_m_n_s): Likewise. (mve_vcmpgtq_m_s): Likewise. (mve_vcmpgtq_n_): Likewise. (mve_vcmpgtq_n_f): Likewise. (mve_vcmphiq_): Likewise. (mve_vcmphiq_m_n_u): Likewise. (mve_vcmphiq_m_u): Likewise. (mve_vcmphiq_n_): Likewise. (mve_vcmpleq_): Likewise. (mve_vcmpleq_f): Likewise. (mve_vcmpleq_m_f): Likewise. (mve_vcmpleq_m_n_f): Likewise. (mve_vcmpleq_m_n_s): Likewise. (mve_vcmpleq_m_s): Likewise. (mve_vcmpleq_n_): Likewise. (mve_vcmpleq_n_f): Likewise. 
(mve_vcmpltq_): Likewise. (mve_vcmpltq_f): Likewise. (mve_vcmpltq_m_f): Likewise. (mve_vcmpltq_m_n_f): Likewise. (mve_vcmpltq_m_n_s): Likewise. (mve_vcmpltq_m_s): Likewise. (mve_vcmpltq_n_): Likewise. (mve_vcmpltq_n_f): Likewise. (mve_vcmpneq_): Likewise. (mve_vcmpneq_f): Likewise. (mve_vcmpneq_m_): Likewise. (mve_vcmpneq_m_f): Likewise. (mve_vcmpneq_m_n_): Likewise. (mve_vcmpneq_m_n_f): Likewise. (mve_vcmpneq_n_): Likewise. (mve_vcmpneq_n_f): Likewise. (mve_vcmulq_m_f): Likewise. (mve_vcmulq_rot180_m_f): Likewise. (mve_vcmulq_rot180): Likewise. (mve_vcmulq_rot270_m_f): Likewise. (mve_vcmulq_rot270): Likewise. (mve_vcmulq_rot90_m_f): Likewise. (mve_vcmulq_rot90): Likewise. (mve_vcmulq): Likewise. (mve_vcmulq): Likewise. (mve_vctpq_mhi): Likewise. (mve_vctpqhi): Likewise. (mve_vcvtaq_): Likewise. (mve_vcvtaq_m_): Likewise. (mve_vcvtbq_f16_f32v8hf): Likewise. (mve_vcvtbq_f32_f16v4sf): Likewise. (mve_vcvtbq_m_f16_f32v8hf): Likewise. (mve_vcvtbq_m_f32_f16v4sf): Likewise. (mve_vcvtmq_): Likewise. (mve_vcvtmq_m_): Likewise. (mve_vcvtnq_): Likewise. (mve_vcvtnq_m_): Likewise. (mve_vcvtpq_): Likewise. (mve_vcvtpq_m_): Likewise. (mve_vcvtq_from_f_): Likewise. (mve_vcvtq_m_from_f_): Likewise. (mve_vcvtq_m_n_from_f_): Likewise. (mve_vcvtq_m_n_to_f_): Likewise. (mve_vcvtq_m_to_f_): Likewise. (mve_vcvtq_n_from_f_): Likewise. (mve_vcvtq_n_to_f_): Likewise. (mve_vcvtq_to_f_): Likewise. (mve_vcvttq_f16_f32v8hf): Likewise. (mve_vcvttq_f32_f16v4sf): Likewise. (mve_vcvttq_m_f16_f32v8hf): Likewise. (mve_vcvttq_m_f32_f16v4sf): Likewise. (mve_vddupq_m_wb_u_insn): Likewise. (mve_vddupq_u_insn): Likewise. (mve_vdupq_m_n_): Likewise. (mve_vdupq_m_n_f): Likewise. (mve_vdupq_n_): Likewise. (mve_vdupq_n_f): Likewise. (mve_vdwdupq_m_wb_u_insn): Likewise. (mve_vdwdupq_wb_u_insn): Likewise. (mve_veorq_): Likewise. (mve_veorq_f): Likewise. (mve_veorq_m_): Likewise. (mve_veorq_m_f): Likewise. (mve_veorq_s): Likewise. (mve_veorq_u): Likewise. (mve_vfmaq_f): Likewise. (mve_vfmaq_m_f): Likewise. 
(mve_vfmaq_m_n_f): Likewise. (mve_vfmaq_n_f): Likewise. (mve_vfmasq_m_n_f): Likewise. (mve_vfmasq_n_f): Likewise. (mve_vfmsq_f): Likewise. (mve_vfmsq_m_f): Likewise. (mve_vhaddq_): Likewise. (mve_vhaddq_m_): Likewise. (mve_vhaddq_m_n_): Likewise. (mve_vhaddq_n_): Likewise. (mve_vhcaddq_rot270_m_s): Likewise. (mve_vhcaddq_rot270_s): Likewise. (mve_vhcaddq_rot90_m_s): Likewise. (mve_vhcaddq_rot90_s): Likewise. (mve_vhsubq_): Likewise. (mve_vhsubq_m_): Likewise. (mve_vhsubq_m_n_): Likewise. (mve_vhsubq_n_): Likewise. (mve_vidupq_m_wb_u_insn): Likewise. (mve_vidupq_u_insn): Likewise. (mve_viwdupq_m_wb_u_insn): Likewise. (mve_viwdupq_wb_u_insn): Likewise. (mve_vldrbq_): Likewise. (mve_vldrbq_gather_offset_): Likewise. (mve_vldrbq_gather_offset_z_): Likewise. (mve_vldrbq_z_): Likewise. (mve_vldrdq_gather_base_v2di): Likewise. (mve_vldrdq_gather_base_wb_v2di_insn): Likewise. (mve_vldrdq_gather_base_wb_z_v2di_insn): Likewise. (mve_vldrdq_gather_base_z_v2di): Likewise. (mve_vldrdq_gather_offset_v2di): Likewise. (mve_vldrdq_gather_offset_z_v2di): Likewise. (mve_vldrdq_gather_shifted_offset_v2di): Likewise. (mve_vldrdq_gather_shifted_offset_z_v2di): Likewise. (mve_vldrhq_): Likewise. (mve_vldrhq_fv8hf): Likewise. (mve_vldrhq_gather_offset_): Likewise. (mve_vldrhq_gather_offset_fv8hf): Likewise. (mve_vldrhq_gather_offset_z_): Likewise. (mve_vldrhq_gather_offset_z_fv8hf): Likewise. (mve_vldrhq_gather_shifted_offset_): Likewise. (mve_vldrhq_gather_shifted_offset_fv8hf): Likewise. (mve_vldrhq_gather_shifted_offset_z_): Likewise. (mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise. (mve_vldrhq_z_): Likewise. (mve_vldrhq_z_fv8hf): Likewise. (mve_vldrwq_v4si): Likewise. (mve_vldrwq_fv4sf): Likewise. (mve_vldrwq_gather_base_v4si): Likewise. (mve_vldrwq_gather_base_fv4sf): Likewise. (mve_vldrwq_gather_base_wb_v4si_insn): Likewise. (mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise. (mve_vldrwq_gather_base_wb_z_v4si_insn): Likewise. (mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise. 
(mve_vldrwq_gather_base_z_v4si): Likewise. (mve_vldrwq_gather_base_z_fv4sf): Likewise. (mve_vldrwq_gather_offset_v4si): Likewise. (mve_vldrwq_gather_offset_fv4sf): Likewise. (mve_vldrwq_gather_offset_z_v4si): Likewise. (mve_vldrwq_gather_offset_z_fv4sf): Likewise. (mve_vldrwq_gather_shifted_offset_v4si): Likewise. (mve_vldrwq_gather_shifted_offset_fv4sf): Likewise. (mve_vldrwq_gather_shifted_offset_z_v4si): Likewise. (mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise. (mve_vldrwq_z_v4si): Likewise. (mve_vldrwq_z_fv4sf): Likewise. (mve_vmaxaq_m_s): Likewise. (mve_vmaxaq_s): Likewise. (mve_vmaxavq_p_s): Likewise. (mve_vmaxavq_s): Likewise. (mve_vmaxnmaq_f): Likewise. (mve_vmaxnmaq_m_f): Likewise. (mve_vmaxnmavq_f): Likewise. (mve_vmaxnmavq_p_f): Likewise. (mve_vmaxnmq_f): Likewise. (mve_vmaxnmq_m_f): Likewise. (mve_vmaxnmvq_f): Likewise. (mve_vmaxnmvq_p_f): Likewise. (mve_vmaxq_): Likewise. (mve_vmaxq_m_): Likewise. (mve_vmaxq_s): Likewise. (mve_vmaxq_u): Likewise. (mve_vmaxvq_): Likewise. (mve_vmaxvq_p_): Likewise. (mve_vminaq_m_s): Likewise. (mve_vminaq_s): Likewise. (mve_vminavq_p_s): Likewise. (mve_vminavq_s): Likewise. (mve_vminnmaq_f): Likewise. (mve_vminnmaq_m_f): Likewise. (mve_vminnmavq_f): Likewise. (mve_vminnmavq_p_f): Likewise. (mve_vminnmq_f): Likewise. (mve_vminnmq_m_f): Likewise. (mve_vminnmvq_f): Likewise. (mve_vminnmvq_p_f): Likewise. (mve_vminq_): Likewise. (mve_vminq_m_): Likewise. (mve_vminq_s): Likewise. (mve_vminq_u): Likewise. (mve_vminvq_): Likewise. (mve_vminvq_p_): Likewise. (mve_vmladavaq_): Likewise. (mve_vmladavaq_p_): Likewise. (mve_vmladavaxq_p_s): Likewise. (mve_vmladavaxq_s): Likewise. (mve_vmladavq_): Likewise. (mve_vmladavq_p_): Likewise. (mve_vmladavxq_p_s): Likewise. (mve_vmladavxq_s): Likewise. (mve_vmlaldavaq_): Likewise. (mve_vmlaldavaq_p_): Likewise. (mve_vmlaldavaxq_): Likewise. (mve_vmlaldavaxq_p_): Likewise. (mve_vmlaldavaxq_s): Likewise. (mve_vmlaldavq_): Likewise. (mve_vmlaldavq_p_): Likewise. 
(mve_vmlaldavxq_p_s): Likewise. (mve_vmlaldavxq_s): Likewise. (mve_vmlaq_m_n_): Likewise. (mve_vmlaq_n_): Likewise. (mve_vmlasq_m_n_): Likewise. (mve_vmlasq_n_): Likewise. (mve_vmlsdavaq_p_s): Likewise. (mve_vmlsdavaq_s): Likewise. (mve_vmlsdavaxq_p_s): Likewise. (mve_vmlsdavaxq_s): Likewise. (mve_vmlsdavq_p_s): Likewise. (mve_vmlsdavq_s): Likewise. (mve_vmlsdavxq_p_s): Likewise. (mve_vmlsdavxq_s): Likewise. (mve_vmlsldavaq_p_s): Likewise. (mve_vmlsldavaq_s): Likewise. (mve_vmlsldavaxq_p_s): Likewise. (mve_vmlsldavaxq_s): Likewise. (mve_vmlsldavq_p_s): Likewise. (mve_vmlsldavq_s): Likewise. (mve_vmlsldavxq_p_s): Likewise. (mve_vmlsldavxq_s): Likewise. (mve_vmovlbq_): Likewise. (mve_vmovlbq_m_): Likewise. (mve_vmovltq_): Likewise. (mve_vmovltq_m_): Likewise. (mve_vmovnbq_): Likewise. (mve_vmovnbq_m_): Likewise. (mve_vmovntq_): Likewise. (mve_vmovntq_m_): Likewise. (mve_vmulhq_): Likewise. (mve_vmulhq_m_): Likewise. (mve_vmullbq_int_): Likewise. (mve_vmullbq_int_m_): Likewise. (mve_vmullbq_poly_m_p): Likewise. (mve_vmullbq_poly_p): Likewise. (mve_vmulltq_int_): Likewise. (mve_vmulltq_int_m_): Likewise. (mve_vmulltq_poly_m_p): Likewise. (mve_vmulltq_poly_p): Likewise. (mve_vmulq_): Likewise. (mve_vmulq_f): Likewise. (mve_vmulq_m_): Likewise. (mve_vmulq_m_f): Likewise. (mve_vmulq_m_n_): Likewise. (mve_vmulq_m_n_f): Likewise. (mve_vmulq_n_): Likewise. (mve_vmulq_n_f): Likewise. (mve_vmvnq_): Likewise. (mve_vmvnq_m_): Likewise. (mve_vmvnq_m_n_): Likewise. (mve_vmvnq_n_): Likewise. (mve_vmvnq_s): Likewise. (mve_vmvnq_u): Likewise. (mve_vnegq_f): Likewise. (mve_vnegq_m_f): Likewise. (mve_vnegq_m_s): Likewise. (mve_vnegq_s): Likewise. (mve_vornq_): Likewise. (mve_vornq_f): Likewise. (mve_vornq_m_): Likewise. (mve_vornq_m_f): Likewise. (mve_vornq_s): Likewise. (mve_vornq_u): Likewise. (mve_vorrq_): Likewise. (mve_vorrq_f): Likewise. (mve_vorrq_m_): Likewise. (mve_vorrq_m_f): Likewise. (mve_vorrq_m_n_): Likewise. (mve_vorrq_n_): Likewise. (mve_vorrq_s): Likewise. 
(mve_vorrq_s): Likewise. (mve_vqabsq_m_s): Likewise. (mve_vqabsq_s): Likewise. (mve_vqaddq_): Likewise. (mve_vqaddq_m_): Likewise. (mve_vqaddq_m_n_): Likewise. (mve_vqaddq_n_): Likewise. (mve_vqdmladhq_m_s): Likewise. (mve_vqdmladhq_s): Likewise. (mve_vqdmladhxq_m_s): Likewise. (mve_vqdmladhxq_s): Likewise. (mve_vqdmlahq_m_n_s): Likewise. (mve_vqdmlahq_n_): Likewise. (mve_vqdmlahq_n_s): Likewise. (mve_vqdmlashq_m_n_s): Likewise. (mve_vqdmlashq_n_): Likewise. (mve_vqdmlashq_n_s): Likewise. (mve_vqdmlsdhq_m_s): Likewise. (mve_vqdmlsdhq_s): Likewise. (mve_vqdmlsdhxq_m_s): Likewise. (mve_vqdmlsdhxq_s): Likewise. (mve_vqdmulhq_m_n_s): Likewise. (mve_vqdmulhq_m_s): Likewise. (mve_vqdmulhq_n_s): Likewise. (mve_vqdmulhq_s): Likewise. (mve_vqdmullbq_m_n_s): Likewise. (mve_vqdmullbq_m_s): Likewise. (mve_vqdmullbq_n_s): Likewise. (mve_vqdmullbq_s): Likewise. (mve_vqdmulltq_m_n_s): Likewise. (mve_vqdmulltq_m_s): Likewise. (mve_vqdmulltq_n_s): Likewise. (mve_vqdmulltq_s): Likewise. (mve_vqmovnbq_): Likewise. (mve_vqmovnbq_m_): Likewise. (mve_vqmovntq_): Likewise. (mve_vqmovntq_m_): Likewise. (mve_vqmovunbq_m_s): Likewise. (mve_vqmovunbq_s): Likewise. (mve_vqmovuntq_m_s): Likewise. (mve_vqmovuntq_s): Likewise. (mve_vqnegq_m_s): Likewise. (mve_vqnegq_s): Likewise. (mve_vqrdmladhq_m_s): Likewise. (mve_vqrdmladhq_s): Likewise. (mve_vqrdmladhxq_m_s): Likewise. (mve_vqrdmladhxq_s): Likewise. (mve_vqrdmlahq_m_n_s): Likewise. (mve_vqrdmlahq_n_): Likewise. (mve_vqrdmlahq_n_s): Likewise. (mve_vqrdmlashq_m_n_s): Likewise. (mve_vqrdmlashq_n_): Likewise. (mve_vqrdmlashq_n_s): Likewise. (mve_vqrdmlsdhq_m_s): Likewise. (mve_vqrdmlsdhq_s): Likewise. (mve_vqrdmlsdhxq_m_s): Likewise. (mve_vqrdmlsdhxq_s): Likewise. (mve_vqrdmulhq_m_n_s): Likewise. (mve_vqrdmulhq_m_s): Likewise. (mve_vqrdmulhq_n_s): Likewise. (mve_vqrdmulhq_s): Likewise. (mve_vqrshlq_): Likewise. (mve_vqrshlq_m_): Likewise. (mve_vqrshlq_m_n_): Likewise. (mve_vqrshlq_n_): Likewise. (mve_vqrshrnbq_m_n_): Likewise. 
(mve_vqrshrnbq_n_): Likewise. (mve_vqrshrntq_m_n_): Likewise. (mve_vqrshrntq_n_): Likewise. (mve_vqrshrunbq_m_n_s): Likewise. (mve_vqrshrunbq_n_s): Likewise. (mve_vqrshruntq_m_n_s): Likewise. (mve_vqrshruntq_n_s): Likewise. (mve_vqshlq_): Likewise. (mve_vqshlq_m_): Likewise. (mve_vqshlq_m_n_): Likewise. (mve_vqshlq_m_r_): Likewise. (mve_vqshlq_n_): Likewise. (mve_vqshlq_r_): Likewise. (mve_vqshluq_m_n_s): Likewise. (mve_vqshluq_n_s): Likewise. (mve_vqshrnbq_m_n_): Likewise. (mve_vqshrnbq_n_): Likewise. (mve_vqshrntq_m_n_): Likewise. (mve_vqshrntq_n_): Likewise. (mve_vqshrunbq_m_n_s): Likewise. (mve_vqshrunbq_n_s): Likewise. (mve_vqshruntq_m_n_s): Likewise. (mve_vqshruntq_n_s): Likewise. (mve_vqsubq_): Likewise. (mve_vqsubq_m_): Likewise. (mve_vqsubq_m_n_): Likewise. (mve_vqsubq_n_): Likewise. (mve_vrev16q_v16qi): Likewise. (mve_vrev16q_m_v16qi): Likewise. (mve_vrev32q_): Likewise. (mve_vrev32q_fv8hf): Likewise. (mve_vrev32q_m_): Likewise. (mve_vrev32q_m_fv8hf): Likewise. (mve_vrev64q_): Likewise. (mve_vrev64q_f): Likewise. (mve_vrev64q_m_): Likewise. (mve_vrev64q_m_f): Likewise. (mve_vrhaddq_): Likewise. (mve_vrhaddq_m_): Likewise. (mve_vrmlaldavhaq_v4si): Likewise. (mve_vrmlaldavhaq_p_sv4si): Likewise. (mve_vrmlaldavhaq_p_uv4si): Likewise. (mve_vrmlaldavhaq_sv4si): Likewise. (mve_vrmlaldavhaq_uv4si): Likewise. (mve_vrmlaldavhaxq_p_sv4si): Likewise. (mve_vrmlaldavhaxq_sv4si): Likewise. (mve_vrmlaldavhq_v4si): Likewise. (mve_vrmlaldavhq_p_v4si): Likewise. (mve_vrmlaldavhxq_p_sv4si): Likewise. (mve_vrmlaldavhxq_sv4si): Likewise. (mve_vrmlsldavhaq_p_sv4si): Likewise. (mve_vrmlsldavhaq_sv4si): Likewise. (mve_vrmlsldavhaxq_p_sv4si): Likewise. (mve_vrmlsldavhaxq_sv4si): Likewise. (mve_vrmlsldavhq_p_sv4si): Likewise. (mve_vrmlsldavhq_sv4si): Likewise. (mve_vrmlsldavhxq_p_sv4si): Likewise. (mve_vrmlsldavhxq_sv4si): Likewise. (mve_vrmulhq_): Likewise. (mve_vrmulhq_m_): Likewise. (mve_vrndaq_f): Likewise. (mve_vrndaq_m_f): Likewise. (mve_vrndmq_f): Likewise. 
(mve_vrndmq_m_f): Likewise. (mve_vrndnq_f): Likewise. (mve_vrndnq_m_f): Likewise. (mve_vrndpq_f): Likewise. (mve_vrndpq_m_f): Likewise. (mve_vrndq_f): Likewise. (mve_vrndq_m_f): Likewise. (mve_vrndxq_f): Likewise. (mve_vrndxq_m_f): Likewise. (mve_vrshlq_): Likewise. (mve_vrshlq_m_): Likewise. (mve_vrshlq_m_n_): Likewise. (mve_vrshlq_n_): Likewise. (mve_vrshrnbq_m_n_): Likewise. (mve_vrshrnbq_n_): Likewise. (mve_vrshrntq_m_n_): Likewise. (mve_vrshrntq_n_): Likewise. (mve_vrshrq_m_n_): Likewise. (mve_vrshrq_n_): Likewise. (mve_vsbciq_v4si): Likewise. (mve_vsbciq_m_v4si): Likewise. (mve_vsbcq_v4si): Likewise. (mve_vsbcq_m_v4si): Likewise. (mve_vshlcq_): Likewise. (mve_vshlcq_m_): Likewise. (mve_vshllbq_m_n_): Likewise. (mve_vshllbq_n_): Likewise. (mve_vshlltq_m_n_): Likewise. (mve_vshlltq_n_): Likewise. (mve_vshlq_): Likewise. (mve_vshlq_): Likewise. (mve_vshlq_m_): Likewise. (mve_vshlq_m_n_): Likewise. (mve_vshlq_m_r_): Likewise. (mve_vshlq_n_): Likewise. (mve_vshlq_r_): Likewise. (mve_vshrnbq_m_n_): Likewise. (mve_vshrnbq_n_): Likewise. (mve_vshrntq_m_n_): Likewise. (mve_vshrntq_n_): Likewise. (mve_vshrq_m_n_): Likewise. (mve_vshrq_n_): Likewise. (mve_vsliq_m_n_): Likewise. (mve_vsliq_n_): Likewise. (mve_vsriq_m_n_): Likewise. (mve_vsriq_n_): Likewise. (mve_vstrbq_): Likewise. (mve_vstrbq_p_): Likewise. (mve_vstrbq_scatter_offset__insn): Likewise. (mve_vstrbq_scatter_offset_p__insn): Likewise. (mve_vstrdq_scatter_base_v2di): Likewise. (mve_vstrdq_scatter_base_p_v2di): Likewise. (mve_vstrdq_scatter_base_wb_v2di): Likewise. (mve_vstrdq_scatter_base_wb_p_v2di): Likewise. (mve_vstrdq_scatter_offset_v2di_insn): Likewise. (mve_vstrdq_scatter_offset_p_v2di_insn): Likewise. (mve_vstrdq_scatter_shifted_offset_v2di_insn): Likewise. (mve_vstrdq_scatter_shifted_offset_p_v2di_insn): Likewise. (mve_vstrhq_): Likewise. (mve_vstrhq_fv8hf): Likewise. (mve_vstrhq_p_): Likewise. (mve_vstrhq_p_fv8hf): Likewise. (mve_vstrhq_scatter_offset__insn): Likewise. 
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise. (mve_vstrhq_scatter_offset_p__insn): Likewise. (mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise. (mve_vstrhq_scatter_shifted_offset__insn): Likewise. (mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise. (mve_vstrhq_scatter_shifted_offset_p__insn): Likewise. (mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise. (mve_vstrwq_v4si): Likewise. (mve_vstrwq_fv4sf): Likewise. (mve_vstrwq_p_v4si): Likewise. (mve_vstrwq_p_fv4sf): Likewise. (mve_vstrwq_scatter_base_v4si): Likewise. (mve_vstrwq_scatter_base_fv4sf): Likewise. (mve_vstrwq_scatter_base_p_v4si): Likewise. (mve_vstrwq_scatter_base_p_fv4sf): Likewise. (mve_vstrwq_scatter_base_wb_v4si): Likewise. (mve_vstrwq_scatter_base_wb_fv4sf): Likewise. (mve_vstrwq_scatter_base_wb_p_v4si): Likewise. (mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise. (mve_vstrwq_scatter_offset_v4si_insn): Likewise. (mve_vstrwq_scatter_offset_fv4sf_insn): Likewise. (mve_vstrwq_scatter_offset_p_v4si_insn): Likewise. (mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise. (mve_vstrwq_scatter_shifted_offset_v4si_insn): Likewise. (mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise. (mve_vstrwq_scatter_shifted_offset_p_v4si_insn): Likewise. (mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise. (mve_vsubq_): Likewise. (mve_vsubq_f): Likewise. (mve_vsubq_m_): Likewise. (mve_vsubq_m_f): Likewise. (mve_vsubq_m_n_): Likewise. (mve_vsubq_m_n_f): Likewise. (mve_vsubq_n_): Likewise. (mve_vsubq_n_f): Likewise. diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h index 4f54530adcb..f06e5c2cda4 100644 --- a/gcc/config/arm/arm.h +++ b/gcc/config/arm/arm.h @@ -2358,6 +2358,21 @@ extern int making_const_table; else if (TARGET_THUMB1) \ thumb1_final_prescan_insn (INSN) +/* These defines are useful to refer to the value of the mve_unpredicated_insn + insn attribute. Note that, because these use the get_attr_* function, these + will change recog_data if (INSN) isn't current_insn. 
*/ +#define MVE_VPT_PREDICABLE_INSN_P(INSN) \ + (recog_memoized (INSN) >= 0 \ + && get_attr_mve_unpredicated_insn (INSN) != 0) \ + +#define MVE_VPT_PREDICATED_INSN_P(INSN) \ + (MVE_VPT_PREDICABLE_INSN_P (INSN) \ + && recog_memoized (INSN) != get_attr_mve_unpredicated_insn (INSN)) \ + +#define MVE_VPT_UNPREDICATED_INSN_P(INSN) \ + (MVE_VPT_PREDICABLE_INSN_P (INSN) \ + && recog_memoized (INSN) == get_attr_mve_unpredicated_insn (INSN)) \ + #define ARM_SIGN_EXTEND(x) ((HOST_WIDE_INT) \ (HOST_BITS_PER_WIDE_INT <= 32 ? (unsigned HOST_WIDE_INT) (x) \ : ((((unsigned HOST_WIDE_INT)(x)) & (unsigned HOST_WIDE_INT) 0xffffffff) |\ diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md index 2ac97232ffd..ee931ad6ebd 100644 --- a/gcc/config/arm/arm.md +++ b/gcc/config/arm/arm.md @@ -124,6 +124,8 @@ ; and not all ARM insns do. (define_attr "predicated" "yes,no" (const_string "no")) +(define_attr "mve_unpredicated_insn" "" (const_int 0)) + ; LENGTH of an instruction (in bytes) (define_attr "length" "" (const_int 4)) diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md index 2edd0b06370..71e43539616 100644 --- a/gcc/config/arm/iterators.md +++ b/gcc/config/arm/iterators.md @@ -2296,6 +2296,7 @@ (define_int_attr mmla_sfx [(UNSPEC_MATMUL_S "s8") (UNSPEC_MATMUL_U "u8") (UNSPEC_MATMUL_US "s8")]) + ;;MVE int attribute. (define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s") (VREV16Q_U "u") (VMVNQ_N_S "s") (VMVNQ_N_U "u") diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md index 6e4b143affa..87cbf6c1726 100644 --- a/gcc/config/arm/mve.md +++ b/gcc/config/arm/mve.md @@ -17,7 +17,7 @@ ;; along with GCC; see the file COPYING3. If not see ;; . 
-(define_insn "*mve_mov"
+(define_insn "mve_mov"
 [(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w , w, r,Ux,w")
 (match_operand:MVE_types 1 "general_operand" " w,r,w,DnDm,UxUi,r,w, Ul"))]
 "TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
@@ -81,18 +81,27 @@
 return "";
 }
 }
- [(set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_mov")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov")
+ (symbol_ref "CODE_FOR_mve_mov")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
 (set_attr "length" "4,8,8,4,4,8,4,8")
 (set_attr "thumb2_pool_range" "*,*,*,*,1018,*,*,*")
 (set_attr "neg_pool_range" "*,*,*,*,996,*,*,*")])
-(define_insn "*mve_vdup"
+(define_insn "mve_vdup"
 [(set (match_operand:MVE_vecs 0 "s_register_operand" "=w")
 (vec_duplicate:MVE_vecs (match_operand: 1 "s_register_operand" "r")))]
 "TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
 "vdup.\t%q0, %1"
- [(set_attr "length" "4")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdup"))
+ (set_attr "length" "4")
 (set_attr "type" "mve_move")])
 ;;
@@ -145,7 +154,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -159,7 +169,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -173,7 +184,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "v.f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -187,7 +199,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".%#\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -201,7 +214,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
 ;; [vcvttq_f32_f16])
@@ -214,7 +228,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtt.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -228,7 +243,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtb.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -242,7 +258,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvt.f%#.%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -256,7 +273,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -270,7 +288,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvt.%#.f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -284,7 +303,8 @@
 ]
 "TARGET_HAVE_MVE"
 "v.s%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vq_s"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -297,7 +317,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vmvn\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_u"))
+ (set_attr "type" "mve_move")
 ])
 (define_expand "mve_vmvnq_s"
 [
@@ -318,7 +339,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -331,7 +353,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vclz.i%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_s"))
+ (set_attr "type" "mve_move")
 ])
 (define_expand "mve_vclzq_u"
 [
@@ -354,7 +377,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -368,7 +392,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -382,7 +407,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -397,7 +423,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -411,7 +438,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtp.%#.f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -425,7 +453,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtn.%#.f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -439,7 +468,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtm.%#.f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -453,7 +483,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvta.%#.f%#\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -467,7 +498,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".i%#\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -481,7 +513,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -495,7 +528,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -509,7 +543,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vctp.\t%1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctpq"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -523,7 +558,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpnot"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vpnotv16bi"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -538,7 +574,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -553,7 +590,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvt.f.\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_"))
+ (set_attr "type" "mve_move")
 ])
 ;; [vcreateq_f])
@@ -599,7 +637,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;; Versions that take constant vectors as operand 2 (with all elements
@@ -617,7 +656,8 @@
 VALID_NEON_QREG_MODE (mode), true);
 }
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_s_imm"))
+ (set_attr "type" "mve_move")
 ])
 (define_insn "mve_vshrq_n_u_imm"
 [
@@ -632,7 +672,8 @@
 VALID_NEON_QREG_MODE (mode), true);
 }
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_u_imm"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -647,7 +688,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvt..f\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -662,8 +704,9 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
 ;;
 ;; [vcmpneq_, vcmpcsq_, vcmpeqq_, vcmpgeq_, vcmpgtq_, vcmphiq_, vcmpleq_, vcmpltq_])
@@ -676,7 +719,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vcmp.%#\t, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -691,7 +735,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vcmp.%# , %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -722,7 +767,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -739,7 +785,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".i%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -754,7 +801,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -769,7 +817,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -789,8 +838,11 @@
 "@
 vand\t%q0, %q1, %q2
 * return neon_output_logic_immediate (\"vand\", &operands[2], mode, 1, VALID_NEON_QREG_MODE (mode));"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_vandq_u")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move")
 ])
+
 (define_expand "mve_vandq_s"
 [
 (set (match_operand:MVE_2 0 "s_register_operand")
@@ -811,7 +863,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_u"))
+ (set_attr "type" "mve_move")
 ])
 (define_expand "mve_vbicq_s"
@@ -835,7 +888,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -853,7 +907,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %q2, #"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;; Auto vectorizer pattern for int vcadd
@@ -876,7 +931,8 @@
 ]
 "TARGET_HAVE_MVE"
 "veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_u"))
+ (set_attr "type" "mve_move")
 ])
 (define_expand "mve_veorq_s"
 [
@@ -904,7 +960,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -920,7 +977,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".s%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -935,7 +993,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
@@ -954,7 +1013,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -972,7 +1032,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -987,7 +1048,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vmullb.%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1002,7 +1064,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vmullt.%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1018,7 +1081,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".i%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1032,7 +1096,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_s"))
+ (set_attr "type" "mve_move")
 ])
 (define_expand "mve_vornq_u"
@@ -1061,7 +1126,8 @@
 "@
 vorr\t%q0, %q1, %q2
 * return neon_output_logic_immediate (\"vorr\", &operands[2], mode, 0, VALID_NEON_QREG_MODE (mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_s"))
+ (set_attr "type" "mve_move")
 ])
 (define_expand "mve_vorrq_u"
 [
@@ -1085,7 +1151,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1101,7 +1168,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1117,7 +1185,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_r_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1132,7 +1201,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1147,7 +1217,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1162,7 +1233,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1179,7 +1251,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1193,7 +1266,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vand\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1207,7 +1281,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1223,7 +1298,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q1, %q2, #"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1237,7 +1313,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcmp.f%# , %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1252,7 +1329,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcmp.f%# , %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_n_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1267,8 +1345,10 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vctpt.\t%1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctpq"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")
+])
 ;;
 ;; [vcvtbq_f16_f32])
@@ -1282,7 +1362,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtb.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1297,7 +1378,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vcvtt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1311,7 +1393,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1327,7 +1410,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1345,7 +1429,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1360,7 +1445,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%# %q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1378,7 +1464,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1398,7 +1485,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1414,7 +1502,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1428,7 +1517,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1442,7 +1532,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vorr\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1458,7 +1549,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".i%# %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1474,7 +1566,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".s%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1490,7 +1583,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".s%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1508,7 +1602,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1524,7 +1619,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1539,7 +1635,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vmullt.p%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1554,7 +1651,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vmullb.p%#\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1575,8 +1673,9 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcmpt.f%#\t, %q1, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_f"))
+ (set_attr "length""8")])
+
 ;;
 ;; [vcvtaq_m_u, vcvtaq_m_s])
 ;;
@@ -1590,8 +1689,10 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtat.%#.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
 ;;
 ;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
 ;;
@@ -1605,8 +1706,9 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtt.f%#.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
 ;;
 ;; [vqrshrnbq_n_u, vqrshrnbq_n_s]
@@ -1632,7 +1734,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1651,7 +1754,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1667,7 +1771,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1713,7 +1818,10 @@
 (match_dup 4)]
 VSHLCQ))]
 "TARGET_HAVE_MVE"
- "vshlc\t%q0, %1, %4")
+ "vshlc\t%q0, %1, %4"
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_"))
+ (set_attr "type" "mve_move")
+])
 ;;
 ;; [vabsq_m_s]
@@ -1733,7 +1841,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1749,7 +1858,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1772,7 +1882,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vcmpt.%#\t, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1795,7 +1906,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vcmpt.%#\t, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1811,7 +1923,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1828,7 +1941,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.s%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1847,7 +1961,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1866,7 +1981,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1885,7 +2001,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1906,7 +2023,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1922,7 +2040,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1938,7 +2057,8 @@
 ]
 "TARGET_HAVE_MVE"
 "\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1961,7 +2081,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".s%#\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -1978,7 +2099,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -1995,7 +2117,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_r_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2011,7 +2134,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2027,7 +2151,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2043,7 +2168,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2066,7 +2192,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;t.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2082,7 +2209,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
 ;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -2100,7 +2228,9 @@
 "@
 vcmul.f%# %q0, %q2, %q3, #
 vcmla.f%# %q0, %q2, %q3, #"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_q_f")
+ (symbol_ref "CODE_FOR_mve_q_f")])
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2121,7 +2251,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcmpt.f%#\t, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpq_n_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2137,7 +2268,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtbt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2153,7 +2285,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtbt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2169,7 +2302,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvttt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2185,8 +2319,9 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvttt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
 ;;
 ;; [vdupq_m_n_f])
@@ -2201,7 +2336,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;t.%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2218,7 +2354,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2235,7 +2372,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 ".f%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2252,7 +2390,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;t.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2271,7 +2410,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;t.f%#\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2290,7 +2430,8 @@
 ]
 "TARGET_HAVE_MVE"
 ".%#\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2309,7 +2450,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2326,7 +2468,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2347,7 +2490,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2363,7 +2507,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.i%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2380,7 +2525,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.i%#\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2396,7 +2542,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 ])
 ;;
@@ -2412,7 +2559,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;t.\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2428,7 +2576,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2444,7 +2593,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;t.%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2463,7 +2613,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2479,7 +2630,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtmt.%#.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2495,7 +2647,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtpt.%#.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2511,7 +2664,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtnt.%#.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2528,7 +2682,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtt.%#.f%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2544,7 +2699,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2560,8 +2716,9 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtt.%#.f%#\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
 ;;
 ;; [vabavq_p_s, vabavq_p_u])
@@ -2577,7 +2734,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length" "8")])
 ;;
@@ -2594,8 +2752,9 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\n\tt.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
 ;;
 ;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -2611,8 +2770,9 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
 ;;
 ;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -2628,7 +2788,8 @@
 ]
 "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
 "vpst\;vcvtt.f%#.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2668,7 +2829,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2687,8 +2849,9 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.i%# %q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
 ;;
 ;; [vaddq_m_u, vaddq_m_s]
@@ -2706,7 +2869,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.i%#\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2726,7 +2890,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2743,8 +2908,9 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
 ;;
 ;; [vcaddq_rot90_m_u, vcaddq_rot90_m_s]
@@ -2763,7 +2929,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %q3, #"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2791,7 +2958,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2812,7 +2980,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2829,7 +2998,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vmullbt.%# %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2846,7 +3016,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vmulltt.%# %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2863,7 +3034,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2881,7 +3053,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2899,7 +3072,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2916,7 +3090,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2936,7 +3111,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2964,7 +3140,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -2984,7 +3161,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_v4si"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -3002,7 +3180,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;t.%#\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -3019,7 +3198,8 @@
 ]
 "TARGET_HAVE_MVE"
 "vpst\;vmullbt.p%#\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p"))
+ (set_attr "type" "mve_move")
 (set_attr "length""8")])
 ;;
@@ -3036,7
+3216,8 @@ ] "TARGET_HAVE_MVE" "vpst\;vmulltt.p%#\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3054,7 +3235,8 @@ ] "TARGET_HAVE_MVE" "vpst\;t.s%#\t%q0, %q2, %3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3072,7 +3254,8 @@ ] "TARGET_HAVE_MVE" "vpst\;t.s%#\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3096,7 +3279,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;t.f%# %q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3117,7 +3301,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;t.f%#\t%q0, %q2, %3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3137,7 +3322,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;t\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3154,7 +3340,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;t.%#\t%q0, %q2, %3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_n_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3176,7 +3363,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;t.f%#\t%q0, %q2, %q3, #" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) 
;; @@ -3196,7 +3384,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;t.f%#\t%q0, %q2, %q3, #" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3213,7 +3402,8 @@ ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;vornt\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -3233,7 +3423,8 @@ output_asm_insn("vstrb.\t%q1, %E0",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_")) + (set_attr "length" "4")]) ;; ;; [vstrbq_scatter_offset_s vstrbq_scatter_offset_u] @@ -3261,7 +3452,8 @@ VSTRBSOQ))] "TARGET_HAVE_MVE" "vstrb.\t%q2, [%0, %q1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset__insn")) + (set_attr "length" "4")]) ;; ;; [vstrwq_scatter_base_s vstrwq_scatter_base_u] @@ -3283,7 +3475,8 @@ output_asm_insn("vstrw.u32\t%q2, [%q0, %1]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_v4si")) + (set_attr "length" "4")]) ;; ;; [vldrbq_gather_offset_s vldrbq_gather_offset_u] @@ -3306,7 +3499,8 @@ output_asm_insn ("vldrb.\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_")) + (set_attr "length" "4")]) ;; ;; [vldrbq_s vldrbq_u] @@ -3328,7 +3522,8 @@ output_asm_insn ("vldrb.\t%q0, %E1",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_")) + (set_attr "length" "4")]) ;; ;; [vldrwq_gather_base_s vldrwq_gather_base_u] @@ -3348,7 +3543,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops); return ""; } - [(set_attr "length" "4")]) + 
[(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_v4si")) + (set_attr "length" "4")]) ;; ;; [vstrbq_scatter_offset_p_s vstrbq_scatter_offset_p_u] @@ -3380,7 +3576,8 @@ VSTRBSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrbt.\t%q2, [%0, %q1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset__insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u] @@ -3403,7 +3600,8 @@ output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_v4si")) + (set_attr "length" "8")]) (define_insn "mve_vstrbq_p_" [(set (match_operand: 0 "mve_memory_operand" "=Ux") @@ -3421,7 +3619,8 @@ output_asm_insn ("vpst\;vstrbt.\t%q1, %E0",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_")) + (set_attr "length" "8")]) ;; ;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u] @@ -3446,7 +3645,8 @@ output_asm_insn ("vpst\n\tvldrbt.\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_")) + (set_attr "length" "8")]) ;; ;; [vldrbq_z_s vldrbq_z_u] @@ -3469,7 +3669,8 @@ output_asm_insn ("vpst\;vldrbt.\t%q0, %E1",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_")) + (set_attr "length" "8")]) ;; ;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u] @@ -3490,7 +3691,8 @@ output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_v4si")) + (set_attr "length" "8")]) ;; ;; [vldrhq_f] @@ -3509,7 +3711,8 @@ output_asm_insn ("vldrh.16\t%q0, %E1",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr 
"mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf")) + (set_attr "length" "4")]) ;; ;; [vldrhq_gather_offset_s vldrhq_gather_offset_u] @@ -3532,7 +3735,8 @@ output_asm_insn ("vldrh.\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_")) + (set_attr "length" "4")]) ;; ;; [vldrhq_gather_offset_z_s vldrhq_gather_offset_z_u] @@ -3557,7 +3761,8 @@ output_asm_insn ("vpst\n\tvldrht.\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_")) + (set_attr "length" "8")]) ;; ;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u] @@ -3580,7 +3785,8 @@ output_asm_insn ("vldrh.\t%q0, [%m1, %q2, uxtw #1]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_")) + (set_attr "length" "4")]) ;; ;; [vldrhq_gather_shifted_offset_z_s vldrhq_gather_shited_offset_z_u] @@ -3605,7 +3811,8 @@ output_asm_insn ("vpst\n\tvldrht.\t%q0, [%m1, %q2, uxtw #1]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_")) + (set_attr "length" "8")]) ;; ;; [vldrhq_s, vldrhq_u] @@ -3627,7 +3834,8 @@ output_asm_insn ("vldrh.\t%q0, %E1",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_")) + (set_attr "length" "4")]) ;; ;; [vldrhq_z_f] @@ -3647,7 +3855,8 @@ output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf")) + (set_attr "length" "8")]) ;; ;; [vldrhq_z_s vldrhq_z_u] @@ -3670,7 +3879,8 @@ output_asm_insn ("vpst\;vldrht.\t%q0, %E1",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref 
"CODE_FOR_mve_vldrhq_")) + (set_attr "length" "8")]) ;; ;; [vldrwq_f] @@ -3689,7 +3899,8 @@ output_asm_insn ("vldrw.32\t%q0, %E1",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vldrwq_s vldrwq_u] @@ -3708,7 +3919,8 @@ output_asm_insn ("vldrw.32\t%q0, %E1",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_v4si")) + (set_attr "length" "4")]) ;; ;; [vldrwq_z_f] @@ -3728,7 +3940,8 @@ output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vldrwq_z_s vldrwq_z_u] @@ -3748,7 +3961,8 @@ output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_v4si")) + (set_attr "length" "8")]) (define_expand "mve_vld1q_f" [(match_operand:MVE_0 0 "s_register_operand") @@ -3788,7 +4002,8 @@ output_asm_insn ("vldrd.64\t%q0, [%q1, %2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_v2di")) + (set_attr "length" "4")]) ;; ;; [vldrdq_gather_base_z_s vldrdq_gather_base_z_u] @@ -3809,7 +4024,8 @@ output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_v2di")) + (set_attr "length" "8")]) ;; ;; [vldrdq_gather_offset_s vldrdq_gather_offset_u] @@ -3829,7 +4045,8 @@ output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_v2di")) + (set_attr "length" "4")]) ;; ;; [vldrdq_gather_offset_z_s vldrdq_gather_offset_z_u] @@ -3850,7 
+4067,8 @@ output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_v2di")) + (set_attr "length" "8")]) ;; ;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u] @@ -3870,7 +4088,8 @@ output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2, uxtw #3]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_v2di")) + (set_attr "length" "4")]) ;; ;; [vldrdq_gather_shifted_offset_z_s vldrdq_gather_shifted_offset_z_u] @@ -3891,7 +4110,8 @@ output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_v2di")) + (set_attr "length" "8")]) ;; ;; [vldrhq_gather_offset_f] @@ -3911,7 +4131,8 @@ output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf")) + (set_attr "length" "4")]) ;; ;; [vldrhq_gather_offset_z_f] @@ -3933,7 +4154,8 @@ output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf")) + (set_attr "length" "8")]) ;; ;; [vldrhq_gather_shifted_offset_f] @@ -3953,7 +4175,8 @@ output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2, uxtw #1]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf")) + (set_attr "length" "4")]) ;; ;; [vldrhq_gather_shifted_offset_z_f] @@ -3975,7 +4198,8 @@ output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref 
"CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf")) + (set_attr "length" "8")]) ;; ;; [vldrwq_gather_base_f] @@ -3995,7 +4219,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vldrwq_gather_base_z_f] @@ -4016,7 +4241,8 @@ output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vldrwq_gather_offset_f] @@ -4036,7 +4262,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vldrwq_gather_offset_s vldrwq_gather_offset_u] @@ -4056,7 +4283,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_v4si")) + (set_attr "length" "4")]) ;; ;; [vldrwq_gather_offset_z_f] @@ -4078,7 +4306,8 @@ output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u] @@ -4100,7 +4329,8 @@ output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_v4si")) + (set_attr "length" "8")]) ;; ;; [vldrwq_gather_shifted_offset_f] @@ -4120,7 +4350,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref 
"CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vldrwq_gather_shifted_offset_s vldrwq_gather_shifted_offset_u] @@ -4140,7 +4371,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_v4si")) + (set_attr "length" "4")]) ;; ;; [vldrwq_gather_shifted_offset_z_f] @@ -4162,7 +4394,8 @@ output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u] @@ -4184,7 +4417,8 @@ output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_v4si")) + (set_attr "length" "8")]) ;; ;; [vstrhq_f] @@ -4203,7 +4437,8 @@ output_asm_insn ("vstrh.16\t%q1, %E0",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf")) + (set_attr "length" "4")]) ;; ;; [vstrhq_p_f] @@ -4224,7 +4459,8 @@ output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf")) + (set_attr "length" "8")]) ;; ;; [vstrhq_p_s vstrhq_p_u] @@ -4246,7 +4482,8 @@ output_asm_insn ("vpst\;vstrht.\t%q1, %E0",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_")) + (set_attr "length" "8")]) ;; ;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u] @@ -4278,7 +4515,8 @@ VSTRHSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrht.\t%q2, [%0, %q1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref 
"CODE_FOR_mve_vstrhq_scatter_offset__insn")) + (set_attr "length" "8")]) ;; ;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u] @@ -4306,7 +4544,8 @@ VSTRHSOQ))] "TARGET_HAVE_MVE" "vstrh.\t%q2, [%0, %q1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset__insn")) + (set_attr "length" "4")]) ;; ;; [vstrhq_scatter_shifted_offset_p_s vstrhq_scatter_shifted_offset_p_u] @@ -4338,7 +4577,8 @@ VSTRHSSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrht.\t%q2, [%0, %q1, uxtw #1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset__insn")) + (set_attr "length" "8")]) ;; ;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u] @@ -4367,7 +4607,8 @@ VSTRHSSOQ))] "TARGET_HAVE_MVE" "vstrh.\t%q2, [%0, %q1, uxtw #1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset__insn")) + (set_attr "length" "4")]) ;; ;; [vstrhq_s, vstrhq_u] @@ -4386,7 +4627,8 @@ output_asm_insn ("vstrh.\t%q1, %E0",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_")) + (set_attr "length" "4")]) ;; ;; [vstrwq_f] @@ -4405,7 +4647,8 @@ output_asm_insn ("vstrw.32\t%q1, %E0",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vstrwq_p_f] @@ -4426,7 +4669,8 @@ output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vstrwq_p_s vstrwq_p_u] @@ -4447,7 +4691,8 @@ output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_v4si")) + (set_attr "length" "8")]) ;; ;; [vstrwq_s 
vstrwq_u] @@ -4466,7 +4711,8 @@ output_asm_insn ("vstrw.32\t%q1, %E0",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_v4si")) + (set_attr "length" "4")]) (define_expand "mve_vst1q_f" [(match_operand: 0 "mve_memory_operand") @@ -4509,7 +4755,8 @@ output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_v2di")) + (set_attr "length" "8")]) ;; ;; [vstrdq_scatter_base_s vstrdq_scatter_base_u] @@ -4531,7 +4778,8 @@ output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_v2di")) + (set_attr "length" "4")]) ;; ;; [vstrdq_scatter_offset_p_s vstrdq_scatter_offset_p_u] @@ -4562,7 +4810,8 @@ VSTRDSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrdt.64\t%q2, [%0, %q1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_v2di_insn")) + (set_attr "length" "8")]) ;; ;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u] @@ -4590,7 +4839,8 @@ VSTRDSOQ))] "TARGET_HAVE_MVE" "vstrd.64\t%q2, [%0, %q1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_v2di_insn")) + (set_attr "length" "4")]) ;; ;; [vstrdq_scatter_shifted_offset_p_s vstrdq_scatter_shifted_offset_p_u] @@ -4622,7 +4872,8 @@ VSTRDSSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrdt.64\t%q2, [%0, %q1, uxtw #3]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_v2di_insn")) + (set_attr "length" "8")]) ;; ;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u] @@ -4651,7 +4902,8 @@ VSTRDSSOQ))] "TARGET_HAVE_MVE" "vstrd.64\t%q2, [%0, %q1, uxtw #3]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") 
(symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_v2di_insn")) + (set_attr "length" "4")]) ;; ;; [vstrhq_scatter_offset_f] @@ -4679,7 +4931,8 @@ VSTRHQSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vstrh.16\t%q2, [%0, %q1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn")) + (set_attr "length" "4")]) ;; ;; [vstrhq_scatter_offset_p_f] @@ -4710,7 +4963,8 @@ VSTRHQSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;vstrht.16\t%q2, [%0, %q1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn")) + (set_attr "length" "8")]) ;; ;; [vstrhq_scatter_shifted_offset_f] @@ -4738,7 +4992,8 @@ VSTRHQSSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vstrh.16\t%q2, [%0, %q1, uxtw #1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn")) + (set_attr "length" "4")]) ;; ;; [vstrhq_scatter_shifted_offset_p_f] @@ -4770,7 +5025,8 @@ VSTRHQSSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_base_f] @@ -4792,7 +5048,8 @@ output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vstrwq_scatter_base_p_f] @@ -4815,7 +5072,8 @@ output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_offset_f] @@ -4843,7 +5101,8 @@ VSTRWQSO_F))] "TARGET_HAVE_MVE && 
TARGET_HAVE_MVE_FLOAT" "vstrw.32\t%q2, [%0, %q1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn")) + (set_attr "length" "4")]) ;; ;; [vstrwq_scatter_offset_p_f] @@ -4874,7 +5133,8 @@ VSTRWQSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;vstrwt.32\t%q2, [%0, %q1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u] @@ -4905,7 +5165,8 @@ VSTRWSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrwt.32\t%q2, [%0, %q1]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_v4si_insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u] @@ -4933,7 +5194,8 @@ VSTRWSOQ))] "TARGET_HAVE_MVE" "vstrw.32\t%q2, [%0, %q1]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_v4si_insn")) + (set_attr "length" "4")]) ;; ;; [vstrwq_scatter_shifted_offset_f] @@ -4961,7 +5223,8 @@ VSTRWQSSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vstrw.32\t%q2, [%0, %q1, uxtw #2]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_shifted_offset_p_f] @@ -4993,7 +5256,8 @@ VSTRWQSSO_F))] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" "vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u] @@ -5025,7 +5289,8 @@ VSTRWSSOQ))] "TARGET_HAVE_MVE" "vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]" - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") 
(symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_v4si_insn")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u] @@ -5054,7 +5319,8 @@ VSTRWSSOQ))] "TARGET_HAVE_MVE" "vstrw.32\t%q2, [%0, %q1, uxtw #2]" - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_v4si_insn")) + (set_attr "length" "4")]) ;; ;; [vidupq_n_u]) @@ -5122,7 +5388,8 @@ (match_operand:SI 6 "immediate_operand" "i")))] "TARGET_HAVE_MVE" "vpst\;\tvidupt.u%#\t%q0, %2, %4" - [(set_attr "length""8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u_insn")) + (set_attr "length""8")]) ;; ;; [vddupq_n_u]) @@ -5190,7 +5457,8 @@ (match_operand:SI 6 "immediate_operand" "i")))] "TARGET_HAVE_MVE" "vpst\;vddupt.u%#\t%q0, %2, %4" - [(set_attr "length""8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u_insn")) + (set_attr "length""8")]) ;; ;; [vdwdupq_n_u]) @@ -5306,8 +5574,9 @@ ] "TARGET_HAVE_MVE" "vpst\;vdwdupt.u%#\t%q2, %3, %R4, %5" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u_insn")) + (set_attr "type" "mve_move") + (set_attr "length""8")]) ;; ;; [viwdupq_n_u]) @@ -5423,7 +5692,8 @@ ] "TARGET_HAVE_MVE" "vpst\;\tviwdupt.u%#\t%q2, %3, %R4, %5" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u_insn")) + (set_attr "type" "mve_move") (set_attr "length""8")]) ;; @@ -5449,7 +5719,8 @@ output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_v4si")) + (set_attr "length" "4")]) ;; ;; [vstrwq_scatter_base_wb_p_s vstrwq_scatter_base_wb_p_u] @@ -5475,7 +5746,8 @@ output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops); return ""; } - [(set_attr "length" "8")]) + [(set 
(attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_v4si")) + (set_attr "length" "8")]) ;; ;; [vstrwq_scatter_base_wb_f] @@ -5500,7 +5772,8 @@ output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf")) + (set_attr "length" "4")]) ;; ;; [vstrwq_scatter_base_wb_p_f] @@ -5526,7 +5799,8 @@ output_asm_insn ("vpst\;vstrwt.u32\t%q2, [%q0, %1]!",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf")) + (set_attr "length" "8")]) ;; ;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u] @@ -5551,7 +5825,8 @@ output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]!",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_v2di")) + (set_attr "length" "4")]) ;; ;; [vstrdq_scatter_base_wb_p_s vstrdq_scatter_base_wb_p_u] @@ -5577,7 +5852,8 @@ output_asm_insn ("vpst\;vstrdt.u64\t%q2, [%q0, %1]!",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_v2di")) + (set_attr "length" "8")]) (define_expand "mve_vldrwq_gather_base_wb_v4si" [(match_operand:V4SI 0 "s_register_operand") @@ -5629,7 +5905,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_v4si_insn")) + (set_attr "length" "4")]) (define_expand "mve_vldrwq_gather_base_wb_z_v4si" [(match_operand:V4SI 0 "s_register_operand") @@ -5685,7 +5962,8 @@ output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_v4si_insn")) + (set_attr "length" "8")]) (define_expand 
"mve_vldrwq_gather_base_wb_fv4sf" [(match_operand:V4SI 0 "s_register_operand") @@ -5737,7 +6015,8 @@ output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn")) + (set_attr "length" "4")]) (define_expand "mve_vldrwq_gather_base_wb_z_fv4sf" [(match_operand:V4SI 0 "s_register_operand") @@ -5794,7 +6073,8 @@ output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn")) + (set_attr "length" "8")]) (define_expand "mve_vldrdq_gather_base_wb_v2di" [(match_operand:V2DI 0 "s_register_operand") @@ -5847,7 +6127,8 @@ output_asm_insn ("vldrd.64\t%q0, [%q1, %2]!",ops); return ""; } - [(set_attr "length" "4")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_v2di_insn")) + (set_attr "length" "4")]) (define_expand "mve_vldrdq_gather_base_wb_z_v2di" [(match_operand:V2DI 0 "s_register_operand") @@ -5886,7 +6167,7 @@ (unspec_volatile:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))] "TARGET_HAVE_MVE" "vmrs\\t%0, FPSCR_nzcvqc" - [(set_attr "type" "mve_move")]) + [(set_attr "type" "mve_move")]) (define_insn "set_fpscr_nzcvqc" [(set (reg:SI VFPCC_REGNUM) @@ -5894,7 +6175,7 @@ VUNSPEC_SET_FPSCR_NZCVQC))] "TARGET_HAVE_MVE" "vmsr\\tFPSCR_nzcvqc, %0" - [(set_attr "type" "mve_move")]) + [(set_attr "type" "mve_move")]) ;; ;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u] @@ -5919,7 +6200,8 @@ output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops); return ""; } - [(set_attr "length" "8")]) + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_v2di_insn")) + (set_attr "length" "8")]) ;; ;; [vadciq_m_s, vadciq_m_u]) ;; @@ -5936,7 +6218,8 @@ ] "TARGET_HAVE_MVE" "vpst\;vadcit.i32\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr 
"mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "8")]) ;; @@ -5953,7 +6236,8 @@ ] "TARGET_HAVE_MVE" "vadci.i32\t%q0, %q1, %q2" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "4")]) ;; @@ -5972,7 +6256,8 @@ ] "TARGET_HAVE_MVE" "vpst\;vadct.i32\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "8")]) ;; @@ -5989,7 +6274,8 @@ ] "TARGET_HAVE_MVE" "vadc.i32\t%q0, %q1, %q2" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "4") (set_attr "conds" "set")]) @@ -6009,7 +6295,8 @@ ] "TARGET_HAVE_MVE" "vpst\;vsbcit.i32\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "8")]) ;; @@ -6026,7 +6313,8 @@ ] "TARGET_HAVE_MVE" "vsbci.i32\t%q0, %q1, %q2" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "4")]) ;; @@ -6045,7 +6333,8 @@ ] "TARGET_HAVE_MVE" "vpst\;vsbct.i32\t%q0, %q2, %q3" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "8")]) ;; @@ -6062,7 +6351,8 @@ ] "TARGET_HAVE_MVE" "vsbc.i32\t%q0, %q1, %q2" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_v4si")) + (set_attr "type" "mve_move") (set_attr "length" "4")]) ;; @@ -6091,7 +6381,7 @@ "vst21.\t{%q0, %q1}, %3", ops); return ""; } - [(set_attr "length" "8")]) + [(set_attr "length" "8")]) ;; ;; [vld2q]) @@ -6119,7 
+6409,7 @@ "vld21.\t{%q0, %q1}, %3", ops); return ""; } - [(set_attr "length" "8")]) + [(set_attr "length" "8")]) ;; ;; [vld4q]) @@ -6462,7 +6752,8 @@ ] "TARGET_HAVE_MVE" "vpst\;vshlct\t%q0, %1, %4" - [(set_attr "type" "mve_move") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_")) + (set_attr "type" "mve_move") (set_attr "length" "8")]) ;; CDE instructions on MVE registers. @@ -6474,7 +6765,8 @@ UNSPEC_VCDE))] "TARGET_CDE && TARGET_HAVE_MVE" "vcx1\\tp%c1, %q0, #%c2" - [(set_attr "type" "coproc")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi")) + (set_attr "type" "coproc")] ) (define_insn "arm_vcx1qav16qi" @@ -6485,7 +6777,8 @@ UNSPEC_VCDEA))] "TARGET_CDE && TARGET_HAVE_MVE" "vcx1a\\tp%c1, %q0, #%c3" - [(set_attr "type" "coproc")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qav16qi")) + (set_attr "type" "coproc")] ) (define_insn "arm_vcx2qv16qi" @@ -6496,7 +6789,8 @@ UNSPEC_VCDE))] "TARGET_CDE && TARGET_HAVE_MVE" "vcx2\\tp%c1, %q0, %q2, #%c3" - [(set_attr "type" "coproc")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi")) + (set_attr "type" "coproc")] ) (define_insn "arm_vcx2qav16qi" @@ -6508,7 +6802,8 @@ UNSPEC_VCDEA))] "TARGET_CDE && TARGET_HAVE_MVE" "vcx2a\\tp%c1, %q0, %q3, #%c4" - [(set_attr "type" "coproc")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qav16qi")) + (set_attr "type" "coproc")] ) (define_insn "arm_vcx3qv16qi" @@ -6520,7 +6815,8 @@ UNSPEC_VCDE))] "TARGET_CDE && TARGET_HAVE_MVE" "vcx3\\tp%c1, %q0, %q2, %q3, #%c4" - [(set_attr "type" "coproc")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi")) + (set_attr "type" "coproc")] ) (define_insn "arm_vcx3qav16qi" @@ -6533,7 +6829,8 @@ UNSPEC_VCDEA))] "TARGET_CDE && TARGET_HAVE_MVE" "vcx3a\\tp%c1, %q0, %q3, %q4, #%c5" - [(set_attr "type" "coproc")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qav16qi")) + (set_attr "type" 
"coproc")] ) (define_insn "arm_vcx1q_p_v16qi" @@ -6545,7 +6842,8 @@ CDE_VCX))] "TARGET_CDE && TARGET_HAVE_MVE" "vpst\;vcx1t\\tp%c1, %q0, #%c3" - [(set_attr "type" "coproc") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi")) + (set_attr "type" "coproc") (set_attr "length" "8")] ) @@ -6559,7 +6857,8 @@ CDE_VCX))] "TARGET_CDE && TARGET_HAVE_MVE" "vpst\;vcx2t\\tp%c1, %q0, %q3, #%c4" - [(set_attr "type" "coproc") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi")) + (set_attr "type" "coproc") (set_attr "length" "8")] ) @@ -6574,11 +6873,12 @@ CDE_VCX))] "TARGET_CDE && TARGET_HAVE_MVE" "vpst\;vcx3t\\tp%c1, %q0, %q3, %q4, #%c5" - [(set_attr "type" "coproc") + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi")) + (set_attr "type" "coproc") (set_attr "length" "8")] ) -(define_insn "*movmisalign_mve_store" +(define_insn "movmisalign_mve_store" [(set (match_operand:MVE_VLD_ST 0 "mve_memory_operand" "=Ux") (unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "s_register_operand" " w")] UNSPEC_MISALIGNED_ACCESS))] @@ -6586,11 +6886,12 @@ || (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (mode))) && !BYTES_BIG_ENDIAN && unaligned_access" "vstr.\t%q1, %E0" - [(set_attr "type" "mve_store")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign_mve_store")) + (set_attr "type" "mve_store")] ) -(define_insn "*movmisalign_mve_load" +(define_insn "movmisalign_mve_load" [(set (match_operand:MVE_VLD_ST 0 "s_register_operand" "=w") (unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "mve_memory_operand" " Ux")] UNSPEC_MISALIGNED_ACCESS))] @@ -6598,7 +6899,8 @@ || (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (mode))) && !BYTES_BIG_ENDIAN && unaligned_access" "vldr.\t%q0, %E1" - [(set_attr "type" "mve_load")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign_mve_load")) + (set_attr "type" "mve_load")] ) ;; Expander for VxBI moves @@ -6680,3 +6982,40 @@ } } ) + +;; Originally expanded 
by 'predicated_doloop_end'. +;; In the rare situation where the branch is too far, we do also need to +;; revert FPSCR.LTPSIZE back to 0x100 after the last iteration. +(define_insn "*predicated_doloop_end_internal" + [(set (pc) + (if_then_else + (ge (plus:SI (reg:SI LR_REGNUM) + (match_operand:SI 0 "const_int_operand" "")) + (const_int 0)) + (label_ref (match_operand 1 "" "")) + (pc))) + (set (reg:SI LR_REGNUM) + (plus:SI (reg:SI LR_REGNUM) (match_dup 0))) + (clobber (reg:CC CC_REGNUM))] + "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2" + { + if (get_attr_length (insn) == 4) + return "letp\t%|lr, %l1"; + else + return "subs\t%|lr, #%n0\n\tbgt\t%l1\n\tlctp"; + } + [(set (attr "length") + (if_then_else + (ltu (minus (pc) (match_dup 1)) (const_int 1024)) + (const_int 4) + (const_int 6))) + (set_attr "type" "branch")]) + +(define_insn "dlstp_insn" + [ + (set (reg:SI LR_REGNUM) + (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")] + DLSTP)) + ] + "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2" + "dlstp.\t%|lr, %0") diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md index 9af8429968d..74871cb984b 100644 --- a/gcc/config/arm/vec-common.md +++ b/gcc/config/arm/vec-common.md @@ -366,7 +366,8 @@ "@ .%#\t%0, %1, %2 * return neon_output_shift_immediate (\"vshl\", 'i', &operands[2], mode, VALID_NEON_QREG_MODE (mode), true);" - [(set_attr "type" "neon_shift_reg, neon_shift_imm")] + [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_q_")) + (set_attr "type" "neon_shift_reg, neon_shift_imm")] ) (define_expand "vashl3"

From patchwork Wed Sep 6 17:19:24 2023
X-Patchwork-Submitter: Stamatis Markianos-Wright
X-Patchwork-Id: 75383
Date: Wed, 6 Sep 2023 18:19:24 +0100
Subject: [PING][PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops
To: "gcc-patches@gcc.gnu.org"
In-Reply-To: <949f5dd0-cdf0-715a-f04c-3de80c9b974f@arm.com>
From: Stamatis Markianos-Wright
Reply-To: Stamatis Markianos-Wright
Cc: Richard Earnshaw

Hi all,

This is the 2/2 patch that contains the functional changes needed for
MVE Tail-Predicated Low Overhead Loops. See my previous email for a
general introduction to MVE LOLs.

This support is added through the already existing loop-doloop
mechanisms that are used for non-MVE dls/le looping.

Mid-end changes are:

1) Relax the loop-doloop mechanism in the mid-end to allow for
   decrement numbers other than -1 and for `count` to be an rtx
   containing a simple REG (which in this case will contain the number
   of elements to be processed), rather than an expression for
   calculating the number of iterations.
2) Added a new df utility function, `df_bb_regno_only_def_find`, that
   will return the DEF of a REG if it is DEF-ed only once within the
   basic block.
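To make point 1 concrete (the counter tracks *elements*, not iterations, and the final iteration is implicitly predicated): here is a minimal scalar model, in plain C, of what a dlstp/letp tail-predicated loop computes. All names in it are ours and purely illustrative; it is not GCC code.

```c
/* Scalar model of an MVE tail-predicated loop: dlstp loads the element
   count into LR, each iteration handles up to LANES elements, and a
   vctp-style predicate masks off the excess lanes on the last
   iteration, so no scalar epilogue loop is needed.  */
enum { LANES = 4 }; /* e.g. a v4si loop using vctp32 */

void
add_arrays (const int *a, const int *b, int *c, int n)
{
  for (int remaining = n; remaining > 0; remaining -= LANES) /* letp */
    {
      /* vctp32: only the first 'active' lanes are enabled.  */
      int active = remaining < LANES ? remaining : LANES;
      int base = n - remaining;
      for (int lane = 0; lane < active; lane++) /* predicated lanes */
        c[base + lane] = a[base + lane] + b[base + lane];
    }
}
```

For n = 7 this runs two iterations: one with all four lanes active, then one with three, leaving lane 8 untouched, which is exactly the behaviour the implicit predication gives at machine level.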
And many things in the backend to implement the above optimisation:

3) Implement the `arm_predict_doloop_p` target hook to instruct the
   mid-end about Low Overhead Loops (MVE or not), as well as
   `arm_loop_unroll_adjust`, which will prevent unrolling of any loops
   that are valid for becoming MVE Tail-Predicated Low Overhead Loops
   (unrolling can transform a loop in ways that invalidate the
   dlstp/letp transformation logic, and the benefit of the dlstp/letp
   loop would be considerably higher than that of unrolling).
4) Appropriate changes to the define_expand of doloop_end, new
   patterns for dlstp and letp, new iterators, unspecs, etc.
5) `arm_mve_loop_valid_for_dlstp` and a number of checking functions:
   * `arm_mve_dlstp_check_dec_counter`
   * `arm_mve_dlstp_check_inc_counter`
   * `arm_mve_check_reg_origin_is_num_elems`
   * `arm_mve_check_df_chain_back_for_implic_predic`
   * `arm_mve_check_df_chain_fwd_for_implic_predic_impact`
   These all, in some way or another, run checks on the loop structure
   in order to determine if the loop is valid for dlstp/letp
   transformation.
6) `arm_attempt_dlstp_transform`: (called from the define_expand of
   doloop_end) this function re-checks the loop's suitability for
   dlstp/letp transformation and then implements it, if possible.
7) Various utility functions:
   * `arm_mve_get_vctp_lanes` to map from vctp unspecs to number of
     lanes, and `arm_get_required_vpr_reg` to check an insn to see if
     it requires the VPR or not.
   * `arm_mve_get_loop_vctp`
   * `arm_mve_get_vctp_lanes`
   * `arm_emit_mve_unpredicated_insn_to_seq`
   * `arm_get_required_vpr_reg`
   * `arm_get_required_vpr_reg_param`
   * `arm_get_required_vpr_reg_ret_val`
   * `arm_mve_is_across_vector_insn`
   * `arm_is_mve_load_store_insn`
   * `arm_mve_vec_insn_is_predicated_with_this_predicate`
   * `arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate`

No regressions on arm-none-eabi with various targets and on
aarch64-none-elf.

Thoughts on getting this into trunk?

Thank you,
Stam Markianos-Wright

gcc/ChangeLog:

	* config/arm/arm-protos.h (arm_target_insn_ok_for_lob): Rename to...
	(arm_target_bb_ok_for_lob): ...this.
	(arm_attempt_dlstp_transform): New.
	* config/arm/arm.cc (TARGET_LOOP_UNROLL_ADJUST): New.
	(TARGET_PREDICT_DOLOOP_P): New.
	(arm_block_set_vect):
	(arm_target_insn_ok_for_lob): Rename from arm_target_insn_ok_for_lob.
	(arm_target_bb_ok_for_lob): New.
	(arm_mve_get_vctp_lanes): New.
	(arm_get_required_vpr_reg): New.
	(arm_get_required_vpr_reg_param): New.
	(arm_get_required_vpr_reg_ret_val): New.
	(arm_mve_get_loop_vctp): New.
	(arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate): New.
	(arm_mve_vec_insn_is_predicated_with_this_predicate): New.
	(arm_mve_check_df_chain_back_for_implic_predic): New.
	(arm_mve_check_df_chain_fwd_for_implic_predic_impact): New.
	(arm_mve_check_reg_origin_is_num_elems): New.
	(arm_mve_dlstp_check_inc_counter): New.
	(arm_mve_dlstp_check_dec_counter): New.
	(arm_mve_loop_valid_for_dlstp): New.
	(arm_mve_is_across_vector_insn): New.
	(arm_is_mve_load_store_insn): New.
	(arm_predict_doloop_p): New.
	(arm_loop_unroll_adjust): New.
	(arm_emit_mve_unpredicated_insn_to_seq): New.
	(arm_attempt_dlstp_transform): New.
	* config/arm/iterators.md (DLSTP): New.
	(mode1): Add DLSTP mappings.
	* config/arm/mve.md (*predicated_doloop_end_internal): New.
	(dlstp_insn): New.
	* config/arm/thumb2.md (doloop_end): Update for MVE LOLs.
	* config/arm/unspecs.md: New unspecs.
	* df-core.cc (df_bb_regno_only_def_find): New.
	* df.h (df_bb_regno_only_def_find): New.
	* loop-doloop.cc (doloop_condition_get): Relax conditions.
	(doloop_optimize): Add support for elementwise LoLs.

gcc/testsuite/ChangeLog:

	* gcc.target/arm/lob.h: Update framework.
	* gcc.target/arm/lob1.c: Likewise.
	* gcc.target/arm/lob6.c: Likewise.
	* gcc.target/arm/mve/dlstp-compile-asm.c: New test.
	* gcc.target/arm/mve/dlstp-int16x8.c: New test.
	* gcc.target/arm/mve/dlstp-int32x4.c: New test.
	* gcc.target/arm/mve/dlstp-int64x2.c: New test.
	* gcc.target/arm/mve/dlstp-int8x16.c: New test.
	* gcc.target/arm/mve/dlstp-invalid-asm.c: New test.

commit 8564dee09c1258c388094abd614f311e60723368
Author: Stam Markianos-Wright
Date:   Tue Oct 18 17:42:56 2022 +0100

    arm: Add support for MVE Tail-Predicated Low Overhead Loops

    This is the 2/2 patch that contains the functional changes needed for
    MVE Tail-Predicated Low Overhead Loops. See my previous email for a
    general introduction to MVE LOLs.

    This support is added through the already existing loop-doloop
    mechanisms that are used for non-MVE dls/le looping.

    Mid-end changes are:

    1) Relax the loop-doloop mechanism in the mid-end to allow for
       decrement numbers other than -1 and for `count` to be an rtx
       containing a simple REG (which in this case will contain the
       number of elements to be processed), rather than an expression
       for calculating the number of iterations.
    2) Added a new df utility function, `df_bb_regno_only_def_find`,
       that will return the DEF of a REG if it is DEF-ed only once
       within the basic block.
    And many things in the backend to implement the above optimisation:

    3) Implement the `arm_predict_doloop_p` target hook to instruct the
       mid-end about Low Overhead Loops (MVE or not), as well as
       `arm_loop_unroll_adjust`, which will prevent unrolling of any
       loops that are valid for becoming MVE Tail-Predicated Low
       Overhead Loops (unrolling can transform a loop in ways that
       invalidate the dlstp/letp transformation logic, and the benefit
       of the dlstp/letp loop would be considerably higher than that
       of unrolling).
    4) Appropriate changes to the define_expand of doloop_end, new
       patterns for dlstp and letp, new iterators, unspecs, etc.
    5) `arm_mve_loop_valid_for_dlstp` and a number of checking
       functions:
       * `arm_mve_dlstp_check_dec_counter`
       * `arm_mve_dlstp_check_inc_counter`
       * `arm_mve_check_reg_origin_is_num_elems`
       * `arm_mve_check_df_chain_back_for_implic_predic`
       * `arm_mve_check_df_chain_fwd_for_implic_predic_impact`
       These all, in some way or another, run checks on the loop
       structure in order to determine if the loop is valid for
       dlstp/letp transformation.
    6) `arm_attempt_dlstp_transform`: (called from the define_expand of
       doloop_end) this function re-checks the loop's suitability for
       dlstp/letp transformation and then implements it, if possible.
    7) Various utility functions:
       * `arm_mve_get_vctp_lanes` to map from vctp unspecs to number of
         lanes, and `arm_get_required_vpr_reg` to check an insn to see
         if it requires the VPR or not.
       * `arm_mve_get_loop_vctp`
       * `arm_mve_get_vctp_lanes`
       * `arm_emit_mve_unpredicated_insn_to_seq`
       * `arm_get_required_vpr_reg`
       * `arm_get_required_vpr_reg_param`
       * `arm_get_required_vpr_reg_ret_val`
       * `arm_mve_is_across_vector_insn`
       * `arm_is_mve_load_store_insn`
       * `arm_mve_vec_insn_is_predicated_with_this_predicate`
       * `arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate`

    No regressions on arm-none-eabi with various targets and on
    aarch64-none-elf.

    Thoughts on getting this into trunk?
    Thank you,
    Stam Markianos-Wright

gcc/ChangeLog:

	* config/arm/arm-protos.h (arm_target_insn_ok_for_lob): Rename to...
	(arm_target_bb_ok_for_lob): ...this.
	(arm_attempt_dlstp_transform): New.
	* config/arm/arm.cc (TARGET_LOOP_UNROLL_ADJUST): New.
	(TARGET_PREDICT_DOLOOP_P): New.
	(arm_block_set_vect):
	(arm_target_insn_ok_for_lob): Rename from arm_target_insn_ok_for_lob.
	(arm_target_bb_ok_for_lob): New.
	(arm_mve_get_vctp_lanes): New.
	(arm_get_required_vpr_reg): New.
	(arm_get_required_vpr_reg_param): New.
	(arm_get_required_vpr_reg_ret_val): New.
	(arm_mve_get_loop_vctp): New.
	(arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate): New.
	(arm_mve_vec_insn_is_predicated_with_this_predicate): New.
	(arm_mve_check_df_chain_back_for_implic_predic): New.
	(arm_mve_check_df_chain_fwd_for_implic_predic_impact): New.
	(arm_mve_check_reg_origin_is_num_elems): New.
	(arm_mve_dlstp_check_inc_counter): New.
	(arm_mve_dlstp_check_dec_counter): New.
	(arm_mve_loop_valid_for_dlstp): New.
	(arm_mve_is_across_vector_insn): New.
	(arm_is_mve_load_store_insn): New.
	(arm_predict_doloop_p): New.
	(arm_loop_unroll_adjust): New.
	(arm_emit_mve_unpredicated_insn_to_seq): New.
	(arm_attempt_dlstp_transform): New.
	* config/arm/iterators.md (DLSTP): New.
	(mode1): Add DLSTP mappings.
	* config/arm/mve.md (*predicated_doloop_end_internal): New.
	(dlstp_insn): New.
	* config/arm/thumb2.md (doloop_end): Update for MVE LOLs.
	* config/arm/unspecs.md: New unspecs.
	* df-core.cc (df_bb_regno_only_def_find): New.
	* df.h (df_bb_regno_only_def_find): New.
	* loop-doloop.cc (doloop_condition_get): Relax conditions.
	(doloop_optimize): Add support for elementwise LoLs.

gcc/testsuite/ChangeLog:

	* gcc.target/arm/lob.h: Update framework.
	* gcc.target/arm/lob1.c: Likewise.
	* gcc.target/arm/lob6.c: Likewise.
	* gcc.target/arm/mve/dlstp-compile-asm.c: New test.
	* gcc.target/arm/mve/dlstp-int16x8.c: New test.
	* gcc.target/arm/mve/dlstp-int32x4.c: New test.
	* gcc.target/arm/mve/dlstp-int64x2.c: New test.
	* gcc.target/arm/mve/dlstp-int8x16.c: New test.
	* gcc.target/arm/mve/dlstp-invalid-asm.c: New test.

diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h index 77e76336e94..74186930f0b 100644 --- a/gcc/config/arm/arm-protos.h +++ b/gcc/config/arm/arm-protos.h @@ -65,8 +65,8 @@ extern void arm_emit_speculation_barrier_function (void); extern void arm_decompose_di_binop (rtx, rtx, rtx *, rtx *, rtx *, rtx *); extern bool arm_q_bit_access (void); extern bool arm_ge_bits_access (void); -extern bool arm_target_insn_ok_for_lob (rtx); - +extern bool arm_target_bb_ok_for_lob (basic_block); +extern rtx arm_attempt_dlstp_transform (rtx); #ifdef RTX_CODE enum reg_class arm_mode_base_reg_class (machine_mode); diff --git a/gcc/config/arm/arm.cc b/gcc/config/arm/arm.cc index 6e933c80183..39d97ba5e4d 100644 --- a/gcc/config/arm/arm.cc +++ b/gcc/config/arm/arm.cc @@ -659,6 +659,12 @@ static const struct attribute_spec arm_attribute_table[] = #undef TARGET_HAVE_CONDITIONAL_EXECUTION #define TARGET_HAVE_CONDITIONAL_EXECUTION arm_have_conditional_execution +#undef TARGET_LOOP_UNROLL_ADJUST +#define TARGET_LOOP_UNROLL_ADJUST arm_loop_unroll_adjust + +#undef TARGET_PREDICT_DOLOOP_P +#define TARGET_PREDICT_DOLOOP_P arm_predict_doloop_p + #undef TARGET_LEGITIMATE_CONSTANT_P #define TARGET_LEGITIMATE_CONSTANT_P arm_legitimate_constant_p @@ -34416,19 +34422,1096 @@ arm_invalid_within_doloop (const rtx_insn *insn) } bool -arm_target_insn_ok_for_lob (rtx insn) +arm_target_bb_ok_for_lob (basic_block bb) { - basic_block bb = BLOCK_FOR_INSN (insn); /* Make sure the basic block of the target insn is a simple latch having as single predecessor and successor the body of the loop itself. Only simple loops with a single basic block as body are supported for 'low over head loop' making sure that LE target is above LE itself in the generated code.
*/ - return single_succ_p (bb) - && single_pred_p (bb) - && single_succ_edge (bb)->dest == single_pred_edge (bb)->src - && contains_no_active_insn_p (bb); + return single_succ_p (bb) + && single_pred_p (bb) + && single_succ_edge (bb)->dest == single_pred_edge (bb)->src; +} + +/* Utility function: Given a VCTP or a VCTP_M insn, return the number of MVE + lanes based on the machine mode being used. */ + +static int +arm_mve_get_vctp_lanes (rtx x) +{ + if (GET_CODE (x) == SET && GET_CODE (XEXP (x, 1)) == UNSPEC + && (XINT (XEXP (x, 1), 1) == VCTP || XINT (XEXP (x, 1), 1) == VCTP_M)) + { + machine_mode mode = GET_MODE (XEXP (x, 1)); + return (VECTOR_MODE_P (mode) && VALID_MVE_PRED_MODE (mode)) + ? GET_MODE_NUNITS (mode) : 0; + } + return 0; +} + +/* Check if INSN requires the use of the VPR reg, if it does, return the + sub-rtx of the VPR reg. The TYPE argument controls whether + this function should: + * For TYPE == 0, check all operands, including the OUT operands, + and return the first occurrence of the VPR reg. + * For TYPE == 1, only check the input operands. + * For TYPE == 2, only check the output operands. + (INOUT operands are considered both as input and output operands) +*/ +static rtx +arm_get_required_vpr_reg (rtx_insn *insn, unsigned int type = 0) +{ + gcc_assert (type < 3); + if (!NONJUMP_INSN_P (insn)) + return NULL_RTX; + + bool requires_vpr; + extract_constrain_insn (insn); + int n_operands = recog_data.n_operands; + if (recog_data.n_alternatives == 0) + return NULL_RTX; + + /* Fill in recog_op_alt with information about the constraints of + this insn. */ + preprocess_constraints (insn); + + for (int op = 0; op < n_operands; op++) + { + requires_vpr = true; + if (type == 1 && recog_data.operand_type[op] == OP_OUT) + continue; + else if (type == 2 && recog_data.operand_type[op] == OP_IN) + continue; + + /* Iterate through alternatives of operand "op" in recog_op_alt and + identify if the operand is required to be the VPR.
*/ + for (int alt = 0; alt < recog_data.n_alternatives; alt++) + { + const operand_alternative *op_alt + = &recog_op_alt[alt * n_operands]; + /* Fetch the reg_class for each entry and check it against the + VPR_REG reg_class. */ + if (alternative_class (op_alt, op) != VPR_REG) + requires_vpr = false; + } + /* If all alternatives of the insn require the VPR reg for this operand, + it means that either this is a VPR-generating instruction, like a vctp, + vcmp, etc., or it is a VPT-predicated instruction. Return the subrtx + of the VPR reg operand. */ + if (requires_vpr) + return recog_data.operand[op]; + } + return NULL_RTX; +} + +/* Wrapper function of arm_get_required_vpr_reg with TYPE == 1, so return + something only if the VPR reg is an input operand to the insn. */ + +static rtx +ALWAYS_INLINE +arm_get_required_vpr_reg_param (rtx_insn *insn) +{ + return arm_get_required_vpr_reg (insn, 1); +} + +/* Wrapper function of arm_get_required_vpr_reg with TYPE == 2, so return + something only if the VPR reg is the return value, an output of, or is + clobbered by the insn. */ + +static rtx +ALWAYS_INLINE +arm_get_required_vpr_reg_ret_val (rtx_insn *insn) +{ + return arm_get_required_vpr_reg (insn, 2); +} + +/* Scan the basic block of a loop body for a vctp instruction. If there is + at least one vctp instruction, return the first rtx_insn *. */ + +static rtx_insn * +arm_mve_get_loop_vctp (basic_block bb) +{ + rtx_insn *insn = BB_HEAD (bb); + + /* Now scan through all the instruction patterns and pick out the VCTP + instruction. We require arm_get_required_vpr_reg_param to be false + to make sure we pick up a VCTP, rather than a VCTP_M.
*/ + FOR_BB_INSNS (bb, insn) + if (NONDEBUG_INSN_P (insn)) + if (arm_get_required_vpr_reg_ret_val (insn) + && (arm_mve_get_vctp_lanes (PATTERN (insn)) != 0) + && !arm_get_required_vpr_reg_param (insn)) + return insn; + return NULL; +} + +/* Return true if INSN is an MVE instruction that is VPT-predicable, but in + its unpredicated form, or if it is predicated, but on a predicate other + than VPR_REG. */ + +static bool +arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate (rtx_insn *insn, + rtx vpr_reg) +{ + rtx insn_vpr_reg_operand; + if (MVE_VPT_UNPREDICATED_INSN_P (insn) + || (MVE_VPT_PREDICATED_INSN_P (insn) + && (insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn)) + && !rtx_equal_p (vpr_reg, insn_vpr_reg_operand))) + return true; + else + return false; +} + +/* Return true if INSN is an MVE instruction that is VPT-predicable and is + predicated on VPR_REG. */ + +static bool +arm_mve_vec_insn_is_predicated_with_this_predicate (rtx_insn *insn, + rtx vpr_reg) +{ + rtx insn_vpr_reg_operand; + if (MVE_VPT_PREDICATED_INSN_P (insn) + && (insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn)) + && rtx_equal_p (vpr_reg, insn_vpr_reg_operand)) + return true; + else + return false; +} + +/* Utility function to identify if INSN is an MVE instruction that performs + some across-vector operation (and as a result does not align with normal + lane predication rules). All such instructions give only one scalar + output, except for vshlcq which gives a PARALLEL of a vector and a scalar + (one vector result and one carry output). 
*/ + +static bool +arm_is_mve_across_vector_insn (rtx_insn* insn) +{ + df_ref insn_defs = NULL; + if (!MVE_VPT_PREDICABLE_INSN_P (insn)) + return false; + + bool is_across_vector = false; + FOR_EACH_INSN_DEF (insn_defs, insn) + if (!VALID_MVE_MODE (GET_MODE (DF_REF_REG (insn_defs))) + && !arm_get_required_vpr_reg_ret_val (insn)) + is_across_vector = true; + + return is_across_vector; +} + +/* Utility function to identify if INSN is an MVE load or store instruction. + * For TYPE == 0, check all operands. If the function returns true, + INSN is a load or a store insn. + * For TYPE == 1, only check the input operands. If the function returns + true, INSN is a load insn. + * For TYPE == 2, only check the output operands. If the function returns + true, INSN is a store insn. */ + +static bool +arm_is_mve_load_store_insn (rtx_insn* insn, int type = 0) +{ + extract_insn (insn); + int n_operands = recog_data.n_operands; + + for (int op = 0; op < n_operands; op++) + { + if (type == 1 && recog_data.operand_type[op] == OP_OUT) + continue; + else if (type == 2 && recog_data.operand_type[op] == OP_IN) + continue; + if (mve_memory_operand (recog_data.operand[op], + GET_MODE (recog_data.operand[op]))) + return true; + } + return false; +} + +/* When transforming an MVE intrinsic loop into an MVE Tail Predicated Low + Overhead Loop, there are a number of instructions that, if in their + unpredicated form, act across vector lanes, but are still safe to include + within the loop, despite the implicit predication added to the vector lanes. + This list has been compiled by carefully analyzing the instruction + pseudocode in the Arm-ARM. + All other across-vector instructions aren't allowed, because the addition + of implicit predication could influence the result of the operation. + Any new across-vector instructions added to the MVE ISA will have to be + assessed for inclusion in this list. 
*/ + +static bool +arm_mve_is_allowed_unpredic_across_vector_insn (rtx_insn* insn) +{ + gcc_assert (MVE_VPT_UNPREDICATED_INSN_P (insn) + && arm_is_mve_across_vector_insn (insn)); + rtx insn_pattern = PATTERN (insn); + if (GET_CODE (insn_pattern) == SET + && GET_CODE (XEXP (insn_pattern, 1)) == UNSPEC + && (XINT (XEXP (insn_pattern, 1), 1) == VADDVQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VADDVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VADDVAQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VADDVAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLADAVQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMLADAVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLADAVXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLADAVAQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMLADAVAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLADAVAXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VABAVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VABAVQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VADDLVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VADDLVQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VADDLVAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VADDLVAQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VADDVAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VADDVAQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMAXVQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMAXAVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVXQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVAQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVAXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLALDAVAXQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VMLSDAVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLSDAVXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLSDAVAXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLSDAVAQ_S + || XINT (XEXP 
(insn_pattern, 1), 1) == VMLSLDAVQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLSLDAVXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLSLDAVAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VMLSLDAVAXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLALDAVHXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLALDAVHQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VRMLALDAVHQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLALDAVHAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLALDAVHAQ_U + || XINT (XEXP (insn_pattern, 1), 1) == VRMLALDAVHAXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLSLDAVHQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLSLDAVHXQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLSLDAVHAQ_S + || XINT (XEXP (insn_pattern, 1), 1) == VRMLSLDAVHAXQ_S)) + return true; + return false; +} + + +/* Recursively scan through the DF chain backwards within the basic block and + determine if any of the USEs of the original insn (or the USEs of the insns + where they were DEF-ed, etc., recursively) were affected by implicit VPT + predication of an MVE_VPT_UNPREDICATED_INSN_P in a dlstp/letp loop. + This function returns true if the insn is affected by implicit predication + and false otherwise. + Having such implicit predication on an unpredicated insn wouldn't in itself + block tail predication, because the output of that insn might then be used + in a correctly predicated store insn, where the disabled lanes will be + ignored. To verify this we later call: + `arm_mve_check_df_chain_fwd_for_implic_predic_impact`, which will check the + DF chains forward to see if any implicitly-predicated operand gets used in + an improper way. 
*/ + +static bool +arm_mve_check_df_chain_back_for_implic_predic + (hash_map, bool>* safe_insn_map, rtx_insn *insn, + rtx vctp_vpr_generated) +{ + bool* temp = NULL; + if ((temp = safe_insn_map->get (INSN_UID (insn)))) + return *temp; + + basic_block body = BLOCK_FOR_INSN (insn); + /* The circumstances under which an instruction is affected by "implicit + predication" are as follows: + * It is an UNPREDICATED_INSN_P: + * That loads/stores from/to memory. + * Where any one of its operands is an MVE vector from outside the + loop body bb. + Or: + * Any of its operands, recursively backwards, are affected. */ + if (MVE_VPT_UNPREDICATED_INSN_P (insn) + && (arm_is_mve_load_store_insn (insn) + || (arm_is_mve_across_vector_insn (insn) + && !arm_mve_is_allowed_unpredic_across_vector_insn (insn)))) + { + safe_insn_map->put (INSN_UID (insn), true); + return true; + } + + df_ref insn_uses = NULL; + FOR_EACH_INSN_USE (insn_uses, insn) + { + /* If the operand is in the input reg set to the basic block + (i.e. it has come from outside the loop!), consider it unsafe if: + * It's being used in an unpredicated insn. + * It is a predicable MVE vector. */ + if (MVE_VPT_UNPREDICATED_INSN_P (insn) + && VALID_MVE_MODE (GET_MODE (DF_REF_REG (insn_uses))) + && REGNO_REG_SET_P (DF_LR_IN (body), DF_REF_REGNO (insn_uses))) + { + safe_insn_map->put (INSN_UID (insn), true); + return true; + } + /* Scan backwards from the current INSN through the instruction chain + until the start of the basic block. */ + for (rtx_insn *prev_insn = PREV_INSN (insn); + prev_insn && prev_insn != PREV_INSN (BB_HEAD (body)); + prev_insn = PREV_INSN (prev_insn)) + { + /* If a previous insn defines a register that INSN uses, then recurse + in order to check that insn's USEs. + If any of these insns return true as MVE_VPT_UNPREDICATED_INSN_Ps, + then the whole chain is affected by the change in behaviour from + being placed in a dlstp/letp loop. 
*/ + df_ref prev_insn_defs = NULL; + FOR_EACH_INSN_DEF (prev_insn_defs, prev_insn) + { + if (DF_REF_REGNO (insn_uses) == DF_REF_REGNO (prev_insn_defs) + && !arm_mve_vec_insn_is_predicated_with_this_predicate + (insn, vctp_vpr_generated) + && arm_mve_check_df_chain_back_for_implic_predic + (safe_insn_map, prev_insn, vctp_vpr_generated)) + { + safe_insn_map->put (INSN_UID (insn), true); + return true; + } + } + } + } + safe_insn_map->put (INSN_UID (insn), false); + return false; +} + +/* If we have identified that the current DEF will be modified + by such implicit predication, scan through all the + insns that USE it and bail out if any one is outside the + current basic block (i.e. the reg is live after the loop) + or if any are store insns that are unpredicated or using a + predicate other than the loop VPR. + This function returns true if the insn is not suitable for + implicit predication and false otherwise. */ + +static bool +arm_mve_check_df_chain_fwd_for_implic_predic_impact (rtx_insn *insn, + rtx vctp_vpr_generated) +{ + + /* If this insn is indeed an unpredicated store to memory, bail out. */ + if (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate + (insn, vctp_vpr_generated) + && (arm_is_mve_load_store_insn (insn, 2) + || arm_is_mve_across_vector_insn (insn))) + return true; + + /* Next, scan forward to the various USEs of the DEFs in this insn. */ + df_ref insn_def = NULL; + FOR_EACH_INSN_DEF (insn_def, insn) + { + for (df_ref use = DF_REG_USE_CHAIN (DF_REF_REGNO (insn_def)); use; + use = DF_REF_NEXT_REG (use)) + { + rtx_insn *next_use_insn = DF_REF_INSN (use); + if (next_use_insn != insn + && NONDEBUG_INSN_P (next_use_insn)) + { + /* Bail out if the USE is outside the loop body bb, or if it is + inside, but is a differently-predicated store to memory or is + any across-vector instruction. 
*/ + if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (next_use_insn) + || (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate + (next_use_insn, vctp_vpr_generated) + && (arm_is_mve_load_store_insn (next_use_insn, 2) + || arm_is_mve_across_vector_insn (next_use_insn)))) + return true; + } + } + } + return false; +} + +/* Helper function to `arm_mve_dlstp_check_inc_counter` and to + `arm_mve_dlstp_check_dec_counter`. In the situations where the loop counter + is incrementing by 1 or decrementing by 1 in each iteration, ensure that the + target value or the initialisation value, respectively, was a calculation + of the number of iterations of the loop, which is expected to be an ASHIFTRT + by VCTP_STEP. */ + +static bool +arm_mve_check_reg_origin_is_num_elems (basic_block body, rtx reg, rtx vctp_step) +{ + /* Ok, we now know the loop starts from zero and increments by one. + Now just show that the max value of the counter came from an + appropriate ASHIFTRT expr of the correct amount. */ + basic_block pre_loop_bb = body->prev_bb; + while (pre_loop_bb && BB_END (pre_loop_bb) + && !df_bb_regno_only_def_find (pre_loop_bb, REGNO (reg))) + pre_loop_bb = pre_loop_bb->prev_bb; + + df_ref counter_max_last_def = df_bb_regno_only_def_find (pre_loop_bb, REGNO (reg)); + rtx counter_max_last_set; + if (counter_max_last_def) + counter_max_last_set = PATTERN (DF_REF_INSN (counter_max_last_def)); + else + return false; + + /* If we encounter a simple SET from a REG, follow it through. */ + if (GET_CODE (counter_max_last_set) == SET + && REG_P (XEXP (counter_max_last_set, 1))) + return arm_mve_check_reg_origin_is_num_elems + (pre_loop_bb->next_bb, XEXP (counter_max_last_set, 1), vctp_step); + + /* If we encounter a SET from an IF_THEN_ELSE where one of the operands is a + constant and the other is a REG, follow through to that REG. 
*/ + if (GET_CODE (counter_max_last_set) == SET + && GET_CODE (XEXP (counter_max_last_set, 1)) == IF_THEN_ELSE + && REG_P (XEXP (XEXP (counter_max_last_set, 1), 1)) + && CONST_INT_P (XEXP (XEXP (counter_max_last_set, 1), 2))) + return arm_mve_check_reg_origin_is_num_elems + (pre_loop_bb->next_bb, XEXP (XEXP (counter_max_last_set, 1), 1), vctp_step); + + if (GET_CODE (XEXP (counter_max_last_set, 1)) == ASHIFTRT + && CONST_INT_P (XEXP (XEXP (counter_max_last_set, 1), 1)) + && ((1 << INTVAL (XEXP (XEXP (counter_max_last_set, 1), 1))) + == abs (INTVAL (vctp_step)))) + return true; + + return false; +} + +/* If we have identified the loop to have an incrementing counter, we need to + make sure that it increments by 1 and that the loop is structured correctly: + * The counter starts from 0 + * The counter terminates at (num_of_elem + num_of_lanes - 1) / num_of_lanes + * The vctp insn uses a reg that decrements appropriately in each iteration. +*/ + +static rtx_insn* +arm_mve_dlstp_check_inc_counter (basic_block body, rtx_insn* vctp_insn, + rtx condconst, rtx condcount) +{ + rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0); + /* The loop latch has to be empty. When compiling all the known MVE LoLs in + user applications, none of those with incrementing counters had any real + insns in the loop latch. As such, this function has only been tested with + an empty latch and may misbehave or ICE if we somehow get here with an + increment in the latch, so, for correctness, error out early. */ + rtx_insn *dec_insn = BB_END (body->loop_father->latch); + if (NONDEBUG_INSN_P (dec_insn)) + return NULL; + + class rtx_iv vctp_reg_iv; + /* For loops of type B) the loop counter is independent of the decrement + of the reg used in the vctp_insn. So run iv analysis on that reg. This + has to succeed for such loops to be supported. 
*/ + if (!iv_analyze (vctp_insn, as_a (GET_MODE (vctp_reg)), + vctp_reg, &vctp_reg_iv)) + return NULL; + + /* Find where both of those are modified in the loop body bb. */ + rtx condcount_reg_set + = PATTERN (DF_REF_INSN (df_bb_regno_only_def_find + (body, REGNO (condcount)))); + rtx vctp_reg_set = PATTERN (DF_REF_INSN (df_bb_regno_only_def_find + (body, REGNO (vctp_reg)))); + if (!vctp_reg_set || !condcount_reg_set) + return NULL; + + if (REG_P (condcount) && REG_P (condconst)) + { + /* First we need to prove that the loop is going 0..condconst with an + inc of 1 in each iteration. */ + if (GET_CODE (XEXP (condcount_reg_set, 1)) == PLUS + && CONST_INT_P (XEXP (XEXP (condcount_reg_set, 1), 1)) + && INTVAL (XEXP (XEXP (condcount_reg_set, 1), 1)) == 1) + { + rtx counter_reg = XEXP (condcount_reg_set, 0); + /* Check that the counter did indeed start from zero. */ + df_ref this_set = DF_REG_DEF_CHAIN (REGNO (counter_reg)); + if (!this_set) + return NULL; + df_ref last_set = DF_REF_NEXT_REG (this_set); + if (!last_set) + return NULL; + rtx_insn* last_set_insn = DF_REF_INSN (last_set); + if (!single_set (last_set_insn)) + return NULL; + rtx counter_orig_set; + counter_orig_set = XEXP (PATTERN (last_set_insn), 1); + if (!CONST_INT_P (counter_orig_set) + || (INTVAL (counter_orig_set) != 0)) + return NULL; + /* And finally check that the target value of the counter, + condconst is of the correct shape. */ + if (!arm_mve_check_reg_origin_is_num_elems (body, condconst, + vctp_reg_iv.step)) + return NULL; + } + else + return NULL; + } + else + return NULL; + + /* Extract the decrementnum of the vctp reg. */ + int decrementnum = abs (INTVAL (vctp_reg_iv.step)); + /* Ensure it matches the number of lanes of the vctp instruction. */ + if (decrementnum != arm_mve_get_vctp_lanes (PATTERN (vctp_insn))) + return NULL; + + /* Everything looks valid. */ + return vctp_insn; +} + +/* Helper function to `arm_mve_loop_valid_for_dlstp`. 
In the case of a + counter that is decrementing, ensure that it is decrementing by the + right amount in each iteration and that the target condition is what + we expect. */ + +static rtx_insn* +arm_mve_dlstp_check_dec_counter (basic_block body, rtx_insn* vctp_insn, + rtx condconst, rtx condcount) +{ + rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0); + class rtx_iv vctp_reg_iv; + int decrementnum; + /* For decrementing loops of type A), the counter is usually present in the + loop latch. Here we simply need to verify that this counter is the same + reg that is also used in the vctp_insn and that it is not otherwise + modified. */ + rtx_insn *dec_insn = BB_END (body->loop_father->latch); + /* If not in the loop latch, try to find the decrement in the loop body. */ + if (!NONDEBUG_INSN_P (dec_insn)) + { + df_ref temp = df_bb_regno_only_def_find (body, REGNO (condcount)); + /* If we haven't been able to find the decrement, bail out. */ + if (!temp) + return NULL; + dec_insn = DF_REF_INSN (temp); + } + + /* Next, ensure that it is a PLUS of the form: + (set (reg a) (plus (reg a) (const_int))) + where (reg a) is the same as condcount. */ + if (GET_CODE (XEXP (PATTERN (dec_insn), 1)) == PLUS + && REGNO (XEXP (PATTERN (dec_insn), 0)) + == REGNO (XEXP (XEXP (PATTERN (dec_insn), 1), 0)) + && REGNO (XEXP (PATTERN (dec_insn), 0)) == REGNO (condcount)) + decrementnum = abs (INTVAL (XEXP (XEXP (PATTERN (dec_insn), 1), 1))); + else + return NULL; + + /* Ok, so we now know the loop decrement. If it is a 1, then we need to + look at the loop vctp_reg and verify that it also decrements correctly. + Then, we need to establish that the starting value of the loop decrement + originates from the starting value of the vctp decrement. */ + if (decrementnum == 1) + { + class rtx_iv vctp_reg_iv; + /* The loop counter is found to be independent of the decrement + of the reg used in the vctp_insn, again. Ensure that IV analysis + succeeds and check the step. 
*/ + if (!iv_analyze (vctp_insn, as_a (GET_MODE (vctp_reg)), + vctp_reg, &vctp_reg_iv)) + return NULL; + /* Ensure it matches the number of lanes of the vctp instruction. */ + if (abs (INTVAL (vctp_reg_iv.step)) + != arm_mve_get_vctp_lanes (PATTERN (vctp_insn))) + return NULL; + if (!arm_mve_check_reg_origin_is_num_elems (body, condcount, vctp_reg_iv.step)) + return NULL; + } + /* If the decrements are the same, then the situation is simple: either they + are also the same reg, which is safe, or they are different registers, in + which case make sure that there is only a simple SET from one to the + other inside the loop. */ + else if (decrementnum == arm_mve_get_vctp_lanes (PATTERN (vctp_insn))) + { + if (REGNO (condcount) != REGNO (vctp_reg)) + { + /* It wasn't the same reg, but it could be behind a + (set (vctp_reg) (condcount)), so instead find where + the VCTP insn is DEF'd inside the loop. */ + rtx vctp_reg_set = + PATTERN (DF_REF_INSN (df_bb_regno_only_def_find + (body, REGNO (vctp_reg)))); + /* This must just be a simple SET from the condcount. */ + if (GET_CODE (vctp_reg_set) != SET || !REG_P (XEXP (vctp_reg_set, 1)) + || REGNO (XEXP (vctp_reg_set, 1)) != REGNO (condcount)) + return NULL; + } + } + else + return NULL; + + /* We now only need to find out that the loop terminates with a LE + zero condition. If condconst is a const_int, then this is easy. + If it's a REG, look at the last condition+jump in a bb before + the loop, because that usually will have a branch jumping over + the loop body. 
*/ + if (CONST_INT_P (condconst) + && !(INTVAL (condconst) == 0 && JUMP_P (BB_END (body)) + && GET_CODE (XEXP (PATTERN (BB_END (body)), 1)) == IF_THEN_ELSE + && (GET_CODE (XEXP (XEXP (PATTERN (BB_END (body)), 1), 0)) == NE + || GET_CODE (XEXP (XEXP (PATTERN (BB_END (body)), 1), 0)) == GT))) + return NULL; + else if (REG_P (condconst)) + { + basic_block pre_loop_bb = body; + while (pre_loop_bb->prev_bb && BB_END (pre_loop_bb->prev_bb) + && !JUMP_P (BB_END (pre_loop_bb->prev_bb))) + pre_loop_bb = pre_loop_bb->prev_bb; + if (pre_loop_bb && BB_END (pre_loop_bb)) + pre_loop_bb = pre_loop_bb->prev_bb; + else + return NULL; + rtx initial_compare = NULL_RTX; + if (!(prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb)) + && INSN_P (prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb))))) + return NULL; + else + initial_compare + = PATTERN (prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb))); + if (!(initial_compare && GET_CODE (initial_compare) == SET + && cc_register (XEXP (initial_compare, 0), VOIDmode) + && GET_CODE (XEXP (initial_compare, 1)) == COMPARE + && CONST_INT_P (XEXP (XEXP (initial_compare, 1), 1)) + && INTVAL (XEXP (XEXP (initial_compare, 1), 1)) == 0)) + return NULL; + + /* Usually this is a LE condition, but it can also just be a GT or an EQ + condition (if the value is unsigned or the compiler knows it's not + negative). */ + rtx_insn *loop_jumpover = BB_END (pre_loop_bb); + if (!(JUMP_P (loop_jumpover) + && GET_CODE (XEXP (PATTERN (loop_jumpover), 1)) == IF_THEN_ELSE + && (GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == LE + || GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == GT + || GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == EQ))) + return NULL; + } + + /* Everything looks valid. */ + return vctp_insn; +} + +/* Function to check a loop's structure to see if it is a valid candidate for + an MVE Tail Predicated Low-Overhead Loop. Returns the loop's VCTP_INSN if + it is valid, or NULL if it isn't. 
*/ + +static rtx_insn* +arm_mve_loop_valid_for_dlstp (basic_block body) +{ + /* Doloop can only be done "elementwise" with predicated dlstp/letp if it + contains a VCTP on the number of elements processed by the loop. + Find the VCTP predicate generation inside the loop body BB. */ + rtx_insn *vctp_insn = arm_mve_get_loop_vctp (body); + if (!vctp_insn) + return NULL; + + /* There are only two types of loops that can be turned into dlstp/letp + loops: + A) Loops of the form: + while (num_of_elem > 0) + { + p = vctp (num_of_elem) + n -= num_of_lanes; + } + B) Loops of the form: + int num_of_iters = (num_of_elem + num_of_lanes - 1) / num_of_lanes + for (i = 0; i < num_of_iters; i++) + { + p = vctp (num_of_elem) + n -= num_of_lanes; + } + + Then, depending on the type of loop above, we will need to do + different sets of checks. */ + iv_analysis_loop_init (body->loop_father); + + /* In order to find out if the loop is of type A or B above look for the + loop counter: it will either be incrementing by one per iteration or + it will be decrementing by num_of_lanes. We can find the loop counter + in the condition at the end of the loop. */ + rtx_insn *loop_cond = prev_nonnote_nondebug_insn_bb (BB_END (body)); + if (!(cc_register (XEXP (PATTERN (loop_cond), 0), VOIDmode) + && GET_CODE (XEXP (PATTERN (loop_cond), 1)) == COMPARE)) + return NULL; + + /* The operands in the condition: Try to identify which one is the + constant and which is the counter and run IV analysis on the latter. */ + rtx cond_arg_1 = XEXP (XEXP (PATTERN (loop_cond), 1), 0); + rtx cond_arg_2 = XEXP (XEXP (PATTERN (loop_cond), 1), 1); + + rtx loop_cond_constant; + rtx loop_counter; + class rtx_iv cond_counter_iv, cond_temp_iv; + + if (CONST_INT_P (cond_arg_1)) + { + /* cond_arg_1 is the constant and cond_arg_2 is the counter. 
*/ + loop_cond_constant = cond_arg_1; + loop_counter = cond_arg_2; + iv_analyze (loop_cond, as_a (GET_MODE (cond_arg_2)), + cond_arg_2, &cond_counter_iv); + } + else if (CONST_INT_P (cond_arg_2)) + { + /* cond_arg_2 is the constant and cond_arg_1 is the counter. */ + loop_cond_constant = cond_arg_2; + loop_counter = cond_arg_1; + iv_analyze (loop_cond, as_a (GET_MODE (cond_arg_1)), + cond_arg_1, &cond_counter_iv); + } + else if (REG_P (cond_arg_1) && REG_P (cond_arg_2)) + { + /* If both operands to the compare are REGs, we can safely + run IV analysis on both and then determine which is the + constant by looking at the step. + First assume cond_arg_1 is the counter. */ + loop_counter = cond_arg_1; + loop_cond_constant = cond_arg_2; + iv_analyze (loop_cond, as_a (GET_MODE (cond_arg_1)), + cond_arg_1, &cond_counter_iv); + iv_analyze (loop_cond, as_a (GET_MODE (cond_arg_2)), + cond_arg_2, &cond_temp_iv); + + if (!CONST_INT_P (cond_counter_iv.step) || !CONST_INT_P (cond_temp_iv.step)) + return NULL; + /* Look at the steps and swap around the rtx's if needed. Error out if + one of them cannot be identified as constant. */ + if (INTVAL (cond_counter_iv.step) != 0 && INTVAL (cond_temp_iv.step) != 0) + return NULL; + if (INTVAL (cond_counter_iv.step) == 0 && INTVAL (cond_temp_iv.step) != 0) + { + loop_counter = cond_arg_2; + loop_cond_constant = cond_arg_1; + cond_counter_iv = cond_temp_iv; + } + } + else + return NULL; + + if (!REG_P (loop_counter)) + return NULL; + if (!(REG_P (loop_cond_constant) || CONST_INT_P (loop_cond_constant))) + return NULL; + + /* Now we have extracted the IV step of the loop counter, call the + appropriate checking function. 
*/ + if (INTVAL (cond_counter_iv.step) > 0) + return arm_mve_dlstp_check_inc_counter (body, vctp_insn, + loop_cond_constant, loop_counter); + else if (INTVAL (cond_counter_iv.step) < 0) + return arm_mve_dlstp_check_dec_counter (body, vctp_insn, + loop_cond_constant, loop_counter); + else + return NULL; +} + +/* Predict whether the given loop in gimple will be transformed in the RTL + doloop_optimize pass. */ + +static bool +arm_predict_doloop_p (struct loop *loop) +{ + gcc_assert (loop); + /* On arm, targetm.can_use_doloop_p is actually + can_use_doloop_if_innermost. Ensure the loop is innermost, that it + is valid as per arm_target_bb_ok_for_lob, and that the correct + architecture flags are enabled. */ + if (!(TARGET_32BIT && TARGET_HAVE_LOB && optimize > 0)) + { + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, "Predict doloop failure due to" + " target architecture or optimisation flags.\n"); + return false; + } + else if (loop->inner != NULL) + { + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, "Predict doloop failure due to" + " loop nesting.\n"); + return false; + } + else if (!arm_target_bb_ok_for_lob (loop->header->next_bb)) + { + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, "Predict doloop failure due to" + " loop bb complexity.\n"); + return false; + } + + return true; +} + +/* Implement targetm.loop_unroll_adjust. Use this to block unrolling of loops + that may later be turned into MVE Tail Predicated Low Overhead Loops. The + performance benefit of an MVE LoL is likely to be much higher than that of + the unrolling. */ + +unsigned +arm_loop_unroll_adjust (unsigned nunroll, struct loop *loop) +{ + if (TARGET_HAVE_MVE + && arm_target_bb_ok_for_lob (loop->latch) + && arm_mve_loop_valid_for_dlstp (loop->header)) + return 0; + else + return nunroll; +} + +/* Function to handle emitting a VPT-unpredicated version of a VPT-predicated + insn to a sequence. 
*/ + +static bool +arm_emit_mve_unpredicated_insn_to_seq (rtx_insn* insn) +{ + rtx insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn); + int new_icode = get_attr_mve_unpredicated_insn (insn); + if (!in_sequence_p () + || !MVE_VPT_PREDICATED_INSN_P (insn) + || (!insn_vpr_reg_operand) + || (!new_icode)) + return false; + + extract_insn (insn); + rtx arr[8]; + int j = 0; + + /* When transforming a VPT-predicated instruction + into its unpredicated equivalent we need to drop + the VPR operand and we may need to also drop a + merge "vuninit" input operand, depending on the + instruction pattern. Here ensure that we have at + most a two-operand difference between the two + instructions. */ + int n_operands_diff + = recog_data.n_operands - insn_data[new_icode].n_operands; + if (!(n_operands_diff > 0 && n_operands_diff <= 2)) + return false; + + /* Then, loop through the operands of the predicated + instruction, and retain the ones that map to the + unpredicated instruction. */ + for (int i = 0; i < recog_data.n_operands; i++) + { + /* Ignore the VPR and, if needed, the vuninit + operand. */ + if (insn_vpr_reg_operand == recog_data.operand[i] + || (n_operands_diff == 2 + && !strcmp (recog_data.constraints[i], "0"))) + continue; + else + { + arr[j] = recog_data.operand[i]; + j++; + } + } + + /* Finally, emit the unpredicated instruction. 
*/ + switch (j) + { + case 1: + emit_insn (GEN_FCN (new_icode) (arr[0])); + break; + case 2: + emit_insn (GEN_FCN (new_icode) (arr[0], arr[1])); + break; + case 3: + emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2])); + break; + case 4: + emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2], + arr[3])); + break; + case 5: + emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2], arr[3], + arr[4])); + break; + case 6: + emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2], arr[3], + arr[4], arr[5])); + break; + case 7: + emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2], arr[3], + arr[4], arr[5], arr[6])); + break; + default: + gcc_unreachable (); + } + return true; +} + +/* When a vctp insn is used, its output is often followed by + a zero-extend insn to SImode, which is then SUBREG'd into a + vector form of mode VALID_MVE_PRED_MODE: this vector form is + what is then used as an input to the instructions within the + loop. Hence, store that vector form of the VPR reg into + vctp_vpr_generated, so that we can match it with instructions + in the loop to determine if they are predicated on this same + VPR. If there is no zero-extend and subreg or it is otherwise + invalid, then return NULL to cancel the dlstp transform. 
*/ + +static rtx +arm_mve_get_vctp_vec_form (rtx_insn *insn) +{ + rtx vctp_vpr_generated = NULL_RTX; + rtx_insn *next_use1 = NULL; + df_ref use; + for (use + = DF_REG_USE_CHAIN + (DF_REF_REGNO (DF_INSN_INFO_DEFS (DF_INSN_INFO_GET (insn)))); + use; use = DF_REF_NEXT_REG (use)) + if (!next_use1 && NONDEBUG_INSN_P (DF_REF_INSN (use))) + next_use1 = DF_REF_INSN (use); + + if (single_set (next_use1) + && GET_CODE (SET_SRC (single_set (next_use1))) == ZERO_EXTEND) + { + rtx_insn *next_use2 = NULL; + for (use + = DF_REG_USE_CHAIN + (DF_REF_REGNO + (DF_INSN_INFO_DEFS (DF_INSN_INFO_GET (next_use1)))); + use; use = DF_REF_NEXT_REG (use)) + if (!next_use2 && NONDEBUG_INSN_P (DF_REF_INSN (use))) + next_use2 = DF_REF_INSN (use); + + if (single_set (next_use2) + && GET_CODE (SET_SRC (single_set (next_use2))) == SUBREG) + vctp_vpr_generated = XEXP (PATTERN (next_use2), 0); + } + + if (!vctp_vpr_generated || !REG_P (vctp_vpr_generated) + || !VALID_MVE_PRED_MODE (GET_MODE (vctp_vpr_generated))) + return NULL_RTX; + + return vctp_vpr_generated; +} + +/* Attempt to transform the loop contents of the loop basic block from VPT + predicated insns into unpredicated insns for a dlstp/letp loop. */ + +rtx +arm_attempt_dlstp_transform (rtx label) +{ + basic_block body = BLOCK_FOR_INSN (label)->prev_bb; + + /* Ensure that the bb is within a loop that has all required metadata. */ + if (!body->loop_father || !body->loop_father->header + || !body->loop_father->simple_loop_desc) + return GEN_INT (1); + + rtx_insn *vctp_insn = arm_mve_loop_valid_for_dlstp (body); + if (!vctp_insn) + return GEN_INT (1); + rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0); + + rtx vctp_vpr_generated = arm_mve_get_vctp_vec_form (vctp_insn); + if (!vctp_vpr_generated) + return GEN_INT (1); + + /* decrementnum is already known to be valid at this point. 
*/ + int decrementnum = arm_mve_get_vctp_lanes (PATTERN (vctp_insn)); + + rtx_insn *insn = 0; + rtx_insn *cur_insn = 0; + rtx_insn *seq; + hash_map, bool>* safe_insn_map + = new hash_map, + bool>; + + /* Scan through the insns in the loop bb and emit the transformed bb + insns to a sequence. */ + start_sequence (); + FOR_BB_INSNS (body, insn) + { + if (GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn)) + continue; + else if (NOTE_P (insn)) + emit_note ((enum insn_note)NOTE_KIND (insn)); + else if (DEBUG_INSN_P (insn)) + emit_debug_insn (PATTERN (insn)); + else if (!INSN_P (insn)) + { + end_sequence (); + return GEN_INT (1); + } + /* When we find the vctp instruction: continue. */ + else if (insn == vctp_insn) + continue; + /* If the insn pattern requires the use of the VPR value from the + vctp as an input parameter for predication. */ + else if (arm_mve_vec_insn_is_predicated_with_this_predicate + (insn, vctp_vpr_generated)) + { + bool success = arm_emit_mve_unpredicated_insn_to_seq (insn); + if (!success) + { + end_sequence (); + return GEN_INT (1); + } + } + /* If the insn isn't VPT predicated on vctp_vpr_generated, we need to + make sure that it is still valid within the dlstp/letp loop. */ + else + { + /* If this instruction USE-s the vctp_vpr_generated other than for + predication, this blocks the transformation as we are not allowed + to optimise the VPR value away. */ + df_ref insn_uses = NULL; + FOR_EACH_INSN_USE (insn_uses, insn) + { + if (rtx_equal_p (vctp_vpr_generated, DF_REF_REG (insn_uses))) + { + end_sequence (); + return GEN_INT (1); + } + } + /* If within the loop we have an MVE vector instruction that is + unpredicated, the dlstp/letp looping will add implicit + predication to it. This will result in a change in behaviour + of the instruction, so we need to find out if any instructions + that feed into the current instruction were implicitly + predicated. 
*/ + if (arm_mve_check_df_chain_back_for_implic_predic + (safe_insn_map, insn, vctp_vpr_generated)) + { + if (arm_mve_check_df_chain_fwd_for_implic_predic_impact + (insn, vctp_vpr_generated)) + { + end_sequence (); + return GEN_INT (1); + } + } + emit_insn (PATTERN (insn)); + } + } + seq = get_insns (); + end_sequence (); + + /* Re-write the entire BB contents with the transformed + sequence. */ + FOR_BB_INSNS_SAFE (body, insn, cur_insn) + if (!(GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn))) + delete_insn (insn); + for (insn = seq; NEXT_INSN (insn); insn = NEXT_INSN (insn)) + if (NOTE_P (insn)) + emit_note_after ((enum insn_note)NOTE_KIND (insn), BB_END (body)); + else if (DEBUG_INSN_P (insn)) + emit_debug_insn_after (PATTERN (insn), BB_END (body)); + else + emit_insn_after (PATTERN (insn), BB_END (body)); + + emit_jump_insn_after (PATTERN (insn), BB_END (body)); + /* The transformation has succeeded, so now modify the "count" + (a.k.a. niter_expr) for the middle-end. Also set noloop_assumptions + to NULL to stop the middle-end from making assumptions about the + number of iterations. */ + simple_loop_desc (body->loop_father)->niter_expr = vctp_reg; + simple_loop_desc (body->loop_father)->noloop_assumptions = NULL_RTX; + return GEN_INT (decrementnum); } #if CHECKING_P diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md index ee931ad6ebd..70fade0d0da 100644 --- a/gcc/config/arm/arm.md +++ b/gcc/config/arm/arm.md @@ -124,6 +124,11 @@ ; and not all ARM insns do. (define_attr "predicated" "yes,no" (const_string "no")) + +; An attribute that encodes the CODE_FOR_ of the MVE VPT unpredicated +; version of a VPT-predicated instruction. For unpredicated instructions +; that are predicable, encode the same pattern's CODE_FOR_ as a way to +; encode that it is a predicable instruction. 
(define_attr "mve_unpredicated_insn" "" (const_int 0)) ; LENGTH of an instruction (in bytes) diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md index 71e43539616..1401b59dc0b 100644 --- a/gcc/config/arm/iterators.md +++ b/gcc/config/arm/iterators.md @@ -2660,6 +2660,9 @@ (define_int_attr mrrc [(VUNSPEC_MRRC "mrrc") (VUNSPEC_MRRC2 "mrrc2")]) (define_int_attr MRRC [(VUNSPEC_MRRC "MRRC") (VUNSPEC_MRRC2 "MRRC2")]) +(define_int_attr mode1 [(DLSTP8 "8") (DLSTP16 "16") (DLSTP32 "32") + (DLSTP64 "64")]) + (define_int_attr opsuffix [(UNSPEC_DOT_S "s8") (UNSPEC_DOT_U "u8") (UNSPEC_DOT_US "s8") @@ -2903,6 +2906,8 @@ (define_int_iterator VSHLCQ_M [VSHLCQ_M_S VSHLCQ_M_U]) (define_int_iterator VQSHLUQ_M_N [VQSHLUQ_M_N_S]) (define_int_iterator VQSHLUQ_N [VQSHLUQ_N_S]) +(define_int_iterator DLSTP [DLSTP8 DLSTP16 DLSTP32 + DLSTP64]) ;; Define iterators for VCMLA operations (define_int_iterator VCMLA_OP [UNSPEC_VCMLA diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md index 87cbf6c1726..dc4b6301aaa 100644 --- a/gcc/config/arm/mve.md +++ b/gcc/config/arm/mve.md @@ -6997,7 +6997,7 @@ (set (reg:SI LR_REGNUM) (plus:SI (reg:SI LR_REGNUM) (match_dup 0))) (clobber (reg:CC CC_REGNUM))] - "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2" + "TARGET_HAVE_MVE" { if (get_attr_length (insn) == 4) return "letp\t%|lr, %l1"; @@ -7017,5 +7017,5 @@ (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")] DLSTP)) ] - "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2" + "TARGET_HAVE_MVE" "dlstp.\t%|lr, %0") diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md index e1e013befa7..368d5138ca1 100644 --- a/gcc/config/arm/thumb2.md +++ b/gcc/config/arm/thumb2.md @@ -1613,7 +1613,7 @@ (use (match_operand 1 "" ""))] ; label "TARGET_32BIT" " - { +{ /* Currently SMS relies on the do-loop pattern to recognize loops where (1) the control part consists of all insns defining and/or using a certain 'count' register and (2) the loop count 
can be @@ -1623,41 +1623,65 @@ Also used to implement the low over head loops feature, which is part of the Armv8.1-M Mainline Low Overhead Branch (LOB) extension. */ - if (optimize > 0 && (flag_modulo_sched || TARGET_HAVE_LOB)) - { - rtx s0; - rtx bcomp; - rtx loc_ref; - rtx cc_reg; - rtx insn; - rtx cmp; - - if (GET_MODE (operands[0]) != SImode) - FAIL; - - s0 = operands [0]; - - /* Low over head loop instructions require the first operand to be LR. */ - if (TARGET_HAVE_LOB && arm_target_insn_ok_for_lob (operands [1])) - s0 = gen_rtx_REG (SImode, LR_REGNUM); - - if (TARGET_THUMB2) - insn = emit_insn (gen_thumb2_addsi3_compare0 (s0, s0, GEN_INT (-1))); - else - insn = emit_insn (gen_addsi3_compare0 (s0, s0, GEN_INT (-1))); - - cmp = XVECEXP (PATTERN (insn), 0, 0); - cc_reg = SET_DEST (cmp); - bcomp = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx); - loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands [1]); - emit_jump_insn (gen_rtx_SET (pc_rtx, - gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp, - loc_ref, pc_rtx))); - DONE; - } - else - FAIL; - }") + if (optimize > 0 && (flag_modulo_sched || TARGET_HAVE_LOB)) + { + rtx s0; + rtx bcomp; + rtx loc_ref; + rtx cc_reg; + rtx insn; + rtx cmp; + rtx decrement_num; + + if (GET_MODE (operands[0]) != SImode) + FAIL; + + s0 = operands[0]; + + if (TARGET_HAVE_LOB && arm_target_bb_ok_for_lob (BLOCK_FOR_INSN (operands[1]))) + { + s0 = gen_rtx_REG (SImode, LR_REGNUM); + + /* If we have a compatible MVE target, try to analyse the loop + contents to determine if we can use predicated dlstp/letp + looping. 
*/ + if (TARGET_HAVE_MVE + && (decrement_num = arm_attempt_dlstp_transform (operands[1])) + && (INTVAL (decrement_num) != 1)) + { + insn = emit_insn + (gen_thumb2_addsi3_compare0 + (s0, s0, GEN_INT ((-1) * (INTVAL (decrement_num))))); + cmp = XVECEXP (PATTERN (insn), 0, 0); + cc_reg = SET_DEST (cmp); + bcomp = gen_rtx_GE (VOIDmode, cc_reg, const0_rtx); + loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands[1]); + emit_jump_insn (gen_rtx_SET (pc_rtx, + gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp, + loc_ref, pc_rtx))); + DONE; + } + } + + /* Otherwise, try standard decrement-by-one dls/le looping. */ + if (TARGET_THUMB2) + insn = emit_insn (gen_thumb2_addsi3_compare0 (s0, s0, + GEN_INT (-1))); + else + insn = emit_insn (gen_addsi3_compare0 (s0, s0, GEN_INT (-1))); + + cmp = XVECEXP (PATTERN (insn), 0, 0); + cc_reg = SET_DEST (cmp); + bcomp = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx); + loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands[1]); + emit_jump_insn (gen_rtx_SET (pc_rtx, + gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp, + loc_ref, pc_rtx))); + DONE; + } + else + FAIL; +}") (define_insn "*clear_apsr" [(unspec_volatile:SI [(const_int 0)] VUNSPEC_CLRM_APSR) @@ -1755,7 +1779,37 @@ { if (REGNO (operands[0]) == LR_REGNUM) { - emit_insn (gen_dls_insn (operands[0])); + /* Pick out the number by which we are decrementing the loop counter + in every iteration. If it's > 1, then use dlstp. 
*/ + int const_int_dec_num + = abs (INTVAL (XEXP (XEXP (XVECEXP (PATTERN (operands[1]), 0, 1), + 1), + 1))); + switch (const_int_dec_num) + { + case 16: + emit_insn (gen_dlstp8_insn (operands[0])); + break; + + case 8: + emit_insn (gen_dlstp16_insn (operands[0])); + break; + + case 4: + emit_insn (gen_dlstp32_insn (operands[0])); + break; + + case 2: + emit_insn (gen_dlstp64_insn (operands[0])); + break; + + case 1: + emit_insn (gen_dls_insn (operands[0])); + break; + + default: + gcc_unreachable (); + } DONE; } else diff --git a/gcc/config/arm/unspecs.md b/gcc/config/arm/unspecs.md index 6a5b1f8f623..7921bffc169 100644 --- a/gcc/config/arm/unspecs.md +++ b/gcc/config/arm/unspecs.md @@ -581,6 +581,10 @@ VADDLVQ_U VCTP VCTP_M + DLSTP8 + DLSTP16 + DLSTP32 + DLSTP64 VPNOT VCREATEQ_F VCVTQ_N_TO_F_S diff --git a/gcc/df-core.cc b/gcc/df-core.cc index d4812b04a7c..4fcc14bf790 100644 --- a/gcc/df-core.cc +++ b/gcc/df-core.cc @@ -1964,6 +1964,21 @@ df_bb_regno_last_def_find (basic_block bb, unsigned int regno) return NULL; } +/* Return the one and only def of REGNO within BB. If there is no def or + there are multiple defs, return NULL. */ + +df_ref +df_bb_regno_only_def_find (basic_block bb, unsigned int regno) +{ + df_ref temp = df_bb_regno_first_def_find (bb, regno); + if (!temp) + return NULL; + else if (temp == df_bb_regno_last_def_find (bb, regno)) + return temp; + else + return NULL; +} + /* Finds the reference corresponding to the definition of REG in INSN. DF is the dataflow object. 
*/ diff --git a/gcc/df.h b/gcc/df.h index 402657a7076..98623637f9c 100644 --- a/gcc/df.h +++ b/gcc/df.h @@ -987,6 +987,7 @@ extern void df_check_cfg_clean (void); #endif extern df_ref df_bb_regno_first_def_find (basic_block, unsigned int); extern df_ref df_bb_regno_last_def_find (basic_block, unsigned int); +extern df_ref df_bb_regno_only_def_find (basic_block, unsigned int); extern df_ref df_find_def (rtx_insn *, rtx); extern bool df_reg_defined (rtx_insn *, rtx); extern df_ref df_find_use (rtx_insn *, rtx); diff --git a/gcc/loop-doloop.cc b/gcc/loop-doloop.cc index 4feb0a25ab9..f6dbd0515de 100644 --- a/gcc/loop-doloop.cc +++ b/gcc/loop-doloop.cc @@ -85,29 +85,29 @@ doloop_condition_get (rtx_insn *doloop_pat) forms: 1) (parallel [(set (pc) (if_then_else (condition) - (label_ref (label)) - (pc))) - (set (reg) (plus (reg) (const_int -1))) - (additional clobbers and uses)]) + (label_ref (label)) + (pc))) + (set (reg) (plus (reg) (const_int -n))) + (additional clobbers and uses)]) The branch must be the first entry of the parallel (also required by jump.cc), and the second entry of the parallel must be a set of the loop counter register. Some targets (IA-64) wrap the set of the loop counter in an if_then_else too. - 2) (set (reg) (plus (reg) (const_int -1)) - (set (pc) (if_then_else (reg != 0) - (label_ref (label)) - (pc))). + 2) (set (reg) (plus (reg) (const_int -n)) + (set (pc) (if_then_else (reg != 0) + (label_ref (label)) + (pc))). 
Some targets (ARM) do the comparison before the branch, as in the following form: - 3) (parallel [(set (cc) (compare ((plus (reg) (const_int -1), 0))) - (set (reg) (plus (reg) (const_int -1)))]) - (set (pc) (if_then_else (cc == NE) - (label_ref (label)) - (pc))) */ + 3) (parallel [(set (cc) (compare ((plus (reg) (const_int -n), 0))) + (set (reg) (plus (reg) (const_int -n)))]) + (set (pc) (if_then_else (cc == NE) + (label_ref (label)) + (pc))) */ pattern = PATTERN (doloop_pat); @@ -143,7 +143,7 @@ doloop_condition_get (rtx_insn *doloop_pat) || GET_CODE (cmp_arg1) != PLUS) return 0; reg_orig = XEXP (cmp_arg1, 0); - if (XEXP (cmp_arg1, 1) != GEN_INT (-1) + if (!CONST_INT_P (XEXP (cmp_arg1, 1)) || !REG_P (reg_orig)) return 0; cc_reg = SET_DEST (cmp_orig); @@ -156,7 +156,8 @@ doloop_condition_get (rtx_insn *doloop_pat) { /* We expect the condition to be of the form (reg != 0) */ cond = XEXP (SET_SRC (cmp), 0); - if (GET_CODE (cond) != NE || XEXP (cond, 1) != const0_rtx) + if ((GET_CODE (cond) != NE && GET_CODE (cond) != GE) + || XEXP (cond, 1) != const0_rtx) return 0; } } @@ -173,14 +174,14 @@ doloop_condition_get (rtx_insn *doloop_pat) if (! REG_P (reg)) return 0; - /* Check if something = (plus (reg) (const_int -1)). + /* Check if something = (plus (reg) (const_int -n)). On IA-64, this decrement is wrapped in an if_then_else. */ inc_src = SET_SRC (inc); if (GET_CODE (inc_src) == IF_THEN_ELSE) inc_src = XEXP (inc_src, 1); if (GET_CODE (inc_src) != PLUS || XEXP (inc_src, 0) != reg - || XEXP (inc_src, 1) != constm1_rtx) + || !CONST_INT_P (XEXP (inc_src, 1))) return 0; /* Check for (set (pc) (if_then_else (condition) @@ -211,42 +212,49 @@ doloop_condition_get (rtx_insn *doloop_pat) || (GET_CODE (XEXP (condition, 0)) == PLUS && XEXP (XEXP (condition, 0), 0) == reg)) { - if (GET_CODE (pattern) != PARALLEL) /* For the second form we expect: - (set (reg) (plus (reg) (const_int -1)) - (set (pc) (if_then_else (reg != 0) - (label_ref (label)) - (pc))). 
+ (set (reg) (plus (reg) (const_int -n)) + (set (pc) (if_then_else (reg != 0) + (label_ref (label)) + (pc))). - is equivalent to the following: + If n == 1, that is equivalent to the following: - (parallel [(set (pc) (if_then_else (reg != 1) - (label_ref (label)) - (pc))) - (set (reg) (plus (reg) (const_int -1))) - (additional clobbers and uses)]) + (parallel [(set (pc) (if_then_else (reg != 1) + (label_ref (label)) + (pc))) + (set (reg) (plus (reg) (const_int -1))) + (additional clobbers and uses)]) - For the third form we expect: + For the third form we expect: - (parallel [(set (cc) (compare ((plus (reg) (const_int -1)), 0)) - (set (reg) (plus (reg) (const_int -1)))]) - (set (pc) (if_then_else (cc == NE) - (label_ref (label)) - (pc))) + (parallel [(set (cc) (compare ((plus (reg) (const_int -n)), 0)) + (set (reg) (plus (reg) (const_int -n)))]) + (set (pc) (if_then_else (cc == NE) + (label_ref (label)) + (pc))) - which is equivalent to the following: + Which also for n == 1 is equivalent to the following: - (parallel [(set (cc) (compare (reg, 1)) - (set (reg) (plus (reg) (const_int -1))) - (set (pc) (if_then_else (NE == cc) - (label_ref (label)) - (pc))))]) + (parallel [(set (cc) (compare (reg, 1)) + (set (reg) (plus (reg) (const_int -1))) + (set (pc) (if_then_else (NE == cc) + (label_ref (label)) + (pc))))]) - So we return the second form instead for the two cases. + So we return the second form instead for the two cases. + For the "elementwise" form where the decrement number isn't -1, + the final value may be exceeded, so use GE instead of NE. 
*/ - condition = gen_rtx_fmt_ee (NE, VOIDmode, inc_src, const1_rtx); + if (GET_CODE (pattern) != PARALLEL) + { + if (INTVAL (XEXP (inc_src, 1)) != -1) + condition = gen_rtx_fmt_ee (GE, VOIDmode, inc_src, const0_rtx); + else + condition = gen_rtx_fmt_ee (NE, VOIDmode, inc_src, const1_rtx); + } return condition; } @@ -685,17 +693,6 @@ doloop_optimize (class loop *loop) return false; } - max_cost - = COSTS_N_INSNS (param_max_iterations_computation_cost); - if (set_src_cost (desc->niter_expr, mode, optimize_loop_for_speed_p (loop)) - > max_cost) - { - if (dump_file) - fprintf (dump_file, - "Doloop: number of iterations too costly to compute.\n"); - return false; - } - if (desc->const_iter) iterations = widest_int::from (rtx_mode_t (desc->niter_expr, mode), UNSIGNED); @@ -716,11 +713,24 @@ doloop_optimize (class loop *loop) /* Generate looping insn. If the pattern FAILs then give up trying to modify the loop since there is some aspect the back-end does - not like. */ - count = copy_rtx (desc->niter_expr); + not like. If this succeeds, there is a chance that the loop + desc->niter_expr has been altered by the backend, so only extract + that data after the gen_doloop_end. 
*/ start_label = block_label (desc->in_edge->dest); doloop_reg = gen_reg_rtx (mode); rtx_insn *doloop_seq = targetm.gen_doloop_end (doloop_reg, start_label); + count = copy_rtx (desc->niter_expr); + + max_cost + = COSTS_N_INSNS (param_max_iterations_computation_cost); + if (set_src_cost (count, mode, optimize_loop_for_speed_p (loop)) + > max_cost) + { + if (dump_file) + fprintf (dump_file, + "Doloop: number of iterations too costly to compute.\n"); + return false; + } word_mode_size = GET_MODE_PRECISION (word_mode); word_mode_max = (HOST_WIDE_INT_1U << (word_mode_size - 1) << 1) - 1; diff --git a/gcc/testsuite/gcc.target/arm/lob.h b/gcc/testsuite/gcc.target/arm/lob.h index feaae7cc899..3941fe7a8b6 100644 --- a/gcc/testsuite/gcc.target/arm/lob.h +++ b/gcc/testsuite/gcc.target/arm/lob.h @@ -1,15 +1,131 @@ #include - +#include /* Common code for lob tests. */ #define NO_LOB asm volatile ("@ clobber lr" : : : "lr" ) -#define N 10000 +#define N 100 + +static void +reset_data (int *a, int *b, int *c, int x) +{ + memset (a, -1, x * sizeof (*a)); + memset (b, -1, x * sizeof (*b)); + memset (c, 0, x * sizeof (*c)); +} + +static void +reset_data8 (int8_t *a, int8_t *b, int8_t *c, int x) +{ + memset (a, -1, x * sizeof (*a)); + memset (b, -1, x * sizeof (*b)); + memset (c, 0, x * sizeof (*c)); +} + +static void +reset_data16 (int16_t *a, int16_t *b, int16_t *c, int x) +{ + memset (a, -1, x * sizeof (*a)); + memset (b, -1, x * sizeof (*b)); + memset (c, 0, x * sizeof (*c)); +} + +static void +reset_data32 (int32_t *a, int32_t *b, int32_t *c, int x) +{ + memset (a, -1, x * sizeof (*a)); + memset (b, -1, x * sizeof (*b)); + memset (c, 0, x * sizeof (*c)); +} + +static void +reset_data64 (int64_t *a, int64_t *c, int x) +{ + memset (a, -1, x * sizeof (*a)); + memset (c, 0, x * sizeof (*c)); +} + +static void +check_plus (int *a, int *b, int *c, int x) +{ + for (int i = 0; i < N; i++) + { + NO_LOB; + if (i < x) + { + if (c[i] != (a[i] + b[i])) abort (); + } + else + { + if (c[i] != 
0) abort (); + } + } +} + +static void +check_plus8 (int8_t *a, int8_t *b, int8_t *c, int x) +{ + for (int i = 0; i < N; i++) + { + NO_LOB; + if (i < x) + { + if (c[i] != (a[i] + b[i])) abort (); + } + else + { + if (c[i] != 0) abort (); + } + } +} + +static void +check_plus16 (int16_t *a, int16_t *b, int16_t *c, int x) +{ + for (int i = 0; i < N; i++) + { + NO_LOB; + if (i < x) + { + if (c[i] != (a[i] + b[i])) abort (); + } + else + { + if (c[i] != 0) abort (); + } + } +} + +static void +check_plus32 (int32_t *a, int32_t *b, int32_t *c, int x) +{ + for (int i = 0; i < N; i++) + { + NO_LOB; + if (i < x) + { + if (c[i] != (a[i] + b[i])) abort (); + } + else + { + if (c[i] != 0) abort (); + } + } +} static void -reset_data (int *a, int *b, int *c) +check_memcpy64 (int64_t *a, int64_t *c, int x) { - memset (a, -1, N * sizeof (*a)); - memset (b, -1, N * sizeof (*b)); - memset (c, -1, N * sizeof (*c)); + for (int i = 0; i < N; i++) + { + NO_LOB; + if (i < x) + { + if (c[i] != a[i]) abort (); + } + else + { + if (c[i] != 0) abort (); + } + } } diff --git a/gcc/testsuite/gcc.target/arm/lob1.c b/gcc/testsuite/gcc.target/arm/lob1.c index ba5c82cd55c..c8ce653a5c3 100644 --- a/gcc/testsuite/gcc.target/arm/lob1.c +++ b/gcc/testsuite/gcc.target/arm/lob1.c @@ -54,29 +54,18 @@ loop3 (int *a, int *b, int *c) } while (i < N); } -void -check (int *a, int *b, int *c) -{ - for (int i = 0; i < N; i++) - { - NO_LOB; - if (c[i] != a[i] + b[i]) - abort (); - } -} - int main (void) { - reset_data (a, b, c); + reset_data (a, b, c, N); loop1 (a, b ,c); - check (a, b ,c); - reset_data (a, b, c); + check_plus (a, b, c, N); + reset_data (a, b, c, N); loop2 (a, b ,c); - check (a, b ,c); - reset_data (a, b, c); + check_plus (a, b, c, N); + reset_data (a, b, c, N); loop3 (a, b ,c); - check (a, b ,c); + check_plus (a, b, c, N); return 0; } diff --git a/gcc/testsuite/gcc.target/arm/lob6.c b/gcc/testsuite/gcc.target/arm/lob6.c index 17b6124295e..4fe116e2c2b 100644 --- 
a/gcc/testsuite/gcc.target/arm/lob6.c +++ b/gcc/testsuite/gcc.target/arm/lob6.c @@ -79,14 +79,14 @@ check (void) int main (void) { - reset_data (a1, b1, c1); - reset_data (a2, b2, c2); + reset_data (a1, b1, c1, N); + reset_data (a2, b2, c2, N); loop1 (a1, b1, c1); ref1 (a2, b2, c2); check (); - reset_data (a1, b1, c1); - reset_data (a2, b2, c2); + reset_data (a1, b1, c1, N); + reset_data (a2, b2, c2, N); loop2 (a1, b1, c1); ref2 (a2, b2, c2); check (); diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c new file mode 100644 index 00000000000..5ddd994e53d --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c @@ -0,0 +1,561 @@ +/* { dg-do compile { target { arm*-*-* } } } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-options "-O3 -save-temps" } */ +/* { dg-add-options arm_v8_1m_mve } */ + +#include + +#define IMM 5 + +#define TEST_COMPILE_IN_DLSTP_TERNARY(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \ +void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \ +{ \ + while (n > 0) \ + { \ + mve_pred16_t p = vctp##BITS##q (n); \ + TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \ + TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \ + TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (va, vb, p); \ + vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \ + c += LANES; \ + a += LANES; \ + b += LANES; \ + n -= LANES; \ + } \ +} + +#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY(BITS, LANES, LDRSTRYTPE, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED) + +#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY(NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (8, 16, b, NAME, PRED) \ 
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (16, 8, h, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (32, 4, w, NAME, PRED) + + +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vaddq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vmulq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vsubq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vhaddq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vorrq, _x) + + +#define TEST_COMPILE_IN_DLSTP_TERNARY_M(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \ +void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##x##LANES##_t __inactive, TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \ +{ \ + while (n > 0) \ + { \ + mve_pred16_t p = vctp##BITS##q (n); \ + TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \ + TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \ + TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (__inactive, va, vb, p); \ + vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \ + c += LANES; \ + a += LANES; \ + b += LANES; \ + n -= LANES; \ + } \ +} + +#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M(BITS, LANES, LDRSTRYTPE, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY_M (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY_M (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED) + +#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M(NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (8, 16, b, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (16, 8, h, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (32, 4, w, NAME, PRED) + + +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vaddq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vmulq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vsubq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vhaddq, _m) 
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vorrq, _m) + +#define TEST_COMPILE_IN_DLSTP_TERNARY_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \ +void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \ +{ \ + while (n > 0) \ + { \ + mve_pred16_t p = vctp##BITS##q (n); \ + TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \ + TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (va, IMM, p); \ + vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \ + c += LANES; \ + a += LANES; \ + n -= LANES; \ + } \ +} + +#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED) + +#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N(NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (8, 16, b, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (16, 8, h, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (32, 4, w, NAME, PRED) + +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vaddq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vmulq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vsubq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vhaddq, _x) + +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vbrsrq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshlq, _x) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshrq, _x) + +#define TEST_COMPILE_IN_DLSTP_TERNARY_M_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \ +void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##x##LANES##_t __inactive, TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \ +{ \ + while (n > 0) \ + { \ + mve_pred16_t p = vctp##BITS##q (n); \ + TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \ 
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (__inactive, va, IMM, p); \ + vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \ + c += LANES; \ + a += LANES; \ + n -= LANES; \ + } \ +} + +#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY_M_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_TERNARY_M_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED) + +#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N(NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (8, 16, b, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (16, 8, h, NAME, PRED) \ +TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (32, 4, w, NAME, PRED) + +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vaddq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vmulq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vsubq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vhaddq, _m) + +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vbrsrq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vshlq, _m) +TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vshrq, _m) + +/* Now test some more configurations. */ + +/* Using a >=1 condition. */ +void test1 (int32_t *a, int32_t *b, int32_t *c, int n) +{ + while (n >= 1) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_z_s32 (b, p); + int32x4_t vc = vaddq_x_s32 (va, vb, p); + vstrwq_p_s32 (c, vc, p); + c+=4; + a+=4; + b+=4; + n-=4; + } +} + +/* Test a for loop format of decrementing to zero */ +int32_t a[] = {0, 1, 2, 3, 4, 5, 6, 7}; +void test2 (int32_t *b, int num_elems) +{ + for (int i = num_elems; i > 0; i-= 4) + { + mve_pred16_t p = vctp32q (i); + int32x4_t va = vldrwq_z_s32 (&(a[i]), p); + vstrwq_p_s32 (b + i, va, p); + } +} + +/* Iteration counter counting up to num_iter. 
*/ +void test3 (uint8_t *a, uint8_t *b, uint8_t *c, int n) +{ + int num_iter = (n + 15)/16; + for (int i = 0; i < num_iter; i++) + { + mve_pred16_t p = vctp8q (n); + uint8x16_t va = vldrbq_z_u8 (a, p); + uint8x16_t vb = vldrbq_z_u8 (b, p); + uint8x16_t vc = vaddq_x_u8 (va, vb, p); + vstrbq_p_u8 (c, vc, p); + n-=16; + } +} + +/* Iteration counter counting down from num_iter. */ +void test4 (uint8_t *a, uint8_t *b, uint8_t *c, int n) +{ + int num_iter = (n + 15)/16; + for (int i = num_iter; i > 0; i--) + { + mve_pred16_t p = vctp8q (n); + uint8x16_t va = vldrbq_z_u8 (a, p); + uint8x16_t vb = vldrbq_z_u8 (b, p); + uint8x16_t vc = vaddq_x_u8 (va, vb, p); + vstrbq_p_u8 (c, vc, p); + n-=16; + } +} + +/* Using an unpredicated arithmetic instruction within the loop. */ +void test5 (uint8_t *a, uint8_t *b, uint8_t *c, uint8_t *d, int n) +{ + while (n > 0) + { + mve_pred16_t p = vctp8q (n); + uint8x16_t va = vldrbq_z_u8 (a, p); + uint8x16_t vb = vldrbq_u8 (b); + /* Is affected by implicit predication, because vb also + came from an unpredicated load, but there is no functional + problem, because the result is used in a predicated store. */ + uint8x16_t vc = vaddq_u8 (va, vb); + uint8x16_t vd = vaddq_x_u8 (va, vb, p); + vstrbq_p_u8 (c, vc, p); + vstrbq_p_u8 (d, vd, p); + n-=16; + } +} + +/* Using a different VPR value for one instruction in the loop. */ +void test6 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_z_s32 (b, p1); + int32x4_t vc = vaddq_x_s32 (va, vb, p); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Generating and using another VPR value in the loop, with a vctp. + The doloop logic will always try to do the transform on the first + vctp it encounters, so this is still expected to work. 
*/ +void test7 (int32_t *a, int32_t *b, int32_t *c, int n, int g) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + mve_pred16_t p1 = vctp32q (g); + int32x4_t vb = vldrwq_z_s32 (b, p1); + int32x4_t vc = vaddq_x_s32 (va, vb, p); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Generating and using a different VPR value in the loop, with a vctp, + but this time the p1 will also change in every loop (still fine) */ +void test8 (int32_t *a, int32_t *b, int32_t *c, int n, int g) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + mve_pred16_t p1 = vctp32q (g); + int32x4_t vb = vldrwq_z_s32 (b, p1); + int32x4_t vc = vaddq_x_s32 (va, vb, p); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + g++; + } +} + +/* Generating and using a different VPR value in the loop, with a vctp_m + that is independent of the loop vctp VPR. */ +void test9 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + mve_pred16_t p2 = vctp32q_m (n, p1); + int32x4_t vb = vldrwq_z_s32 (b, p1); + int32x4_t vc = vaddq_x_s32 (va, vb, p2); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Generating and using a different VPR value in the loop, + with a vctp_m that is tied to the base vctp VPR. This + is still fine, because the vctp_m will be transformed + into a vctp and be implicitly predicated. */ +void test10 (int32_t *a, int32_t *b, int32_t *c, int n) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + mve_pred16_t p1 = vctp32q_m (n, p); + int32x4_t vb = vldrwq_z_s32 (b, p1); + int32x4_t vc = vaddq_x_s32 (va, vb, p1); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Generating and using a different VPR value in the loop, with a vcmp. 
*/ +void test11 (int32_t *a, int32_t *b, int32_t *c, int n) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_z_s32 (b, p); + mve_pred16_t p1 = vcmpeqq_s32 (va, vb); + int32x4_t vc = vaddq_x_s32 (va, vb, p1); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Generating and using a different VPR value in the loop, with a vcmp_m. */ +void test12 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_z_s32 (b, p); + mve_pred16_t p2 = vcmpeqq_m_s32 (va, vb, p1); + int32x4_t vc = vaddq_x_s32 (va, vb, p2); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Generating and using a different VPR value in the loop, with a vcmp_m + that is tied to the base vctp VPR (same as above, this will be turned + into a vcmp and be implicitly predicated). */ +void test13 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_z_s32 (b, p); + mve_pred16_t p2 = vcmpeqq_m_s32 (va, vb, p); + int32x4_t vc = vaddq_x_s32 (va, vb, p2); + vstrwq_p_s32 (c, vc, p); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Using an unpredicated op with a scalar output, where the result is valid + outside the bb. This is valid, because all the inputs to the unpredicated + op are correctly predicated. 
*/ +uint8_t test14 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx) +{ + uint8_t sum = 0; + while (n > 0) + { + mve_pred16_t p = vctp8q (n); + uint8x16_t va = vldrbq_z_u8 (a, p); + uint8x16_t vb = vldrbq_z_u8 (b, p); + uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p); + sum += vaddvq_u8 (vc); + a += 16; + b += 16; + n -= 16; + } + return sum; +} + +/* Same as above, but with another scalar op between the unpredicated op and + the scalar op outside the loop. */ +uint8_t test15 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx, int g) +{ + uint8_t sum = 0; + while (n > 0) + { + mve_pred16_t p = vctp8q (n); + uint8x16_t va = vldrbq_z_u8 (a, p); + uint8x16_t vb = vldrbq_z_u8 (b, p); + uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p); + sum += vaddvq_u8 (vc); + sum += g; + a += 16; + b += 16; + n -= 16; + } + return sum; +} + +/* Using an unpredicated vcmp to generate a new predicate value in the + loop and then using it in a predicated store insn. */ +void test16 (int32_t *a, int32_t *b, int32_t *c, int n) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_s32 (b); + int32x4_t vc = vaddq_x_s32 (va, vb, p); + mve_pred16_t p1 = vcmpeqq_s32 (va, vc); + vstrwq_p_s32 (c, vc, p1); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Using a predicated vcmp to generate a new predicate value in the + loop and then using it in a predicated store insn. */ +void test17 (int32_t *a, int32_t *b, int32_t *c, int n) +{ + while (n > 0) + { + mve_pred16_t p = vctp32q (n); + int32x4_t va = vldrwq_z_s32 (a, p); + int32x4_t vb = vldrwq_z_s32 (b, p); + int32x4_t vc = vaddq_s32 (va, vb); + mve_pred16_t p1 = vcmpeqq_m_s32 (va, vc, p); + vstrwq_p_s32 (c, vc, p1); + c += 4; + a += 4; + b += 4; + n -= 4; + } +} + +/* Using an across-vector unpredicated instruction in a valid way. + This tests that "vc" has correctly masked the risky "vb". 
*/
+uint16_t test18 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16x8_t vb = vldrhq_u16 (b);
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      uint16x8_t vc = vaddq_x_u16 (va, vb, p);
+      res = vaddvq_u16 (vc);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+/* Using an across-vector unpredicated instruction with a scalar from
+   outside the loop.  */
+uint16_t test19 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16x8_t vb = vldrhq_u16 (b);
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      uint16x8_t vc = vaddq_x_u16 (va, vb, p);
+      res = vaddvaq_u16 (res, vc);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+/* Using an across-vector predicated instruction in a valid way.  */
+uint16_t test20 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t vb = vldrhq_u16 (b);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      res = vaddvaq_p_u16 (res, vb, p);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+/* Using an across-vector predicated instruction in a valid way.
*/
+uint16_t test21 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t vb = vldrhq_u16 (b);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      res++;
+      res = vaddvaq_p_u16 (res, vb, p);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+int test22 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+  int res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      res = vmaxvq (res, va);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+int test23 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+  int res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      int8x16_t va = vldrbq_z_s8 (a, p);
+      res = vmaxavq (res, va);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+/* The expected number of DLSTPs is currently the number of
+   `TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY.*` macros
+   times 6, plus 23.  */
+/* { dg-final { scan-assembler-times {\tdlstp} 167 } } */
+/* { dg-final { scan-assembler-times {\tletp} 167 } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c
new file mode 100644
index 00000000000..0cdffb312b3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c
@@ -0,0 +1,68 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t *c, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      int16x8_t va = vldrhq_z_s16 (a, p);
+      int16x8_t vb = vldrhq_z_s16 (b, p);
+      int16x8_t vc = vaddq_x_s16 (va, vb, p);
+      vstrhq_p_s16 (c, vc, p);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+}
+
+int main ()
+{
+  int i;
+  int16_t temp1[N];
+  int16_t temp2[N];
+  int16_t temp3[N];
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1,
temp2, temp3, 0);
+  check_plus16 (temp1, temp2, temp3, 0);
+
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 1);
+  check_plus16 (temp1, temp2, temp3, 1);
+
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 7);
+  check_plus16 (temp1, temp2, temp3, 7);
+
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 8);
+  check_plus16 (temp1, temp2, temp3, 8);
+
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 9);
+  check_plus16 (temp1, temp2, temp3, 9);
+
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 16);
+  check_plus16 (temp1, temp2, temp3, 16);
+
+  reset_data16 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 17);
+  check_plus16 (temp1, temp2, temp3, 17);
+
+  reset_data16 (temp1, temp2, temp3, N);
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.16} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c
new file mode 100644
index 00000000000..7ff789d7650
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c
@@ -0,0 +1,68 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp32q (n);
+      int32x4_t va = vldrwq_z_s32 (a, p);
+      int32x4_t vb = vldrwq_z_s32 (b, p);
+      int32x4_t vc = vaddq_x_s32 (va, vb, p);
+      vstrwq_p_s32 (c, vc, p);
+      c += 4;
+      a += 4;
+      b += 4;
+      n -= 4;
+    }
+}
+
+int main ()
+{
+  int i;
+  int32_t temp1[N];
+  int32_t temp2[N];
+  int32_t temp3[N];
+  reset_data32 (temp1,
temp2, temp3, N);
+  test (temp1, temp2, temp3, 0);
+  check_plus32 (temp1, temp2, temp3, 0);
+
+  reset_data32 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 1);
+  check_plus32 (temp1, temp2, temp3, 1);
+
+  reset_data32 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 3);
+  check_plus32 (temp1, temp2, temp3, 3);
+
+  reset_data32 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 4);
+  check_plus32 (temp1, temp2, temp3, 4);
+
+  reset_data32 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 5);
+  check_plus32 (temp1, temp2, temp3, 5);
+
+  reset_data32 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 8);
+  check_plus32 (temp1, temp2, temp3, 8);
+
+  reset_data32 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 9);
+  check_plus32 (temp1, temp2, temp3, 9);
+
+  reset_data32 (temp1, temp2, temp3, N);
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.32} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c
new file mode 100644
index 00000000000..8065bd02469
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c
@@ -0,0 +1,68 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int64_t *a, int64_t *c, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp64q (n);
+      int64x2_t va = vldrdq_gather_offset_z_s64 (a, vcreateq_u64 (0, 8), p);
+      vstrdq_scatter_offset_p_s64 (c, vcreateq_u64 (0, 8), va, p);
+      c += 2;
+      a += 2;
+      n -= 2;
+    }
+}
+
+int main ()
+{
+  int i;
+  int64_t temp1[N];
+  int64_t temp3[N];
+  reset_data64 (temp1, temp3, N);
+  test
(temp1, temp3, 0);
+  check_memcpy64 (temp1, temp3, 0);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 1);
+  check_memcpy64 (temp1, temp3, 1);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 2);
+  check_memcpy64 (temp1, temp3, 2);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 3);
+  check_memcpy64 (temp1, temp3, 3);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 4);
+  check_memcpy64 (temp1, temp3, 4);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 5);
+  check_memcpy64 (temp1, temp3, 5);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 6);
+  check_memcpy64 (temp1, temp3, 6);
+
+  reset_data64 (temp1, temp3, N);
+  test (temp1, temp3, 7);
+  check_memcpy64 (temp1, temp3, 7);
+
+  reset_data64 (temp1, temp3, N);
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.64} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c
new file mode 100644
index 00000000000..552781001e9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c
@@ -0,0 +1,68 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      int8x16_t va = vldrbq_z_s8 (a, p);
+      int8x16_t vb = vldrbq_z_s8 (b, p);
+      int8x16_t vc = vaddq_x_s8 (va, vb, p);
+      vstrbq_p_s8 (c, vc, p);
+      c += 16;
+      a += 16;
+      b += 16;
+      n -= 16;
+    }
+}
+
+int main ()
+{
+  int i;
+  int8_t temp1[N];
+  int8_t temp2[N];
+  int8_t temp3[N];
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2,
temp3, 0);
+  check_plus8 (temp1, temp2, temp3, 0);
+
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 1);
+  check_plus8 (temp1, temp2, temp3, 1);
+
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 15);
+  check_plus8 (temp1, temp2, temp3, 15);
+
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 16);
+  check_plus8 (temp1, temp2, temp3, 16);
+
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 17);
+  check_plus8 (temp1, temp2, temp3, 17);
+
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 32);
+  check_plus8 (temp1, temp2, temp3, 32);
+
+  reset_data8 (temp1, temp2, temp3, N);
+  test (temp1, temp2, temp3, 33);
+  check_plus8 (temp1, temp2, temp3, 33);
+
+  reset_data8 (temp1, temp2, temp3, N);
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.8} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c
new file mode 100644
index 00000000000..c1c40c2fea7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c
@@ -0,0 +1,343 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O3 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+
+/* Terminating on a non-zero number of elements.  */
+void test0 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+  while (n > 1)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_z_u8 (b, p);
+      uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+      vstrbq_p_u8 (c, vc, p);
+      n -= 16;
+    }
+}
+
+/* Terminating on n >= 0.
*/
+void test1 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+  while (n >= 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_z_u8 (b, p);
+      uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+      vstrbq_p_u8 (c, vc, p);
+      n -= 16;
+    }
+}
+
+/* Similar, terminating on a non-zero number of elements, but in a for loop
+   format.  */
+int32_t a[] = {0, 1, 2, 3, 4, 5, 6, 7};
+void test2 (int32_t *b, int num_elems)
+{
+  for (int i = num_elems; i >= 2; i -= 4)
+    {
+      mve_pred16_t p = vctp32q (i);
+      int32x4_t va = vldrwq_z_s32 (&(a[i]), p);
+      vstrwq_p_s32 (b + i, va, p);
+    }
+}
+
+/* Iteration counter counting up to num_iter, with a non-zero starting num.  */
+void test3 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+  int num_iter = (n + 15) / 16;
+  for (int i = 1; i < num_iter; i++)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_z_u8 (b, p);
+      uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+      vstrbq_p_u8 (c, vc, p);
+      n -= 16;
+    }
+}
+
+/* Iteration counter counting up to num_iter, with a larger increment.  */
+void test4 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+  int num_iter = (n + 15) / 16;
+  for (int i = 0; i < num_iter; i += 2)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_z_u8 (b, p);
+      uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+      vstrbq_p_u8 (c, vc, p);
+      n -= 16;
+    }
+}
+
+/* Using an unpredicated store instruction within the loop.  */
+void test5 (uint8_t *a, uint8_t *b, uint8_t *c, uint8_t *d, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_z_u8 (b, p);
+      uint8x16_t vc = vaddq_u8 (va, vb);
+      uint8x16_t vd = vaddq_x_u8 (va, vb, p);
+      vstrbq_u8 (d, vd);
+      n -= 16;
+    }
+}
+
+/* Using an unpredicated store outside the loop.
*/
+void test6 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_z_u8 (b, p);
+      uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+      vx = vaddq_u8 (vx, vc);
+      a += 16;
+      b += 16;
+      n -= 16;
+    }
+  vstrbq_u8 (c, vx);
+}
+
+/* Using a VPR that gets modified within the loop.  */
+void test9 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp32q (n);
+      int32x4_t va = vldrwq_z_s32 (a, p);
+      p++;
+      int32x4_t vb = vldrwq_z_s32 (b, p);
+      int32x4_t vc = vaddq_x_s32 (va, vb, p);
+      vstrwq_p_s32 (c, vc, p);
+      c += 4;
+      a += 4;
+      b += 4;
+      n -= 4;
+    }
+}
+
+/* Using a VPR that gets re-generated within the loop.  */
+void test10 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+  mve_pred16_t p = vctp32q (n);
+  while (n > 0)
+    {
+      int32x4_t va = vldrwq_z_s32 (a, p);
+      p = vctp32q (n);
+      int32x4_t vb = vldrwq_z_s32 (b, p);
+      int32x4_t vc = vaddq_x_s32 (va, vb, p);
+      vstrwq_p_s32 (c, vc, p);
+      c += 4;
+      a += 4;
+      b += 4;
+      n -= 4;
+    }
+}
+
+/* Using vctp32q_m instead of vctp32q.  */
+void test11 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p0)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp32q_m (n, p0);
+      int32x4_t va = vldrwq_z_s32 (a, p);
+      int32x4_t vb = vldrwq_z_s32 (b, p);
+      int32x4_t vc = vaddq_x_s32 (va, vb, p);
+      vstrwq_p_s32 (c, vc, p);
+      c += 4;
+      a += 4;
+      b += 4;
+      n -= 4;
+    }
+}
+
+/* Using an unpredicated op with a scalar output, where the result is valid
+   outside the bb.  This is invalid, because one of the inputs to the
+   unpredicated op is also unpredicated.
*/
+uint8_t test12 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+  uint8_t sum = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      uint8x16_t vb = vldrbq_u8 (b);
+      uint8x16_t vc = vaddq_u8 (va, vb);
+      sum += vaddvq_u8 (vc);
+      a += 16;
+      b += 16;
+      n -= 16;
+    }
+  return sum;
+}
+
+/* Using an unpredicated vcmp to generate a new predicate value in the
+   loop and then using that VPR to predicate a store insn.  */
+void test13 (int32_t *a, int32_t *b, int32x4_t vc, int32_t *c, int n)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp32q (n);
+      int32x4_t va = vldrwq_s32 (a);
+      int32x4_t vb = vldrwq_z_s32 (b, p);
+      int32x4_t vc = vaddq_s32 (va, vb);
+      mve_pred16_t p1 = vcmpeqq_s32 (va, vc);
+      vstrwq_p_s32 (c, vc, p1);
+      c += 4;
+      a += 4;
+      b += 4;
+      n -= 4;
+    }
+}
+
+/* Using an across-vector unpredicated instruction.  "vb" is the risk.  */
+uint16_t test14 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16x8_t vb = vldrhq_u16 (b);
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      vb = vaddq_u16 (va, vb);
+      res = vaddvq_u16 (vb);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+/* Using an across-vector unpredicated instruction.  "vc" is the risk.
*/
+uint16_t test15 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16x8_t vb = vldrhq_u16 (b);
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      uint16x8_t vc = vaddq_u16 (va, vb);
+      res = vaddvaq_u16 (res, vc);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+uint16_t test16 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+  uint16_t res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp16q (n);
+      uint16x8_t vb = vldrhq_u16 (b);
+      uint16x8_t va = vldrhq_z_u16 (a, p);
+      res = vaddvaq_u16 (res, vb);
+      res = vaddvaq_p_u16 (res, va, p);
+      c += 8;
+      a += 8;
+      b += 8;
+      n -= 8;
+    }
+  return res;
+}
+
+int test17 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+  int res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      int8x16_t va = vldrbq_z_s8 (a, p);
+      res = vmaxvq (res, va);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+int test18 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+  int res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      int8x16_t va = vldrbq_z_s8 (a, p);
+      res = vminvq (res, va);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+int test19 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+  int res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      int8x16_t va = vldrbq_z_s8 (a, p);
+      res = vminavq (res, va);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+int test20 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+  int res = 0;
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      res = vminvq (res, va);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+uint8x16_t test21 (uint8_t *a, uint32_t *b, int n, uint8x16_t res)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      uint8x16_t va = vldrbq_z_u8 (a, p);
+      res = vshlcq_u8 (va, b, 1);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+
+int8x16_t test22 (int8_t *a, int32_t *b, int n, int8x16_t res)
+{
+  while (n > 0)
+    {
+      mve_pred16_t p = vctp8q (n);
+      int8x16_t va = vldrbq_z_s8 (a, p);
+      res =
vshlcq_s8 (va, b, 1);
+      n -= 16;
+      a += 16;
+    }
+  return res;
+}
+/* { dg-final { scan-assembler-not "\tdlstp" } } */
+/* { dg-final { scan-assembler-not "\tletp" } } */