From patchwork Fri May 5 16:48:57 2023
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 68841
From: Christophe Lyon
To: gcc-patches@gcc.gnu.org
Cc: Christophe Lyon
Subject: [PATCH 01/10] arm: [MVE intrinsics] add unary shape
Date: Fri, 5 May 2023 18:48:57 +0200
Message-ID: <20230505164906.596219-1-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1

This patch adds the unary shape description.

2022-09-08  Christophe Lyon

gcc/
* config/arm/arm-mve-builtins-shapes.cc (unary): New.
* config/arm/arm-mve-builtins-shapes.h (unary): New.
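As a quick illustration of what the unary shape covers, here is a small
usage sketch (not part of the patch; it only relies on the standard
arm_mve.h intrinsics quoted in the comment below, built with e.g.
-march=armv8.1-m.main+mve -mfloat-abi=hard):

#include <arm_mve.h>

/* Unpredicated form: the overloaded vabsq resolves to vabsq_s8 here.  */
int8x16_t
abs_s8 (int8x16_t a)
{
  return vabsq (a);
}

/* Predicated merging form: lanes where p is false come from 'inactive'.  */
int8x16_t
abs_m_s8 (int8x16_t inactive, int8x16_t a, mve_pred16_t p)
{
  return vabsq_m (inactive, a, p);
}

/* Predicated "don't care" form: lanes where p is false are undefined.  */
int8x16_t
abs_x_s8 (int8x16_t a, mve_pred16_t p)
{
  return vabsq_x (a, p);
}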
---
 gcc/config/arm/arm-mve-builtins-shapes.cc | 27 +++++++++++++++++++++++
 gcc/config/arm/arm-mve-builtins-shapes.h  |  1 +
 2 files changed, 28 insertions(+)

diff --git a/gcc/config/arm/arm-mve-builtins-shapes.cc b/gcc/config/arm/arm-mve-builtins-shapes.cc
index 7078f7d7220..7d39cf79aec 100644
--- a/gcc/config/arm/arm-mve-builtins-shapes.cc
+++ b/gcc/config/arm/arm-mve-builtins-shapes.cc
@@ -786,6 +786,33 @@ struct inherent_def : public nonoverloaded_base
 };
 SHAPE (inherent)
 
+/* _t vfoo[_t0](_t)
+
+   i.e. the standard shape for unary operations that operate on
+   uniform types.
+
+   Example: vabsq.
+   int8x16_t [__arm_]vabsq[_s8](int8x16_t a)
+   int8x16_t [__arm_]vabsq_m[_s8](int8x16_t inactive, int8x16_t a, mve_pred16_t p)
+   int8x16_t [__arm_]vabsq_x[_s8](int8x16_t a, mve_pred16_t p)  */
+struct unary_def : public overloaded_base<0>
+{
+  void
+  build (function_builder &b, const function_group_info &group,
+         bool preserve_user_namespace) const override
+  {
+    b.add_overloaded_functions (group, MODE_none, preserve_user_namespace);
+    build_all (b, "v0,v0", group, MODE_none, preserve_user_namespace);
+  }
+
+  tree
+  resolve (function_resolver &r) const override
+  {
+    return r.resolve_unary ();
+  }
+};
+SHAPE (unary)
+
 /* _t foo_t0[_t1](_t)
 
    where the target type must be specified explicitly but the source

diff --git a/gcc/config/arm/arm-mve-builtins-shapes.h b/gcc/config/arm/arm-mve-builtins-shapes.h
index 09e00b69e63..bd7e11b89f6 100644
--- a/gcc/config/arm/arm-mve-builtins-shapes.h
+++ b/gcc/config/arm/arm-mve-builtins-shapes.h
@@ -45,6 +45,7 @@ namespace arm_mve
   extern const function_shape *const binary_rshift_narrow_unsigned;
   extern const function_shape *const create;
   extern const function_shape *const inherent;
+  extern const function_shape *const unary;
   extern const function_shape *const unary_convert;
 
 } /* end namespace arm_mve::shapes */

From patchwork Fri May 5 16:48:58 2023
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 68842
From: Christophe Lyon
To: gcc-patches@gcc.gnu.org
Cc: Christophe Lyon
Subject: [PATCH 02/10] arm: [MVE intrinsics] factorize several unary operations
Date: Fri, 5 May 2023 18:48:58 +0200
Message-ID: <20230505164906.596219-2-christophe.lyon@arm.com>
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1

Factorize vabs vcls vclz vneg vqabs vqneg vrnda vrndm vrndn vrndp vrnd
vrndx so that they use the same pattern.

This patch introduces the mve_mnemo iterator because some of the involved
intrinsics have a different name from their mnemonic: for instance vrndq
vs vrintz.  (A short usage sketch follows the iterators.md changes below.)

2022-09-08  Christophe Lyon

gcc/
* config/arm/iterators.md (MVE_INT_M_UNARY, MVE_INT_UNARY)
(MVE_FP_UNARY, MVE_FP_M_UNARY): New.
(mve_insn): Add vabs, vcls, vclz, vneg, vqabs, vqneg, vrnda, vrndm,
vrndn, vrndp, vrnd, vrndx.
(isu): Add VABSQ_M_S, VCLSQ_M_S, VCLZQ_M_S, VCLZQ_M_U, VNEGQ_M_S,
VQABSQ_M_S, VQNEGQ_M_S.
(mve_mnemo): New.
* config/arm/mve.md (mve_vrndq_m_f, mve_vrndxq_f) (mve_vrndq_f, mve_vrndpq_f, mve_vrndnq_f) (mve_vrndmq_f, mve_vrndaq_f): Merge into ... (@mve_q_f): ... this. (mve_vnegq_f, mve_vabsq_f): Merge into ... (mve_vq_f): ... this. (mve_vnegq_s, mve_vabsq_s): Merge into ... (mve_vq_s): ... this. (mve_vclsq_s, mve_vqnegq_s, mve_vqabsq_s): Merge into ... (@mve_q_): ... this. (mve_vabsq_m_s, mve_vclsq_m_s) (mve_vclzq_m_, mve_vnegq_m_s) (mve_vqabsq_m_s, mve_vqnegq_m_s): Merge into ... (@mve_q_m_): ... this. (mve_vabsq_m_f, mve_vnegq_m_f, mve_vrndaq_m_f) (mve_vrndmq_m_f, mve_vrndnq_m_f, mve_vrndpq_m_f) (mve_vrndxq_m_f): Merge into ... (@mve_q_m_f): ... this. --- gcc/config/arm/iterators.md | 80 ++++++++ gcc/config/arm/mve.md | 383 +++++------------------------------- 2 files changed, 126 insertions(+), 337 deletions(-) diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md index 53873704174..0b4f69ee874 100644 --- a/gcc/config/arm/iterators.md +++ b/gcc/config/arm/iterators.md @@ -333,6 +333,42 @@ (define_code_iterator SSPLUSMINUS [ss_plus ss_minus]) ;; Max/Min iterator, to factorize MVE patterns (define_code_iterator MAX_MIN_SU [smax umax smin umin]) +;; MVE integer unary operations. +(define_int_iterator MVE_INT_M_UNARY [ + VABSQ_M_S + VCLSQ_M_S + VCLZQ_M_S VCLZQ_M_U + VNEGQ_M_S + VQABSQ_M_S + VQNEGQ_M_S + ]) + +(define_int_iterator MVE_INT_UNARY [ + VCLSQ_S + VQABSQ_S + VQNEGQ_S + ]) + +(define_int_iterator MVE_FP_UNARY [ + VRNDQ_F + VRNDAQ_F + VRNDMQ_F + VRNDNQ_F + VRNDPQ_F + VRNDXQ_F + ]) + +(define_int_iterator MVE_FP_M_UNARY [ + VABSQ_M_F + VNEGQ_M_F + VRNDAQ_M_F + VRNDMQ_M_F + VRNDNQ_M_F + VRNDPQ_M_F + VRNDQ_M_F + VRNDXQ_M_F + ]) + ;; MVE integer binary operations. (define_code_iterator MVE_INT_BINARY_RTX [plus minus mult]) @@ -551,6 +587,8 @@ (define_code_attr mve_addsubmul [ (define_int_attr mve_insn [ (VABDQ_M_S "vabd") (VABDQ_M_U "vabd") (VABDQ_M_F "vabd") (VABDQ_S "vabd") (VABDQ_U "vabd") (VABDQ_F "vabd") + (VABSQ_M_F "vabs") + (VABSQ_M_S "vabs") (VADDQ_M_N_S "vadd") (VADDQ_M_N_U "vadd") (VADDQ_M_N_F "vadd") (VADDQ_M_S "vadd") (VADDQ_M_U "vadd") (VADDQ_M_F "vadd") (VADDQ_N_S "vadd") (VADDQ_N_U "vadd") (VADDQ_N_F "vadd") @@ -558,6 +596,9 @@ (define_int_attr mve_insn [ (VBICQ_M_N_S "vbic") (VBICQ_M_N_U "vbic") (VBICQ_M_S "vbic") (VBICQ_M_U "vbic") (VBICQ_M_F "vbic") (VBICQ_N_S "vbic") (VBICQ_N_U "vbic") + (VCLSQ_M_S "vcls") + (VCLSQ_S "vcls") + (VCLZQ_M_S "vclz") (VCLZQ_M_U "vclz") (VCREATEQ_S "vcreate") (VCREATEQ_U "vcreate") (VCREATEQ_F "vcreate") (VEORQ_M_S "veor") (VEORQ_M_U "veor") (VEORQ_M_F "veor") (VHADDQ_M_N_S "vhadd") (VHADDQ_M_N_U "vhadd") @@ -577,9 +618,13 @@ (define_int_attr mve_insn [ (VMULQ_M_N_S "vmul") (VMULQ_M_N_U "vmul") (VMULQ_M_N_F "vmul") (VMULQ_M_S "vmul") (VMULQ_M_U "vmul") (VMULQ_M_F "vmul") (VMULQ_N_S "vmul") (VMULQ_N_U "vmul") (VMULQ_N_F "vmul") + (VNEGQ_M_F "vneg") + (VNEGQ_M_S "vneg") (VORRQ_M_N_S "vorr") (VORRQ_M_N_U "vorr") (VORRQ_M_S "vorr") (VORRQ_M_U "vorr") (VORRQ_M_F "vorr") (VORRQ_N_S "vorr") (VORRQ_N_U "vorr") + (VQABSQ_M_S "vqabs") + (VQABSQ_S "vqabs") (VQADDQ_M_N_S "vqadd") (VQADDQ_M_N_U "vqadd") (VQADDQ_M_S "vqadd") (VQADDQ_M_U "vqadd") (VQADDQ_N_S "vqadd") (VQADDQ_N_U "vqadd") @@ -594,6 +639,8 @@ (define_int_attr mve_insn [ (VQDMULHQ_M_S "vqdmulh") (VQDMULHQ_N_S "vqdmulh") (VQDMULHQ_S "vqdmulh") + (VQNEGQ_M_S "vqneg") + (VQNEGQ_S "vqneg") (VQRDMLADHQ_M_S "vqrdmladh") (VQRDMLADHXQ_M_S "vqrdmladhx") (VQRDMLAHQ_M_N_S "vqrdmlah") @@ -638,6 +685,12 @@ (define_int_attr mve_insn [ (VRHADDQ_S "vrhadd") (VRHADDQ_U "vrhadd") (VRMULHQ_M_S 
"vrmulh") (VRMULHQ_M_U "vrmulh") (VRMULHQ_S "vrmulh") (VRMULHQ_U "vrmulh") + (VRNDAQ_F "vrnda") (VRNDAQ_M_F "vrnda") + (VRNDMQ_F "vrndm") (VRNDMQ_M_F "vrndm") + (VRNDNQ_F "vrndn") (VRNDNQ_M_F "vrndn") + (VRNDPQ_F "vrndp") (VRNDPQ_M_F "vrndp") + (VRNDQ_F "vrnd") (VRNDQ_M_F "vrnd") + (VRNDXQ_F "vrndx") (VRNDXQ_M_F "vrndx") (VRSHLQ_M_N_S "vrshl") (VRSHLQ_M_N_U "vrshl") (VRSHLQ_M_S "vrshl") (VRSHLQ_M_U "vrshl") (VRSHLQ_N_S "vrshl") (VRSHLQ_N_U "vrshl") @@ -666,6 +719,13 @@ (define_int_attr mve_insn [ ]) (define_int_attr isu [ + (VABSQ_M_S "s") + (VCLSQ_M_S "s") + (VCLZQ_M_S "i") + (VCLZQ_M_U "i") + (VNEGQ_M_S "s") + (VQABSQ_M_S "s") + (VQNEGQ_M_S "s") (VQRSHRNBQ_M_N_S "s") (VQRSHRNBQ_M_N_U "u") (VQRSHRNBQ_N_S "s") (VQRSHRNBQ_N_U "u") (VQRSHRNTQ_M_N_S "s") (VQRSHRNTQ_M_N_U "u") @@ -692,6 +752,17 @@ (define_int_attr isu [ (VSHRNTQ_N_S "i") (VSHRNTQ_N_U "i") ]) +(define_int_attr mve_mnemo [ + (VABSQ_M_S "vabs") (VABSQ_M_F "vabs") + (VNEGQ_M_S "vneg") (VNEGQ_M_F "vneg") + (VRNDAQ_F "vrinta") (VRNDAQ_M_F "vrinta") + (VRNDMQ_F "vrintm") (VRNDMQ_M_F "vrintm") + (VRNDNQ_F "vrintn") (VRNDNQ_M_F "vrintn") + (VRNDPQ_F "vrintp") (VRNDPQ_M_F "vrintp") + (VRNDQ_F "vrintz") (VRNDQ_M_F "vrintz") + (VRNDXQ_F "vrintx") (VRNDXQ_M_F "vrintx") + ]) + ;; plus and minus are the only SHIFTABLE_OPS for which Thumb2 allows ;; a stack pointer operand. The minus operation is a candidate for an rsub ;; and hence only plus is supported. @@ -1862,6 +1933,15 @@ (define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s") (VQSHRUNBQ_N_S "s") (VQSHRUNTQ_M_N_S "s") (VQSHRUNTQ_N_S "s") + (VABSQ_M_S "s") + (VCLSQ_M_S "s") + (VCLZQ_M_S "s") (VCLZQ_M_U "u") + (VNEGQ_M_S "s") + (VQABSQ_M_S "s") + (VQNEGQ_M_S "s") + (VCLSQ_S "s") + (VQABSQ_S "s") + (VQNEGQ_S "s") ]) ;; Both kinds of return insn. 
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md index b5c89fd4105..7bf344d547a 100644 --- a/gcc/config/arm/mve.md +++ b/gcc/config/arm/mve.md @@ -130,102 +130,21 @@ (define_insn "mve_vst4q" [(set_attr "length" "16")]) ;; -;; [vrndq_m_f]) +;; [vrndaq_f] +;; [vrndmq_f] +;; [vrndnq_f] +;; [vrndpq_f] +;; [vrndq_f] +;; [vrndxq_f] ;; -(define_insn "mve_vrndq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VRNDQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vrintzt.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vrndxq_f]) -;; -(define_insn "mve_vrndxq_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "w")] - VRNDXQ_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vrintx.f%# %q0, %q1" - [(set_attr "type" "mve_move") -]) - -;; -;; [vrndq_f]) -;; -(define_insn "mve_vrndq_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "w")] - VRNDQ_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vrintz.f%# %q0, %q1" - [(set_attr "type" "mve_move") -]) - -;; -;; [vrndpq_f]) -;; -(define_insn "mve_vrndpq_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "w")] - VRNDPQ_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vrintp.f%# %q0, %q1" - [(set_attr "type" "mve_move") -]) - -;; -;; [vrndnq_f]) -;; -(define_insn "mve_vrndnq_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "w")] - VRNDNQ_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vrintn.f%# %q0, %q1" - [(set_attr "type" "mve_move") -]) - -;; -;; [vrndmq_f]) -;; -(define_insn "mve_vrndmq_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "w")] - VRNDMQ_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vrintm.f%# %q0, %q1" - [(set_attr "type" "mve_move") -]) - -;; -;; [vrndaq_f]) -;; -(define_insn "mve_vrndaq_f" +(define_insn "@mve_q_f" [ (set (match_operand:MVE_0 0 "s_register_operand" "=w") (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "w")] - VRNDAQ_F)) + MVE_FP_UNARY)) ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vrinta.f%# %q0, %q1" + ".f%#\t%q0, %q1" [(set_attr "type" "mve_move") ]) @@ -244,15 +163,16 @@ (define_insn "mve_vrev64q_f" ]) ;; -;; [vnegq_f]) +;; [vabsq_f] +;; [vnegq_f] ;; -(define_insn "mve_vnegq_f" +(define_insn "mve_vq_f" [ (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (neg:MVE_0 (match_operand:MVE_0 1 "s_register_operand" "w"))) + (ABSNEG:MVE_0 (match_operand:MVE_0 1 "s_register_operand" "w"))) ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vneg.f%#\t%q0, %q1" + "v.f%#\t%q0, %q1" [(set_attr "type" "mve_move") ]) @@ -270,19 +190,6 @@ (define_insn "mve_vdupq_n_f" [(set_attr "type" "mve_move") ]) -;; -;; [vabsq_f]) -;; -(define_insn "mve_vabsq_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (abs:MVE_0 (match_operand:MVE_0 1 "s_register_operand" "w"))) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vabs.f%#\t%q0, %q1" - [(set_attr "type" "mve_move") -]) - ;; ;; [vrev32q_f]) ;; @@ -365,43 +272,18 @@ (define_insn "mve_vcvtq_from_f_" "vcvt.%#.f%# 
%q0, %q1" [(set_attr "type" "mve_move") ]) -;; [vqnegq_s]) -;; -(define_insn "mve_vqnegq_s" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "w")] - VQNEGQ_S)) - ] - "TARGET_HAVE_MVE" - "vqneg.s%#\t%q0, %q1" - [(set_attr "type" "mve_move") -]) - -;; -;; [vqabsq_s]) -;; -(define_insn "mve_vqabsq_s" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "w")] - VQABSQ_S)) - ] - "TARGET_HAVE_MVE" - "vqabs.s%#\t%q0, %q1" - [(set_attr "type" "mve_move") -]) ;; -;; [vnegq_s]) +;; [vabsq_s] +;; [vnegq_s] ;; -(define_insn "mve_vnegq_s" +(define_insn "mve_vq_s" [ (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (neg:MVE_2 (match_operand:MVE_2 1 "s_register_operand" "w"))) + (ABSNEG:MVE_2 (match_operand:MVE_2 1 "s_register_operand" "w"))) ] "TARGET_HAVE_MVE" - "vneg.s%#\t%q0, %q1" + "v.s%#\t%q0, %q1" [(set_attr "type" "mve_move") ]) @@ -460,16 +342,18 @@ (define_expand "mve_vclzq_u" ) ;; -;; [vclsq_s]) +;; [vclsq_s] +;; [vqabsq_s] +;; [vqnegq_s] ;; -(define_insn "mve_vclsq_s" +(define_insn "@mve_q_" [ (set (match_operand:MVE_2 0 "s_register_operand" "=w") (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "w")] - VCLSQ_S)) + MVE_INT_UNARY)) ] "TARGET_HAVE_MVE" - "vcls.s%#\t%q0, %q1" + ".%#\t%q0, %q1" [(set_attr "type" "mve_move") ]) @@ -487,19 +371,6 @@ (define_insn "@mve_vaddvq_" [(set_attr "type" "mve_move") ]) -;; -;; [vabsq_s]) -;; -(define_insn "mve_vabsq_s" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (abs:MVE_2 (match_operand:MVE_2 1 "s_register_operand" "w"))) - ] - "TARGET_HAVE_MVE" - "vabs.s%#\t%q0, %q1" - [(set_attr "type" "mve_move") -]) - ;; ;; [vrev32q_u, vrev32q_s]) ;; @@ -2254,18 +2125,23 @@ (define_insn "mve_vshlcq_" "vshlc %q0, %1, %4") ;; -;; [vabsq_m_s]) +;; [vabsq_m_s] +;; [vclsq_m_s] +;; [vclzq_m_s, vclzq_m_u] +;; [vnegq_m_s] +;; [vqabsq_m_s] +;; [vqnegq_m_s] ;; -(define_insn "mve_vabsq_m_s" +(define_insn "@mve_q_m_" [ (set (match_operand:MVE_2 0 "s_register_operand" "=w") (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0") (match_operand:MVE_2 2 "s_register_operand" "w") (match_operand: 3 "vpr_register_operand" "Up")] - VABSQ_M_S)) + MVE_INT_M_UNARY)) ] "TARGET_HAVE_MVE" - "vpst\;vabst.s%# %q0, %q2" + "vpst\;t.%#\t%q0, %q2" [(set_attr "type" "mve_move") (set_attr "length""8")]) @@ -2285,38 +2161,6 @@ (define_insn "mve_vaddvaq_p_" [(set_attr "type" "mve_move") (set_attr "length""8")]) -;; -;; [vclsq_m_s]) -;; -(define_insn "mve_vclsq_m_s" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0") - (match_operand:MVE_2 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VCLSQ_M_S)) - ] - "TARGET_HAVE_MVE" - "vpst\;vclst.s%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vclzq_m_s, vclzq_m_u]) -;; -(define_insn "mve_vclzq_m_" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0") - (match_operand:MVE_2 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VCLZQ_M)) - ] - "TARGET_HAVE_MVE" - "vpst\;vclzt.i%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - ;; ;; [vcmpcsq_m_n_u]) ;; @@ -2813,22 +2657,6 @@ (define_insn "mve_vmvnq_m_" [(set_attr "type" "mve_move") (set_attr "length""8")]) -;; -;; [vnegq_m_s]) -;; -(define_insn "mve_vnegq_m_s" - [ - (set 
(match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0") - (match_operand:MVE_2 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VNEGQ_M_S)) - ] - "TARGET_HAVE_MVE" - "vpst\;vnegt.s%#\t%q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - ;; ;; [vpselq_u, vpselq_s]) ;; @@ -2845,22 +2673,6 @@ (define_insn "@mve_vpselq_" [(set_attr "type" "mve_move") ]) -;; -;; [vqabsq_m_s]) -;; -(define_insn "mve_vqabsq_m_s" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0") - (match_operand:MVE_2 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VQABSQ_M_S)) - ] - "TARGET_HAVE_MVE" - "vpst\;vqabst.s%#\t%q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - ;; ;; [vqdmlahq_n_s]) ;; @@ -2893,22 +2705,6 @@ (define_insn "mve_vqdmlashq_n_" [(set_attr "type" "mve_move") ]) -;; -;; [vqnegq_m_s]) -;; -(define_insn "mve_vqnegq_m_s" - [ - (set (match_operand:MVE_2 0 "s_register_operand" "=w") - (unspec:MVE_2 [(match_operand:MVE_2 1 "s_register_operand" "0") - (match_operand:MVE_2 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VQNEGQ_M_S)) - ] - "TARGET_HAVE_MVE" - "vpst\;vqnegt.s%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - ;; ;; [vqrdmladhq_s]) ;; @@ -3198,19 +2994,27 @@ (define_insn "mve_vmladavaxq_s" "vmladavax.s%#\t%0, %q2, %q3" [(set_attr "type" "mve_move") ]) + ;; -;; [vabsq_m_f]) +;; [vabsq_m_f] +;; [vnegq_m_f] +;; [vrndaq_m_f] +;; [vrndmq_m_f] +;; [vrndnq_m_f] +;; [vrndpq_m_f] +;; [vrndq_m_f] +;; [vrndxq_m_f] ;; -(define_insn "mve_vabsq_m_f" +(define_insn "@mve_q_m_f" [ (set (match_operand:MVE_0 0 "s_register_operand" "=w") (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") (match_operand:MVE_0 2 "s_register_operand" "w") (match_operand: 3 "vpr_register_operand" "Up")] - VABSQ_M_F)) + MVE_FP_M_UNARY)) ] "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vabst.f%# %q0, %q2" + "vpst\;t.f%#\t%q0, %q2" [(set_attr "type" "mve_move") (set_attr "length""8")]) @@ -3863,21 +3667,6 @@ (define_insn "mve_vmvnq_m_n_" "vpst\;vmvnt.i%# %q0, %2" [(set_attr "type" "mve_move") (set_attr "length""8")]) -;; -;; [vnegq_m_f]) -;; -(define_insn "mve_vnegq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VNEGQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vnegt.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) ;; ;; [vbicq_m_n_s, vbicq_m_n_u] @@ -4104,86 +3893,6 @@ (define_insn "mve_vrmlsldavhxq_p_sv4si" [(set_attr "type" "mve_move") (set_attr "length""8")]) -;; -;; [vrndaq_m_f]) -;; -(define_insn "mve_vrndaq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VRNDAQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vrintat.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vrndmq_m_f]) -;; -(define_insn "mve_vrndmq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 
"s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VRNDMQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vrintmt.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vrndnq_m_f]) -;; -(define_insn "mve_vrndnq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VRNDNQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vrintnt.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vrndpq_m_f]) -;; -(define_insn "mve_vrndpq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VRNDPQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vrintpt.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vrndxq_m_f]) -;; -(define_insn "mve_vrndxq_m_f" - [ - (set (match_operand:MVE_0 0 "s_register_operand" "=w") - (unspec:MVE_0 [(match_operand:MVE_0 1 "s_register_operand" "0") - (match_operand:MVE_0 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VRNDXQ_M_F)) - ] - "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" - "vpst\;vrintxt.f%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - ;; ;; [vcvtmq_m_s, vcvtmq_m_u]) ;; From patchwork Fri May 5 16:48:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 68845 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 722673870885 for ; Fri, 5 May 2023 16:51:19 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 722673870885 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1683305479; bh=4v/ZAgYphoyVEM9WM9HIvgrOpGlOYw5bUj1XXzoA4OQ=; h=To:CC:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=yuPVGQrmk1VpoVwCx5O2VtyChKq32RtSd0jdhmmfcp+RZA1/wEGERfAl0BuzLKavi uIy7uw4EC1ZXVH0foU3E512Yq9v0zkshdw5bjM0fYBDWw4Ta5Fc0EkIEHwi6eL1Krc IFyamrBC/b61twEoiu3yzrIG/GoiP6H8hHmhYtbI= X-Original-To: gcc-patches@gcc.gnu.org Delivered-To: gcc-patches@gcc.gnu.org Received: from EUR04-VI1-obe.outbound.protection.outlook.com (mail-vi1eur04on2085.outbound.protection.outlook.com [40.107.8.85]) by sourceware.org (Postfix) with ESMTPS id 420953853801 for ; Fri, 5 May 2023 16:50:00 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 420953853801 Received: from AS9PR04CA0041.eurprd04.prod.outlook.com (2603:10a6:20b:46a::30) by DBBPR08MB6186.eurprd08.prod.outlook.com (2603:10a6:10:204::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May 2023 16:49:45 +0000 Received: from AM7EUR03FT012.eop-EUR03.prod.protection.outlook.com (2603:10a6:20b:46a:cafe::e) by AS9PR04CA0041.outlook.office365.com (2603:10a6:20b:46a::30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27 via Frontend Transport; Fri, 5 May 2023 16:49:45 +0000 
From: Christophe Lyon
To: gcc-patches@gcc.gnu.org
Cc: Christophe Lyon
Subject: [PATCH 03/10] arm: [MVE intrinsics] rework vabsq vnegq vclsq vclzq, vqabsq, vqnegq
Date: Fri, 5 May 2023 18:48:59 +0200
Message-ID: <20230505164906.596219-3-christophe.lyon@arm.com>
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1

Implement vabsq, vnegq, vclsq, vclzq, vqabsq, vqnegq using the new MVE
builtins framework.

2022-09-08  Christophe Lyon

gcc/
* config/arm/arm-mve-builtins-base.cc (FUNCTION_WITHOUT_N_NO_U_F): New.
(vabsq, vnegq, vclsq, vclzq, vqabsq, vqnegq): New.
* config/arm/arm-mve-builtins-base.def (vabsq, vnegq, vclsq)
(vclzq, vqabsq, vqnegq): New.
* config/arm/arm-mve-builtins-base.h (vabsq, vnegq, vclsq, vclzq)
(vqabsq, vqnegq): New.
* config/arm/arm_mve.h (vabsq): Remove. (vabsq_m): Remove. (vabsq_x): Remove. (vabsq_f16): Remove. (vabsq_f32): Remove. (vabsq_s8): Remove. (vabsq_s16): Remove. (vabsq_s32): Remove. (vabsq_m_s8): Remove. (vabsq_m_s16): Remove. (vabsq_m_s32): Remove.
(vabsq_m_f16): Remove. (vabsq_m_f32): Remove. (vabsq_x_s8): Remove. (vabsq_x_s16): Remove. (vabsq_x_s32): Remove. (vabsq_x_f16): Remove. (vabsq_x_f32): Remove. (__arm_vabsq_s8): Remove. (__arm_vabsq_s16): Remove. (__arm_vabsq_s32): Remove. (__arm_vabsq_m_s8): Remove. (__arm_vabsq_m_s16): Remove. (__arm_vabsq_m_s32): Remove. (__arm_vabsq_x_s8): Remove. (__arm_vabsq_x_s16): Remove. (__arm_vabsq_x_s32): Remove. (__arm_vabsq_f16): Remove. (__arm_vabsq_f32): Remove. (__arm_vabsq_m_f16): Remove. (__arm_vabsq_m_f32): Remove. (__arm_vabsq_x_f16): Remove. (__arm_vabsq_x_f32): Remove. (__arm_vabsq): Remove. (__arm_vabsq_m): Remove. (__arm_vabsq_x): Remove. (vnegq): Remove. (vnegq_m): Remove. (vnegq_x): Remove. (vnegq_f16): Remove. (vnegq_f32): Remove. (vnegq_s8): Remove. (vnegq_s16): Remove. (vnegq_s32): Remove. (vnegq_m_s8): Remove. (vnegq_m_s16): Remove. (vnegq_m_s32): Remove. (vnegq_m_f16): Remove. (vnegq_m_f32): Remove. (vnegq_x_s8): Remove. (vnegq_x_s16): Remove. (vnegq_x_s32): Remove. (vnegq_x_f16): Remove. (vnegq_x_f32): Remove. (__arm_vnegq_s8): Remove. (__arm_vnegq_s16): Remove. (__arm_vnegq_s32): Remove. (__arm_vnegq_m_s8): Remove. (__arm_vnegq_m_s16): Remove. (__arm_vnegq_m_s32): Remove. (__arm_vnegq_x_s8): Remove. (__arm_vnegq_x_s16): Remove. (__arm_vnegq_x_s32): Remove. (__arm_vnegq_f16): Remove. (__arm_vnegq_f32): Remove. (__arm_vnegq_m_f16): Remove. (__arm_vnegq_m_f32): Remove. (__arm_vnegq_x_f16): Remove. (__arm_vnegq_x_f32): Remove. (__arm_vnegq): Remove. (__arm_vnegq_m): Remove. (__arm_vnegq_x): Remove. (vclsq): Remove. (vclsq_m): Remove. (vclsq_x): Remove. (vclsq_s8): Remove. (vclsq_s16): Remove. (vclsq_s32): Remove. (vclsq_m_s8): Remove. (vclsq_m_s16): Remove. (vclsq_m_s32): Remove. (vclsq_x_s8): Remove. (vclsq_x_s16): Remove. (vclsq_x_s32): Remove. (__arm_vclsq_s8): Remove. (__arm_vclsq_s16): Remove. (__arm_vclsq_s32): Remove. (__arm_vclsq_m_s8): Remove. (__arm_vclsq_m_s16): Remove. (__arm_vclsq_m_s32): Remove. (__arm_vclsq_x_s8): Remove. (__arm_vclsq_x_s16): Remove. (__arm_vclsq_x_s32): Remove. (__arm_vclsq): Remove. (__arm_vclsq_m): Remove. (__arm_vclsq_x): Remove. (vclzq): Remove. (vclzq_m): Remove. (vclzq_x): Remove. (vclzq_s8): Remove. (vclzq_s16): Remove. (vclzq_s32): Remove. (vclzq_u8): Remove. (vclzq_u16): Remove. (vclzq_u32): Remove. (vclzq_m_u8): Remove. (vclzq_m_s8): Remove. (vclzq_m_u16): Remove. (vclzq_m_s16): Remove. (vclzq_m_u32): Remove. (vclzq_m_s32): Remove. (vclzq_x_s8): Remove. (vclzq_x_s16): Remove. (vclzq_x_s32): Remove. (vclzq_x_u8): Remove. (vclzq_x_u16): Remove. (vclzq_x_u32): Remove. (__arm_vclzq_s8): Remove. (__arm_vclzq_s16): Remove. (__arm_vclzq_s32): Remove. (__arm_vclzq_u8): Remove. (__arm_vclzq_u16): Remove. (__arm_vclzq_u32): Remove. (__arm_vclzq_m_u8): Remove. (__arm_vclzq_m_s8): Remove. (__arm_vclzq_m_u16): Remove. (__arm_vclzq_m_s16): Remove. (__arm_vclzq_m_u32): Remove. (__arm_vclzq_m_s32): Remove. (__arm_vclzq_x_s8): Remove. (__arm_vclzq_x_s16): Remove. (__arm_vclzq_x_s32): Remove. (__arm_vclzq_x_u8): Remove. (__arm_vclzq_x_u16): Remove. (__arm_vclzq_x_u32): Remove. (__arm_vclzq): Remove. (__arm_vclzq_m): Remove. (__arm_vclzq_x): Remove. (vqabsq): Remove. (vqnegq): Remove. (vqnegq_m): Remove. (vqabsq_m): Remove. (vqabsq_s8): Remove. (vqabsq_s16): Remove. (vqabsq_s32): Remove. (vqnegq_s8): Remove. (vqnegq_s16): Remove. (vqnegq_s32): Remove. (vqnegq_m_s8): Remove. (vqabsq_m_s8): Remove. (vqnegq_m_s16): Remove. (vqabsq_m_s16): Remove. (vqnegq_m_s32): Remove. (vqabsq_m_s32): Remove. (__arm_vqabsq_s8): Remove. (__arm_vqabsq_s16): Remove. 
(__arm_vqabsq_s32): Remove. (__arm_vqnegq_s8): Remove. (__arm_vqnegq_s16): Remove. (__arm_vqnegq_s32): Remove. (__arm_vqnegq_m_s8): Remove. (__arm_vqabsq_m_s8): Remove. (__arm_vqnegq_m_s16): Remove. (__arm_vqabsq_m_s16): Remove. (__arm_vqnegq_m_s32): Remove. (__arm_vqabsq_m_s32): Remove. (__arm_vqabsq): Remove. (__arm_vqnegq): Remove. (__arm_vqnegq_m): Remove. (__arm_vqabsq_m): Remove. --- gcc/config/arm/arm-mve-builtins-base.cc | 16 + gcc/config/arm/arm-mve-builtins-base.def | 8 + gcc/config/arm/arm-mve-builtins-base.h | 6 + gcc/config/arm/arm_mve.h | 1272 +--------------------- 4 files changed, 34 insertions(+), 1268 deletions(-) diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc index bb585a3921f..627553f1784 100644 --- a/gcc/config/arm/arm-mve-builtins-base.cc +++ b/gcc/config/arm/arm-mve-builtins-base.cc @@ -193,9 +193,22 @@ namespace arm_mve { -1, -1, -1, \ UNSPEC##_M_N_S, -1, -1)) + /* Helper for builtins with only unspec codes, _m predicated + overrides, but no _n version, no unsigned and no + floating-point. */ +#define FUNCTION_WITHOUT_N_NO_U_F(NAME, UNSPEC) FUNCTION \ + (NAME, unspec_mve_function_exact_insn, \ + (UNSPEC##_S, -1, -1, \ + -1, -1, -1, \ + UNSPEC##_M_S, -1, -1, \ + -1, -1, -1)) + FUNCTION_WITHOUT_N (vabdq, VABDQ) +FUNCTION (vabsq, unspec_based_mve_function_exact_insn, (ABS, ABS, ABS, -1, -1, -1, VABSQ_M_S, -1, VABSQ_M_F, -1, -1, -1)) FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ) FUNCTION_WITH_RTX_M (vandq, AND, VANDQ) +FUNCTION_WITHOUT_N_NO_U_F (vclsq, VCLSQ) +FUNCTION (vclzq, unspec_based_mve_function_exact_insn, (CLZ, CLZ, CLZ, -1, -1, -1, VCLZQ_M_S, VCLZQ_M_U, -1, -1, -1 ,-1)) FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ) FUNCTION_WITH_RTX_M (veorq, XOR, VEORQ) FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ) @@ -204,9 +217,12 @@ FUNCTION_WITH_RTX_M_NO_F (vmaxq, SMAX, UMAX, VMAXQ) FUNCTION_WITH_RTX_M_NO_F (vminq, SMIN, UMIN, VMINQ) FUNCTION_WITHOUT_N_NO_F (vmulhq, VMULHQ) FUNCTION_WITH_RTX_M_N (vmulq, MULT, VMULQ) +FUNCTION (vnegq, unspec_based_mve_function_exact_insn, (NEG, NEG, NEG, -1, -1, -1, VNEGQ_M_S, -1, VNEGQ_M_F, -1, -1, -1)) FUNCTION_WITH_RTX_M_N_NO_N_F (vorrq, IOR, VORRQ) +FUNCTION_WITHOUT_N_NO_U_F (vqabsq, VQABSQ) FUNCTION_WITH_M_N_NO_F (vqaddq, VQADDQ) FUNCTION_WITH_M_N_NO_U_F (vqdmulhq, VQDMULHQ) +FUNCTION_WITHOUT_N_NO_U_F (vqnegq, VQNEGQ) FUNCTION_WITH_M_N_NO_F (vqrshlq, VQRSHLQ) FUNCTION_WITH_M_N_NO_U_F (vqrdmulhq, VQRDMULHQ) FUNCTION_WITH_M_N_R (vqshlq, VQSHLQ) diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def index 33c95c02396..7a8f5ac78e4 100644 --- a/gcc/config/arm/arm-mve-builtins-base.def +++ b/gcc/config/arm/arm-mve-builtins-base.def @@ -19,8 +19,11 @@ #define REQUIRES_FLOAT false DEF_MVE_FUNCTION (vabdq, binary, all_integer, mx_or_none) +DEF_MVE_FUNCTION (vabsq, unary, all_signed, mx_or_none) DEF_MVE_FUNCTION (vaddq, binary_opt_n, all_integer, mx_or_none) DEF_MVE_FUNCTION (vandq, binary, all_integer, mx_or_none) +DEF_MVE_FUNCTION (vclsq, unary, all_signed, mx_or_none) +DEF_MVE_FUNCTION (vclzq, unary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vcreateq, create, all_integer_with_64, none) DEF_MVE_FUNCTION (veorq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none) @@ -29,9 +32,12 @@ DEF_MVE_FUNCTION (vmaxq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vminq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vmulhq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_integer, mx_or_none) 
+DEF_MVE_FUNCTION (vnegq, unary, all_signed, mx_or_none) DEF_MVE_FUNCTION (vorrq, binary_orrq, all_integer, mx_or_none) +DEF_MVE_FUNCTION (vqabsq, unary, all_signed, m_or_none) DEF_MVE_FUNCTION (vqaddq, binary_opt_n, all_integer, m_or_none) DEF_MVE_FUNCTION (vqdmulhq, binary_opt_n, all_signed, m_or_none) +DEF_MVE_FUNCTION (vqnegq, unary, all_signed, m_or_none) DEF_MVE_FUNCTION (vqrdmulhq, binary_opt_n, all_signed, m_or_none) DEF_MVE_FUNCTION (vqrshlq, binary_round_lshift, all_integer, m_or_none) DEF_MVE_FUNCTION (vqrshrnbq, binary_rshift_narrow, integer_16_32, m_or_none) @@ -63,11 +69,13 @@ DEF_MVE_FUNCTION (vuninitializedq, inherent, all_integer_with_64, none) #define REQUIRES_FLOAT true DEF_MVE_FUNCTION (vabdq, binary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vabsq, unary, all_float, mx_or_none) DEF_MVE_FUNCTION (vaddq, binary_opt_n, all_float, mx_or_none) DEF_MVE_FUNCTION (vandq, binary, all_float, mx_or_none) DEF_MVE_FUNCTION (vcreateq, create, all_float, none) DEF_MVE_FUNCTION (veorq, binary, all_float, mx_or_none) DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_float, mx_or_none) +DEF_MVE_FUNCTION (vnegq, unary, all_float, mx_or_none) DEF_MVE_FUNCTION (vorrq, binary_orrq, all_float, mx_or_none) DEF_MVE_FUNCTION (vreinterpretq, unary_convert, reinterpret_float, none) DEF_MVE_FUNCTION (vsubq, binary_opt_n, all_float, mx_or_none) diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h index 2a230f5f34d..8425a84b9ad 100644 --- a/gcc/config/arm/arm-mve-builtins-base.h +++ b/gcc/config/arm/arm-mve-builtins-base.h @@ -24,8 +24,11 @@ namespace arm_mve { namespace functions { extern const function_base *const vabdq; +extern const function_base *const vabsq; extern const function_base *const vaddq; extern const function_base *const vandq; +extern const function_base *const vclsq; +extern const function_base *const vclzq; extern const function_base *const vcreateq; extern const function_base *const veorq; extern const function_base *const vhaddq; @@ -34,9 +37,12 @@ extern const function_base *const vmaxq; extern const function_base *const vminq; extern const function_base *const vmulhq; extern const function_base *const vmulq; +extern const function_base *const vnegq; extern const function_base *const vorrq; +extern const function_base *const vqabsq; extern const function_base *const vqaddq; extern const function_base *const vqdmulhq; +extern const function_base *const vqnegq; extern const function_base *const vqrdmulhq; extern const function_base *const vqrshlq; extern const function_base *const vqrshrnbq; diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h index 89de7e0e46b..8101515497b 100644 --- a/gcc/config/arm/arm_mve.h +++ b/gcc/config/arm/arm_mve.h @@ -43,10 +43,6 @@ #ifndef __ARM_MVE_PRESERVE_USER_NAMESPACE #define vst4q(__addr, __value) __arm_vst4q(__addr, __value) #define vdupq_n(__a) __arm_vdupq_n(__a) -#define vabsq(__a) __arm_vabsq(__a) -#define vclsq(__a) __arm_vclsq(__a) -#define vclzq(__a) __arm_vclzq(__a) -#define vnegq(__a) __arm_vnegq(__a) #define vaddlvq(__a) __arm_vaddlvq(__a) #define vaddvq(__a) __arm_vaddvq(__a) #define vmovlbq(__a) __arm_vmovlbq(__a) @@ -55,8 +51,6 @@ #define vrev16q(__a) __arm_vrev16q(__a) #define vrev32q(__a) __arm_vrev32q(__a) #define vrev64q(__a) __arm_vrev64q(__a) -#define vqabsq(__a) __arm_vqabsq(__a) -#define vqnegq(__a) __arm_vqnegq(__a) #define vaddlvq_p(__a, __p) __arm_vaddlvq_p(__a, __p) #define vcmpneq(__a, __b) __arm_vcmpneq(__a, __b) #define vornq(__a, __b) __arm_vornq(__a, __b) @@ -132,7 
+126,6 @@ #define vcmpeqq_m(__a, __b, __p) __arm_vcmpeqq_m(__a, __b, __p) #define vcmpcsq_m(__a, __b, __p) __arm_vcmpcsq_m(__a, __b, __p) #define vcmpcsq_m_n(__a, __b, __p) __arm_vcmpcsq_m_n(__a, __b, __p) -#define vclzq_m(__inactive, __a, __p) __arm_vclzq_m(__inactive, __a, __p) #define vaddvaq_p(__a, __b, __p) __arm_vaddvaq_p(__a, __b, __p) #define vsriq(__a, __b, __imm) __arm_vsriq(__a, __b, __imm) #define vsliq(__a, __b, __imm) __arm_vsliq(__a, __b, __imm) @@ -144,14 +137,9 @@ #define vcmpleq_m(__a, __b, __p) __arm_vcmpleq_m(__a, __b, __p) #define vcmpgtq_m(__a, __b, __p) __arm_vcmpgtq_m(__a, __b, __p) #define vcmpgeq_m(__a, __b, __p) __arm_vcmpgeq_m(__a, __b, __p) -#define vqnegq_m(__inactive, __a, __p) __arm_vqnegq_m(__inactive, __a, __p) -#define vqabsq_m(__inactive, __a, __p) __arm_vqabsq_m(__inactive, __a, __p) -#define vnegq_m(__inactive, __a, __p) __arm_vnegq_m(__inactive, __a, __p) #define vmlsdavxq_p(__a, __b, __p) __arm_vmlsdavxq_p(__a, __b, __p) #define vmlsdavq_p(__a, __b, __p) __arm_vmlsdavq_p(__a, __b, __p) #define vmladavxq_p(__a, __b, __p) __arm_vmladavxq_p(__a, __b, __p) -#define vclsq_m(__inactive, __a, __p) __arm_vclsq_m(__inactive, __a, __p) -#define vabsq_m(__inactive, __a, __p) __arm_vabsq_m(__inactive, __a, __p) #define vqrdmlsdhxq(__inactive, __a, __b) __arm_vqrdmlsdhxq(__inactive, __a, __b) #define vqrdmlsdhq(__inactive, __a, __b) __arm_vqrdmlsdhq(__inactive, __a, __b) #define vqrdmladhxq(__inactive, __a, __b) __arm_vqrdmladhxq(__inactive, __a, __b) @@ -307,10 +295,6 @@ #define viwdupq_x_u8(__a, __b, __imm, __p) __arm_viwdupq_x_u8(__a, __b, __imm, __p) #define viwdupq_x_u16(__a, __b, __imm, __p) __arm_viwdupq_x_u16(__a, __b, __imm, __p) #define viwdupq_x_u32(__a, __b, __imm, __p) __arm_viwdupq_x_u32(__a, __b, __imm, __p) -#define vabsq_x(__a, __p) __arm_vabsq_x(__a, __p) -#define vclsq_x(__a, __p) __arm_vclsq_x(__a, __p) -#define vclzq_x(__a, __p) __arm_vclzq_x(__a, __p) -#define vnegq_x(__a, __p) __arm_vnegq_x(__a, __p) #define vmullbq_poly_x(__a, __b, __p) __arm_vmullbq_poly_x(__a, __b, __p) #define vmullbq_int_x(__a, __b, __p) __arm_vmullbq_int_x(__a, __b, __p) #define vmulltq_poly_x(__a, __b, __p) __arm_vmulltq_poly_x(__a, __b, __p) @@ -446,12 +430,8 @@ #define vrndaq_f32(__a) __arm_vrndaq_f32(__a) #define vrev64q_f16(__a) __arm_vrev64q_f16(__a) #define vrev64q_f32(__a) __arm_vrev64q_f32(__a) -#define vnegq_f16(__a) __arm_vnegq_f16(__a) -#define vnegq_f32(__a) __arm_vnegq_f32(__a) #define vdupq_n_f16(__a) __arm_vdupq_n_f16(__a) #define vdupq_n_f32(__a) __arm_vdupq_n_f32(__a) -#define vabsq_f16(__a) __arm_vabsq_f16(__a) -#define vabsq_f32(__a) __arm_vabsq_f32(__a) #define vrev32q_f16(__a) __arm_vrev32q_f16(__a) #define vcvttq_f32_f16(__a) __arm_vcvttq_f32_f16(__a) #define vcvtbq_f32_f16(__a) __arm_vcvtbq_f32_f16(__a) @@ -462,18 +442,6 @@ #define vdupq_n_s8(__a) __arm_vdupq_n_s8(__a) #define vdupq_n_s16(__a) __arm_vdupq_n_s16(__a) #define vdupq_n_s32(__a) __arm_vdupq_n_s32(__a) -#define vabsq_s8(__a) __arm_vabsq_s8(__a) -#define vabsq_s16(__a) __arm_vabsq_s16(__a) -#define vabsq_s32(__a) __arm_vabsq_s32(__a) -#define vclsq_s8(__a) __arm_vclsq_s8(__a) -#define vclsq_s16(__a) __arm_vclsq_s16(__a) -#define vclsq_s32(__a) __arm_vclsq_s32(__a) -#define vclzq_s8(__a) __arm_vclzq_s8(__a) -#define vclzq_s16(__a) __arm_vclzq_s16(__a) -#define vclzq_s32(__a) __arm_vclzq_s32(__a) -#define vnegq_s8(__a) __arm_vnegq_s8(__a) -#define vnegq_s16(__a) __arm_vnegq_s16(__a) -#define vnegq_s32(__a) __arm_vnegq_s32(__a) #define vaddlvq_s32(__a) __arm_vaddlvq_s32(__a) #define 
vaddvq_s8(__a) __arm_vaddvq_s8(__a) #define vaddvq_s16(__a) __arm_vaddvq_s16(__a) @@ -493,12 +461,6 @@ #define vrev64q_s8(__a) __arm_vrev64q_s8(__a) #define vrev64q_s16(__a) __arm_vrev64q_s16(__a) #define vrev64q_s32(__a) __arm_vrev64q_s32(__a) -#define vqabsq_s8(__a) __arm_vqabsq_s8(__a) -#define vqabsq_s16(__a) __arm_vqabsq_s16(__a) -#define vqabsq_s32(__a) __arm_vqabsq_s32(__a) -#define vqnegq_s8(__a) __arm_vqnegq_s8(__a) -#define vqnegq_s16(__a) __arm_vqnegq_s16(__a) -#define vqnegq_s32(__a) __arm_vqnegq_s32(__a) #define vcvtaq_s16_f16(__a) __arm_vcvtaq_s16_f16(__a) #define vcvtaq_s32_f32(__a) __arm_vcvtaq_s32_f32(__a) #define vcvtnq_s16_f16(__a) __arm_vcvtnq_s16_f16(__a) @@ -518,9 +480,6 @@ #define vdupq_n_u8(__a) __arm_vdupq_n_u8(__a) #define vdupq_n_u16(__a) __arm_vdupq_n_u16(__a) #define vdupq_n_u32(__a) __arm_vdupq_n_u32(__a) -#define vclzq_u8(__a) __arm_vclzq_u8(__a) -#define vclzq_u16(__a) __arm_vclzq_u16(__a) -#define vclzq_u32(__a) __arm_vclzq_u32(__a) #define vaddvq_u8(__a) __arm_vaddvq_u8(__a) #define vaddvq_u16(__a) __arm_vaddvq_u16(__a) #define vaddvq_u32(__a) __arm_vaddvq_u32(__a) @@ -893,7 +852,6 @@ #define vcmpeqq_m_n_u8(__a, __b, __p) __arm_vcmpeqq_m_n_u8(__a, __b, __p) #define vcmpcsq_m_u8(__a, __b, __p) __arm_vcmpcsq_m_u8(__a, __b, __p) #define vcmpcsq_m_n_u8(__a, __b, __p) __arm_vcmpcsq_m_n_u8(__a, __b, __p) -#define vclzq_m_u8(__inactive, __a, __p) __arm_vclzq_m_u8(__inactive, __a, __p) #define vaddvaq_p_u8(__a, __b, __p) __arm_vaddvaq_p_u8(__a, __b, __p) #define vsriq_n_u8(__a, __b, __imm) __arm_vsriq_n_u8(__a, __b, __imm) #define vsliq_n_u8(__a, __b, __imm) __arm_vsliq_n_u8(__a, __b, __imm) @@ -914,9 +872,6 @@ #define vcmpeqq_m_s8(__a, __b, __p) __arm_vcmpeqq_m_s8(__a, __b, __p) #define vcmpeqq_m_n_s8(__a, __b, __p) __arm_vcmpeqq_m_n_s8(__a, __b, __p) #define vrev64q_m_s8(__inactive, __a, __p) __arm_vrev64q_m_s8(__inactive, __a, __p) -#define vqnegq_m_s8(__inactive, __a, __p) __arm_vqnegq_m_s8(__inactive, __a, __p) -#define vqabsq_m_s8(__inactive, __a, __p) __arm_vqabsq_m_s8(__inactive, __a, __p) -#define vnegq_m_s8(__inactive, __a, __p) __arm_vnegq_m_s8(__inactive, __a, __p) #define vmvnq_m_s8(__inactive, __a, __p) __arm_vmvnq_m_s8(__inactive, __a, __p) #define vmlsdavxq_p_s8(__a, __b, __p) __arm_vmlsdavxq_p_s8(__a, __b, __p) #define vmlsdavq_p_s8(__a, __b, __p) __arm_vmlsdavq_p_s8(__a, __b, __p) @@ -925,10 +880,7 @@ #define vminvq_p_s8(__a, __b, __p) __arm_vminvq_p_s8(__a, __b, __p) #define vmaxvq_p_s8(__a, __b, __p) __arm_vmaxvq_p_s8(__a, __b, __p) #define vdupq_m_n_s8(__inactive, __a, __p) __arm_vdupq_m_n_s8(__inactive, __a, __p) -#define vclzq_m_s8(__inactive, __a, __p) __arm_vclzq_m_s8(__inactive, __a, __p) -#define vclsq_m_s8(__inactive, __a, __p) __arm_vclsq_m_s8(__inactive, __a, __p) #define vaddvaq_p_s8(__a, __b, __p) __arm_vaddvaq_p_s8(__a, __b, __p) -#define vabsq_m_s8(__inactive, __a, __p) __arm_vabsq_m_s8(__inactive, __a, __p) #define vqrdmlsdhxq_s8(__inactive, __a, __b) __arm_vqrdmlsdhxq_s8(__inactive, __a, __b) #define vqrdmlsdhq_s8(__inactive, __a, __b) __arm_vqrdmlsdhq_s8(__inactive, __a, __b) #define vqrdmlashq_n_s8(__a, __b, __c) __arm_vqrdmlashq_n_s8(__a, __b, __c) @@ -968,7 +920,6 @@ #define vcmpeqq_m_n_u16(__a, __b, __p) __arm_vcmpeqq_m_n_u16(__a, __b, __p) #define vcmpcsq_m_u16(__a, __b, __p) __arm_vcmpcsq_m_u16(__a, __b, __p) #define vcmpcsq_m_n_u16(__a, __b, __p) __arm_vcmpcsq_m_n_u16(__a, __b, __p) -#define vclzq_m_u16(__inactive, __a, __p) __arm_vclzq_m_u16(__inactive, __a, __p) #define vaddvaq_p_u16(__a, __b, __p) 
__arm_vaddvaq_p_u16(__a, __b, __p) #define vsriq_n_u16(__a, __b, __imm) __arm_vsriq_n_u16(__a, __b, __imm) #define vsliq_n_u16(__a, __b, __imm) __arm_vsliq_n_u16(__a, __b, __imm) @@ -989,9 +940,6 @@ #define vcmpeqq_m_s16(__a, __b, __p) __arm_vcmpeqq_m_s16(__a, __b, __p) #define vcmpeqq_m_n_s16(__a, __b, __p) __arm_vcmpeqq_m_n_s16(__a, __b, __p) #define vrev64q_m_s16(__inactive, __a, __p) __arm_vrev64q_m_s16(__inactive, __a, __p) -#define vqnegq_m_s16(__inactive, __a, __p) __arm_vqnegq_m_s16(__inactive, __a, __p) -#define vqabsq_m_s16(__inactive, __a, __p) __arm_vqabsq_m_s16(__inactive, __a, __p) -#define vnegq_m_s16(__inactive, __a, __p) __arm_vnegq_m_s16(__inactive, __a, __p) #define vmvnq_m_s16(__inactive, __a, __p) __arm_vmvnq_m_s16(__inactive, __a, __p) #define vmlsdavxq_p_s16(__a, __b, __p) __arm_vmlsdavxq_p_s16(__a, __b, __p) #define vmlsdavq_p_s16(__a, __b, __p) __arm_vmlsdavq_p_s16(__a, __b, __p) @@ -1000,10 +948,7 @@ #define vminvq_p_s16(__a, __b, __p) __arm_vminvq_p_s16(__a, __b, __p) #define vmaxvq_p_s16(__a, __b, __p) __arm_vmaxvq_p_s16(__a, __b, __p) #define vdupq_m_n_s16(__inactive, __a, __p) __arm_vdupq_m_n_s16(__inactive, __a, __p) -#define vclzq_m_s16(__inactive, __a, __p) __arm_vclzq_m_s16(__inactive, __a, __p) -#define vclsq_m_s16(__inactive, __a, __p) __arm_vclsq_m_s16(__inactive, __a, __p) #define vaddvaq_p_s16(__a, __b, __p) __arm_vaddvaq_p_s16(__a, __b, __p) -#define vabsq_m_s16(__inactive, __a, __p) __arm_vabsq_m_s16(__inactive, __a, __p) #define vqrdmlsdhxq_s16(__inactive, __a, __b) __arm_vqrdmlsdhxq_s16(__inactive, __a, __b) #define vqrdmlsdhq_s16(__inactive, __a, __b) __arm_vqrdmlsdhq_s16(__inactive, __a, __b) #define vqrdmlashq_n_s16(__a, __b, __c) __arm_vqrdmlashq_n_s16(__a, __b, __c) @@ -1043,7 +988,6 @@ #define vcmpeqq_m_n_u32(__a, __b, __p) __arm_vcmpeqq_m_n_u32(__a, __b, __p) #define vcmpcsq_m_u32(__a, __b, __p) __arm_vcmpcsq_m_u32(__a, __b, __p) #define vcmpcsq_m_n_u32(__a, __b, __p) __arm_vcmpcsq_m_n_u32(__a, __b, __p) -#define vclzq_m_u32(__inactive, __a, __p) __arm_vclzq_m_u32(__inactive, __a, __p) #define vaddvaq_p_u32(__a, __b, __p) __arm_vaddvaq_p_u32(__a, __b, __p) #define vsriq_n_u32(__a, __b, __imm) __arm_vsriq_n_u32(__a, __b, __imm) #define vsliq_n_u32(__a, __b, __imm) __arm_vsliq_n_u32(__a, __b, __imm) @@ -1064,9 +1008,6 @@ #define vcmpeqq_m_s32(__a, __b, __p) __arm_vcmpeqq_m_s32(__a, __b, __p) #define vcmpeqq_m_n_s32(__a, __b, __p) __arm_vcmpeqq_m_n_s32(__a, __b, __p) #define vrev64q_m_s32(__inactive, __a, __p) __arm_vrev64q_m_s32(__inactive, __a, __p) -#define vqnegq_m_s32(__inactive, __a, __p) __arm_vqnegq_m_s32(__inactive, __a, __p) -#define vqabsq_m_s32(__inactive, __a, __p) __arm_vqabsq_m_s32(__inactive, __a, __p) -#define vnegq_m_s32(__inactive, __a, __p) __arm_vnegq_m_s32(__inactive, __a, __p) #define vmvnq_m_s32(__inactive, __a, __p) __arm_vmvnq_m_s32(__inactive, __a, __p) #define vmlsdavxq_p_s32(__a, __b, __p) __arm_vmlsdavxq_p_s32(__a, __b, __p) #define vmlsdavq_p_s32(__a, __b, __p) __arm_vmlsdavq_p_s32(__a, __b, __p) @@ -1075,10 +1016,7 @@ #define vminvq_p_s32(__a, __b, __p) __arm_vminvq_p_s32(__a, __b, __p) #define vmaxvq_p_s32(__a, __b, __p) __arm_vmaxvq_p_s32(__a, __b, __p) #define vdupq_m_n_s32(__inactive, __a, __p) __arm_vdupq_m_n_s32(__inactive, __a, __p) -#define vclzq_m_s32(__inactive, __a, __p) __arm_vclzq_m_s32(__inactive, __a, __p) -#define vclsq_m_s32(__inactive, __a, __p) __arm_vclsq_m_s32(__inactive, __a, __p) #define vaddvaq_p_s32(__a, __b, __p) __arm_vaddvaq_p_s32(__a, __b, __p) -#define vabsq_m_s32(__inactive, __a, 
__p) __arm_vabsq_m_s32(__inactive, __a, __p) #define vqrdmlsdhxq_s32(__inactive, __a, __b) __arm_vqrdmlsdhxq_s32(__inactive, __a, __b) #define vqrdmlsdhq_s32(__inactive, __a, __b) __arm_vqrdmlsdhq_s32(__inactive, __a, __b) #define vqrdmlashq_n_s32(__a, __b, __c) __arm_vqrdmlashq_n_s32(__a, __b, __c) @@ -1131,7 +1069,6 @@ #define vmlaldavaxq_s16(__a, __b, __c) __arm_vmlaldavaxq_s16(__a, __b, __c) #define vmlsldavaq_s16(__a, __b, __c) __arm_vmlsldavaq_s16(__a, __b, __c) #define vmlsldavaxq_s16(__a, __b, __c) __arm_vmlsldavaxq_s16(__a, __b, __c) -#define vabsq_m_f16(__inactive, __a, __p) __arm_vabsq_m_f16(__inactive, __a, __p) #define vcvtmq_m_s16_f16(__inactive, __a, __p) __arm_vcvtmq_m_s16_f16(__inactive, __a, __p) #define vcvtnq_m_s16_f16(__inactive, __a, __p) __arm_vcvtnq_m_s16_f16(__inactive, __a, __p) #define vcvtpq_m_s16_f16(__inactive, __a, __p) __arm_vcvtpq_m_s16_f16(__inactive, __a, __p) @@ -1151,7 +1088,6 @@ #define vmovltq_m_s8(__inactive, __a, __p) __arm_vmovltq_m_s8(__inactive, __a, __p) #define vmovnbq_m_s16(__a, __b, __p) __arm_vmovnbq_m_s16(__a, __b, __p) #define vmovntq_m_s16(__a, __b, __p) __arm_vmovntq_m_s16(__a, __b, __p) -#define vnegq_m_f16(__inactive, __a, __p) __arm_vnegq_m_f16(__inactive, __a, __p) #define vpselq_f16(__a, __b, __p) __arm_vpselq_f16(__a, __b, __p) #define vqmovnbq_m_s16(__a, __b, __p) __arm_vqmovnbq_m_s16(__a, __b, __p) #define vqmovntq_m_s16(__a, __b, __p) __arm_vqmovntq_m_s16(__a, __b, __p) @@ -1203,7 +1139,6 @@ #define vmlaldavaxq_s32(__a, __b, __c) __arm_vmlaldavaxq_s32(__a, __b, __c) #define vmlsldavaq_s32(__a, __b, __c) __arm_vmlsldavaq_s32(__a, __b, __c) #define vmlsldavaxq_s32(__a, __b, __c) __arm_vmlsldavaxq_s32(__a, __b, __c) -#define vabsq_m_f32(__inactive, __a, __p) __arm_vabsq_m_f32(__inactive, __a, __p) #define vcvtmq_m_s32_f32(__inactive, __a, __p) __arm_vcvtmq_m_s32_f32(__inactive, __a, __p) #define vcvtnq_m_s32_f32(__inactive, __a, __p) __arm_vcvtnq_m_s32_f32(__inactive, __a, __p) #define vcvtpq_m_s32_f32(__inactive, __a, __p) __arm_vcvtpq_m_s32_f32(__inactive, __a, __p) @@ -1223,7 +1158,6 @@ #define vmovltq_m_s16(__inactive, __a, __p) __arm_vmovltq_m_s16(__inactive, __a, __p) #define vmovnbq_m_s32(__a, __b, __p) __arm_vmovnbq_m_s32(__a, __b, __p) #define vmovntq_m_s32(__a, __b, __p) __arm_vmovntq_m_s32(__a, __b, __p) -#define vnegq_m_f32(__inactive, __a, __p) __arm_vnegq_m_f32(__inactive, __a, __p) #define vpselq_f32(__a, __b, __p) __arm_vpselq_f32(__a, __b, __p) #define vqmovnbq_m_s32(__a, __b, __p) __arm_vqmovnbq_m_s32(__a, __b, __p) #define vqmovntq_m_s32(__a, __b, __p) __arm_vqmovntq_m_s32(__a, __b, __p) @@ -1779,21 +1713,6 @@ #define vdupq_x_n_u8(__a, __p) __arm_vdupq_x_n_u8(__a, __p) #define vdupq_x_n_u16(__a, __p) __arm_vdupq_x_n_u16(__a, __p) #define vdupq_x_n_u32(__a, __p) __arm_vdupq_x_n_u32(__a, __p) -#define vabsq_x_s8(__a, __p) __arm_vabsq_x_s8(__a, __p) -#define vabsq_x_s16(__a, __p) __arm_vabsq_x_s16(__a, __p) -#define vabsq_x_s32(__a, __p) __arm_vabsq_x_s32(__a, __p) -#define vclsq_x_s8(__a, __p) __arm_vclsq_x_s8(__a, __p) -#define vclsq_x_s16(__a, __p) __arm_vclsq_x_s16(__a, __p) -#define vclsq_x_s32(__a, __p) __arm_vclsq_x_s32(__a, __p) -#define vclzq_x_s8(__a, __p) __arm_vclzq_x_s8(__a, __p) -#define vclzq_x_s16(__a, __p) __arm_vclzq_x_s16(__a, __p) -#define vclzq_x_s32(__a, __p) __arm_vclzq_x_s32(__a, __p) -#define vclzq_x_u8(__a, __p) __arm_vclzq_x_u8(__a, __p) -#define vclzq_x_u16(__a, __p) __arm_vclzq_x_u16(__a, __p) -#define vclzq_x_u32(__a, __p) __arm_vclzq_x_u32(__a, __p) -#define vnegq_x_s8(__a, __p) 
__arm_vnegq_x_s8(__a, __p) -#define vnegq_x_s16(__a, __p) __arm_vnegq_x_s16(__a, __p) -#define vnegq_x_s32(__a, __p) __arm_vnegq_x_s32(__a, __p) #define vmullbq_poly_x_p8(__a, __b, __p) __arm_vmullbq_poly_x_p8(__a, __b, __p) #define vmullbq_poly_x_p16(__a, __b, __p) __arm_vmullbq_poly_x_p16(__a, __b, __p) #define vmullbq_int_x_s8(__a, __b, __p) __arm_vmullbq_int_x_s8(__a, __b, __p) @@ -1890,10 +1809,6 @@ #define vminnmq_x_f32(__a, __b, __p) __arm_vminnmq_x_f32(__a, __b, __p) #define vmaxnmq_x_f16(__a, __b, __p) __arm_vmaxnmq_x_f16(__a, __b, __p) #define vmaxnmq_x_f32(__a, __b, __p) __arm_vmaxnmq_x_f32(__a, __b, __p) -#define vabsq_x_f16(__a, __p) __arm_vabsq_x_f16(__a, __p) -#define vabsq_x_f32(__a, __p) __arm_vabsq_x_f32(__a, __p) -#define vnegq_x_f16(__a, __p) __arm_vnegq_x_f16(__a, __p) -#define vnegq_x_f32(__a, __p) __arm_vnegq_x_f32(__a, __p) #define vcaddq_rot90_x_f16(__a, __b, __p) __arm_vcaddq_rot90_x_f16(__a, __b, __p) #define vcaddq_rot90_x_f32(__a, __b, __p) __arm_vcaddq_rot90_x_f32(__a, __b, __p) #define vcaddq_rot270_x_f16(__a, __b, __p) __arm_vcaddq_rot270_x_f16(__a, __b, __p) @@ -2148,90 +2063,6 @@ __arm_vdupq_n_s32 (int32_t __a) return __builtin_mve_vdupq_n_sv4si (__a); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_s8 (int8x16_t __a) -{ - return __builtin_mve_vabsq_sv16qi (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_s16 (int16x8_t __a) -{ - return __builtin_mve_vabsq_sv8hi (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_s32 (int32x4_t __a) -{ - return __builtin_mve_vabsq_sv4si (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_s8 (int8x16_t __a) -{ - return __builtin_mve_vclsq_sv16qi (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_s16 (int16x8_t __a) -{ - return __builtin_mve_vclsq_sv8hi (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_s32 (int32x4_t __a) -{ - return __builtin_mve_vclsq_sv4si (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_s8 (int8x16_t __a) -{ - return __builtin_mve_vclzq_sv16qi (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_s16 (int16x8_t __a) -{ - return __builtin_mve_vclzq_sv8hi (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_s32 (int32x4_t __a) -{ - return __builtin_mve_vclzq_sv4si (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_s8 (int8x16_t __a) -{ - return __builtin_mve_vnegq_sv16qi (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_s16 (int16x8_t __a) -{ - return __builtin_mve_vnegq_sv8hi (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_s32 (int32x4_t __a) -{ - return __builtin_mve_vnegq_sv4si (__a); -} - __extension__ extern __inline 
int64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddlvq_s32 (int32x4_t __a) @@ -2365,48 +2196,6 @@ __arm_vrev64q_s32 (int32x4_t __a) return __builtin_mve_vrev64q_sv4si (__a); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_s8 (int8x16_t __a) -{ - return __builtin_mve_vqabsq_sv16qi (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_s16 (int16x8_t __a) -{ - return __builtin_mve_vqabsq_sv8hi (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_s32 (int32x4_t __a) -{ - return __builtin_mve_vqabsq_sv4si (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_s8 (int8x16_t __a) -{ - return __builtin_mve_vqnegq_sv16qi (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_s16 (int16x8_t __a) -{ - return __builtin_mve_vqnegq_sv8hi (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_s32 (int32x4_t __a) -{ - return __builtin_mve_vqnegq_sv4si (__a); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q_u8 (uint8x16_t __a) @@ -2470,27 +2259,6 @@ __arm_vdupq_n_u32 (uint32_t __a) return __builtin_mve_vdupq_n_uv4si (__a); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_u8 (uint8x16_t __a) -{ - return __builtin_mve_vclzq_uv16qi (__a); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_u16 (uint16x8_t __a) -{ - return __builtin_mve_vclzq_uv8hi (__a); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_u32 (uint32x4_t __a) -{ - return __builtin_mve_vclzq_uv4si (__a); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvq_u8 (uint8x16_t __a) @@ -4497,13 +4265,6 @@ __arm_vcmpcsq_m_n_u8 (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) return __builtin_mve_vcmpcsq_m_n_uv16qi (__a, __b, __p); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_uv16qi (__inactive, __a, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_u8 (uint32_t __a, uint8x16_t __b, mve_pred16_t __p) @@ -4644,28 +4405,6 @@ __arm_vrev64q_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_sv16qi (__inactive, __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vqnegq_m_sv16qi (__inactive, __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return 
__builtin_mve_vqabsq_m_sv16qi (__inactive, __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_sv16qi (__inactive, __a, __p); -} - - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmvnq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) @@ -4722,20 +4461,6 @@ __arm_vdupq_m_n_s8 (int8x16_t __inactive, int8_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_sv16qi (__inactive, __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_sv16qi (__inactive, __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclsq_m_sv16qi (__inactive, __a, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_s8 (int32_t __a, int8x16_t __b, mve_pred16_t __p) @@ -4743,13 +4468,6 @@ __arm_vaddvaq_p_s8 (int32_t __a, int8x16_t __b, mve_pred16_t __p) return __builtin_mve_vaddvaq_p_sv16qi (__a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_sv16qi (__inactive, __a, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqrdmlsdhxq_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b) @@ -5023,13 +4741,6 @@ __arm_vcmpcsq_m_n_u16 (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) return __builtin_mve_vcmpcsq_m_n_uv8hi (__a, __b, __p); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_uv8hi (__inactive, __a, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_u16 (uint32_t __a, uint16x8_t __b, mve_pred16_t __p) @@ -5170,27 +4881,6 @@ __arm_vrev64q_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_sv8hi (__inactive, __a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vqnegq_m_sv8hi (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vqabsq_m_sv8hi (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_sv8hi (__inactive, __a, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
__arm_vmvnq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) @@ -5247,20 +4937,6 @@ __arm_vdupq_m_n_s16 (int16x8_t __inactive, int16_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_sv8hi (__inactive, __a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_sv8hi (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclsq_m_sv8hi (__inactive, __a, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_s16 (int32_t __a, int16x8_t __b, mve_pred16_t __p) @@ -5268,13 +4944,6 @@ __arm_vaddvaq_p_s16 (int32_t __a, int16x8_t __b, mve_pred16_t __p) return __builtin_mve_vaddvaq_p_sv8hi (__a, __b, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_sv8hi (__inactive, __a, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqrdmlsdhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) @@ -5548,13 +5217,6 @@ __arm_vcmpcsq_m_n_u32 (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) return __builtin_mve_vcmpcsq_m_n_uv4si (__a, __b, __p); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_uv4si (__inactive, __a, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_u32 (uint32_t __a, uint32x4_t __b, mve_pred16_t __p) @@ -5695,27 +5357,6 @@ __arm_vrev64q_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_sv4si (__inactive, __a, __p); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vqnegq_m_sv4si (__inactive, __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vqabsq_m_sv4si (__inactive, __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_sv4si (__inactive, __a, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmvnq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) @@ -5772,20 +5413,6 @@ __arm_vdupq_m_n_s32 (int32x4_t __inactive, int32_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_sv4si (__inactive, __a, __p); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return 
__builtin_mve_vclzq_m_sv4si (__inactive, __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclsq_m_sv4si (__inactive, __a, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_s32 (int32_t __a, int32x4_t __b, mve_pred16_t __p) @@ -5793,13 +5420,6 @@ __arm_vaddvaq_p_s32 (int32_t __a, int32x4_t __b, mve_pred16_t __p) return __builtin_mve_vaddvaq_p_sv4si (__a, __b, __p); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_sv4si (__inactive, __a, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqrdmlsdhxq_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b) @@ -9552,111 +9172,6 @@ __arm_vdupq_x_n_u32 (uint32_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_uv4si (__arm_vuninitializedq_u32 (), __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x_s8 (int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x_s16 (int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x_s32 (int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_x_s8 (int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclsq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_x_s16 (int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclsq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_x_s32 (int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclsq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x_s8 (int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x_s16 (int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x_s32 (int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, 
__gnu_inline__, __artificial__)) -__arm_vclzq_x_u8 (uint8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x_u16 (uint16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x_u32 (uint32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vclzq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x_s8 (int8x16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x_s16 (int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x_s32 (int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __p); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmullbq_poly_x_p8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) @@ -11105,20 +10620,6 @@ __arm_vrev64q_f32 (float32x4_t __a) return __builtin_mve_vrev64q_fv4sf (__a); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_f16 (float16x8_t __a) -{ - return __builtin_mve_vnegq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_f32 (float32x4_t __a) -{ - return __builtin_mve_vnegq_fv4sf (__a); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vdupq_n_f16 (float16_t __a) @@ -11133,20 +10634,6 @@ __arm_vdupq_n_f32 (float32_t __a) return __builtin_mve_vdupq_n_fv4sf (__a); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_f16 (float16x8_t __a) -{ - return __builtin_mve_vabsq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_f32 (float32x4_t __a) -{ - return __builtin_mve_vabsq_fv4sf (__a); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_f16 (float16x8_t __a) @@ -11974,13 +11461,6 @@ __arm_vfmsq_f16 (float16x8_t __a, float16x8_t __b, float16x8_t __c) return __builtin_mve_vfmsq_fv8hf (__a, __b, __c); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_fv8hf (__inactive, __a, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m_s16_f16 (int16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) @@ -12058,13 +11538,6 @@ __arm_vminnmvq_p_f16 (float16_t __a, 
float16x8_t __b, mve_pred16_t __p) return __builtin_mve_vminnmvq_p_fv8hf (__a, __b, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_fv8hf (__inactive, __a, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vpselq_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -12282,13 +11755,6 @@ __arm_vfmsq_f32 (float32x4_t __a, float32x4_t __b, float32x4_t __c) return __builtin_mve_vfmsq_fv4sf (__a, __b, __c); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_fv4sf (__inactive, __a, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m_s32_f32 (int32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) @@ -12366,13 +11832,6 @@ __arm_vminnmvq_p_f32 (float32_t __a, float32x4_t __b, mve_pred16_t __p) return __builtin_mve_vminnmvq_p_fv4sf (__a, __b, __p); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_fv4sf (__inactive, __a, __p); -} - __extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vpselq_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) @@ -13156,34 +12615,6 @@ __arm_vmaxnmq_x_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) return __builtin_mve_vmaxnmq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __b, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vabsq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vnegq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcaddq_rot90_x_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -13834,90 +13265,6 @@ __arm_vdupq_n (int32_t __a) return __arm_vdupq_n_s32 (__a); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq (int8x16_t __a) -{ - return __arm_vabsq_s8 (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq (int16x8_t __a) -{ - return __arm_vabsq_s16 (__a); -} - 
-__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq (int32x4_t __a) -{ - return __arm_vabsq_s32 (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq (int8x16_t __a) -{ - return __arm_vclsq_s8 (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq (int16x8_t __a) -{ - return __arm_vclsq_s16 (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq (int32x4_t __a) -{ - return __arm_vclsq_s32 (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq (int8x16_t __a) -{ - return __arm_vclzq_s8 (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq (int16x8_t __a) -{ - return __arm_vclzq_s16 (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq (int32x4_t __a) -{ - return __arm_vclzq_s32 (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq (int8x16_t __a) -{ - return __arm_vnegq_s8 (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq (int16x8_t __a) -{ - return __arm_vnegq_s16 (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq (int32x4_t __a) -{ - return __arm_vnegq_s32 (__a); -} - __extension__ extern __inline int64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddlvq (int32x4_t __a) @@ -14037,48 +13384,6 @@ __arm_vrev64q (int32x4_t __a) return __arm_vrev64q_s32 (__a); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq (int8x16_t __a) -{ - return __arm_vqabsq_s8 (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq (int16x8_t __a) -{ - return __arm_vqabsq_s16 (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq (int32x4_t __a) -{ - return __arm_vqabsq_s32 (__a); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq (int8x16_t __a) -{ - return __arm_vqnegq_s8 (__a); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq (int16x8_t __a) -{ - return __arm_vqnegq_s16 (__a); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq (int32x4_t __a) -{ - return __arm_vqnegq_s32 (__a); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q (uint8x16_t __a) @@ -14142,27 +13447,6 @@ __arm_vdupq_n (uint32_t __a) return __arm_vdupq_n_u32 (__a); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq (uint8x16_t __a) -{ - return __arm_vclzq_u8 (__a); -} - -__extension__ extern __inline uint16x8_t 
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq (uint16x8_t __a) -{ - return __arm_vclzq_u16 (__a); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq (uint32x4_t __a) -{ - return __arm_vclzq_u32 (__a); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvq (uint8x16_t __a) @@ -16074,13 +15358,6 @@ __arm_vcmpcsq_m (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) return __arm_vcmpcsq_m_n_u8 (__a, __b, __p); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m (uint8x16_t __inactive, uint8x16_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_m_u8 (__inactive, __a, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (uint32_t __a, uint8x16_t __b, mve_pred16_t __p) @@ -16221,27 +15498,6 @@ __arm_vrev64q_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) return __arm_vrev64q_m_s8 (__inactive, __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vqnegq_m_s8 (__inactive, __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vqabsq_m_s8 (__inactive, __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_m_s8 (__inactive, __a, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmvnq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) @@ -16298,20 +15554,6 @@ __arm_vdupq_m (int8x16_t __inactive, int8_t __a, mve_pred16_t __p) return __arm_vdupq_m_n_s8 (__inactive, __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_m_s8 (__inactive, __a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vclsq_m_s8 (__inactive, __a, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (int32_t __a, int8x16_t __b, mve_pred16_t __p) @@ -16319,13 +15561,6 @@ __arm_vaddvaq_p (int32_t __a, int8x16_t __b, mve_pred16_t __p) return __arm_vaddvaq_p_s8 (__a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_m_s8 (__inactive, __a, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqrdmlsdhxq (int8x16_t __inactive, int8x16_t __a, int8x16_t __b) @@ -16599,13 +15834,6 @@ __arm_vcmpcsq_m (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) return __arm_vcmpcsq_m_n_u16 (__a, __b, __p); } -__extension__ 
extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m (uint16x8_t __inactive, uint16x8_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_m_u16 (__inactive, __a, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (uint32_t __a, uint16x8_t __b, mve_pred16_t __p) @@ -16734,37 +15962,16 @@ __arm_vcmpeqq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_s16 (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrev64q_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrev64q_m_s16 (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vqnegq_m_s16 (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) +__arm_vcmpeqq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) { - return __arm_vqabsq_m_s16 (__inactive, __a, __p); + return __arm_vcmpeqq_m_n_s16 (__a, __b, __p); } __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) +__arm_vrev64q_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) { - return __arm_vnegq_m_s16 (__inactive, __a, __p); + return __arm_vrev64q_m_s16 (__inactive, __a, __p); } __extension__ extern __inline int16x8_t @@ -16823,20 +16030,6 @@ __arm_vdupq_m (int16x8_t __inactive, int16_t __a, mve_pred16_t __p) return __arm_vdupq_m_n_s16 (__inactive, __a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_m_s16 (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vclsq_m_s16 (__inactive, __a, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (int32_t __a, int16x8_t __b, mve_pred16_t __p) @@ -16844,13 +16037,6 @@ __arm_vaddvaq_p (int32_t __a, int16x8_t __b, mve_pred16_t __p) return __arm_vaddvaq_p_s16 (__a, __b, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_m_s16 (__inactive, __a, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqrdmlsdhxq (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) @@ -17124,13 +16310,6 @@ __arm_vcmpcsq_m (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) return __arm_vcmpcsq_m_n_u32 (__a, __b, __p); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vclzq_m (uint32x4_t __inactive, uint32x4_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_m_u32 (__inactive, __a, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (uint32_t __a, uint32x4_t __b, mve_pred16_t __p) @@ -17271,27 +16450,6 @@ __arm_vrev64q_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) return __arm_vrev64q_m_s32 (__inactive, __a, __p); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqnegq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vqnegq_m_s32 (__inactive, __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqabsq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vqabsq_m_s32 (__inactive, __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_m_s32 (__inactive, __a, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmvnq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) @@ -17348,20 +16506,6 @@ __arm_vdupq_m (int32x4_t __inactive, int32_t __a, mve_pred16_t __p) return __arm_vdupq_m_n_s32 (__inactive, __a, __p); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_m_s32 (__inactive, __a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vclsq_m_s32 (__inactive, __a, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (int32_t __a, int32x4_t __b, mve_pred16_t __p) @@ -17369,13 +16513,6 @@ __arm_vaddvaq_p (int32_t __a, int32x4_t __b, mve_pred16_t __p) return __arm_vaddvaq_p_s32 (__a, __b, __p); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_m_s32 (__inactive, __a, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqrdmlsdhxq (int32x4_t __inactive, int32x4_t __a, int32x4_t __b) @@ -20659,111 +19796,6 @@ __arm_viwdupq_x_u32 (uint32_t *__a, uint32_t __b, const int __imm, mve_pred16_t return __arm_viwdupq_x_wb_u32 (__a, __b, __imm, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x (int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_x_s8 (__a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x (int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_x_s16 (__a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x (int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_x_s32 (__a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_x (int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vclsq_x_s8 (__a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_x (int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vclsq_x_s16 (__a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclsq_x (int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vclsq_x_s32 (__a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x (int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_x_s8 (__a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x (int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_x_s16 (__a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x (int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_x_s32 (__a, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x (uint8x16_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_x_u8 (__a, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x (uint16x8_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_x_u16 (__a, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vclzq_x (uint32x4_t __a, mve_pred16_t __p) -{ - return __arm_vclzq_x_u32 (__a, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x (int8x16_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_x_s8 (__a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x (int16x8_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_x_s16 (__a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x (int32x4_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_x_s32 (__a, __p); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmullbq_poly_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) @@ -21956,20 +20988,6 @@ __arm_vrev64q (float32x4_t __a) return __arm_vrev64q_f32 (__a); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq (float16x8_t __a) -{ - return __arm_vnegq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq (float32x4_t __a) -{ - return __arm_vnegq_f32 (__a); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vdupq_n (float16_t __a) @@ -21984,20 +21002,6 @@ __arm_vdupq_n (float32_t __a) return __arm_vdupq_n_f32 (__a); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq (float16x8_t __a) -{ - return __arm_vabsq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, 
__gnu_inline__, __artificial__)) -__arm_vabsq (float32x4_t __a) -{ - return __arm_vabsq_f32 (__a); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q (float16x8_t __a) @@ -22642,13 +21646,6 @@ __arm_vfmsq (float16x8_t __a, float16x8_t __b, float16x8_t __c) return __arm_vfmsq_f16 (__a, __b, __c); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_m_f16 (__inactive, __a, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m (int16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) @@ -22726,13 +21723,6 @@ __arm_vminnmvq_p (float16_t __a, float16x8_t __b, mve_pred16_t __p) return __arm_vminnmvq_p_f16 (__a, __b, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_m_f16 (__inactive, __a, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vpselq (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -22950,13 +21940,6 @@ __arm_vfmsq (float32x4_t __a, float32x4_t __b, float32x4_t __c) return __arm_vfmsq_f32 (__a, __b, __c); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_m_f32 (__inactive, __a, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m (int32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) @@ -23034,13 +22017,6 @@ __arm_vminnmvq_p (float32_t __a, float32x4_t __b, mve_pred16_t __p) return __arm_vminnmvq_p_f32 (__a, __b, __p); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_m_f32 (__inactive, __a, __p); -} - __extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vpselq (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) @@ -23748,34 +22724,6 @@ __arm_vmaxnmq_x (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) return __arm_vmaxnmq_x_f32 (__a, __b, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabsq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vabsq_x_f32 (__a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vnegq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vnegq_x_f32 (__a, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, 
__gnu_inline__, __artificial__)) __arm_vcaddq_rot90_x (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -24477,27 +23425,11 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t]: __arm_vrev64q_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ int (*)[__ARM_mve_type_float32x4_t]: __arm_vrev64q_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) -#define __arm_vnegq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vnegq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vnegq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vnegq_s32 (__ARM_mve_coerce(__p0, int32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vnegq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vnegq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - #define __arm_vdupq_n(p0) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_float16x8_t]: __arm_vdupq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ int (*)[__ARM_mve_type_float32x4_t]: __arm_vdupq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) -#define __arm_vabsq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vabsq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vabsq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vabsq_s32 (__ARM_mve_coerce(__p0, int32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vabsq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vabsq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - #define __arm_vrev32q(p0) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vrev32q_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ @@ -24519,18 +23451,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int8x16_t]: __arm_vrev16q_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ int (*)[__ARM_mve_type_uint8x16_t]: __arm_vrev16q_u8 (__ARM_mve_coerce(__p0, uint8x16_t)));}) -#define __arm_vqabsq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vqabsq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vqabsq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vqabsq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - -#define __arm_vqnegq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vqnegq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vqnegq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vqnegq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - #define __arm_vmvnq(p0) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vmvnq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ @@ -24554,21 +23474,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint8x16_t]: __arm_vmovltq_u8 (__ARM_mve_coerce(__p0, uint8x16_t)), \ int (*)[__ARM_mve_type_uint16x8_t]: __arm_vmovltq_u16 (__ARM_mve_coerce(__p0, uint16x8_t)));}) -#define __arm_vclzq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int 
(*)[__ARM_mve_type_int8x16_t]: __arm_vclzq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vclzq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vclzq_s32 (__ARM_mve_coerce(__p0, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vclzq_u8 (__ARM_mve_coerce(__p0, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vclzq_u16 (__ARM_mve_coerce(__p0, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t]: __arm_vclzq_u32 (__ARM_mve_coerce(__p0, uint32x4_t)));}) - -#define __arm_vclsq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vclsq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vclsq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vclsq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - #define __arm_vcvtq(p0) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int16x8_t]: __arm_vcvtq_f16_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ @@ -24988,23 +23893,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshlcq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \ int (*)[__ARM_mve_type_uint32x4_t]: __arm_vshlcq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));}) -#define __arm_vclsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vclsq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vclsq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vclsq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - -#define __arm_vclzq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vclzq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vclzq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vclzq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vclzq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vclzq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vclzq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vmaxaq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -25125,13 +24013,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqrdmladhq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, 
int16x8_t), __ARM_mve_coerce(__p2, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqrdmladhq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t)));}) -#define __arm_vqnegq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vqnegq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqnegq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqnegq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vqdmlsdhxq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -25228,15 +24109,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcvtq_m_n_f16_u16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcvtq_m_n_f32_u32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) -#define __arm_vabsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabsq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabsq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabsq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vabsq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vabsq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vcmlaq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -25566,15 +24438,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovuntq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovuntq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) -#define __arm_vnegq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vnegq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vnegq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vnegq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), 
p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vnegq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vnegq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vcmpgeq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -26058,14 +24921,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint32x4_t]: __arm_vstrwq_scatter_base_wb_p_u32 (p0, p1, __ARM_mve_coerce(__p2, uint32x4_t), p3), \ int (*)[__ARM_mve_type_float32x4_t]: __arm_vstrwq_scatter_base_wb_p_f32 (p0, p1, __ARM_mve_coerce(__p2, float32x4_t), p3));}) -#define __arm_vabsq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vabsq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vabsq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vabsq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vabsq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vabsq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vbicq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ @@ -26157,14 +25012,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vminnmq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), __ARM_mve_coerce(__p2, float16x8_t), p3), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vminnmq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), __ARM_mve_coerce(__p2, float32x4_t), p3));}) -#define __arm_vnegq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vnegq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vnegq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vnegq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vnegq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vnegq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vornq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ @@ -26280,33 +25127,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16_t_ptr][__ARM_mve_type_uint16x8x4_t]: __arm_vst4q_u16 (__ARM_mve_coerce(p0, uint16_t *), __ARM_mve_coerce(__p1, uint16x8x4_t)), \ int (*)[__ARM_mve_type_uint32_t_ptr][__ARM_mve_type_uint32x4x4_t]: __arm_vst4q_u32 (__ARM_mve_coerce(p0, uint32_t *), __ARM_mve_coerce(__p1, uint32x4x4_t)));}) -#define __arm_vabsq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vabsq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vabsq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vabsq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - -#define __arm_vclsq(p0) ({ __typeof(p0) __p0 = (p0); \ - 
_Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vclsq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vclsq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vclsq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - -#define __arm_vclzq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vclzq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vclzq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vclzq_s32 (__ARM_mve_coerce(__p0, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vclzq_u8 (__ARM_mve_coerce(__p0, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vclzq_u16 (__ARM_mve_coerce(__p0, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t]: __arm_vclzq_u32 (__ARM_mve_coerce(__p0, uint32x4_t)));}) - -#define __arm_vnegq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vnegq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vnegq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vnegq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - #define __arm_vmovlbq(p0) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vmovlbq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ @@ -26351,18 +25171,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vrev64q_u16 (__ARM_mve_coerce(__p0, uint16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t]: __arm_vrev64q_u32 (__ARM_mve_coerce(__p0, uint32x4_t)));}) -#define __arm_vqabsq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vqabsq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vqabsq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vqabsq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - -#define __arm_vqnegq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vqnegq_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vqnegq_s16 (__ARM_mve_coerce(__p0, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vqnegq_s32 (__ARM_mve_coerce(__p0, int32x4_t)));}) - #define __arm_vcmpneq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -26768,13 +25576,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqrdmladhq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqrdmladhq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t)));}) -#define __arm_vqnegq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vqnegq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), 
p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqnegq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqnegq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vqdmlsdhxq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -26783,30 +25584,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqdmlsdhxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqdmlsdhxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t)));}) -#define __arm_vabsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabsq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabsq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabsq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - -#define __arm_vclsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vclsq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vclsq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vclsq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - -#define __arm_vclzq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vclzq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vclzq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vclzq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vclzq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vclzq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vclzq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vcmpgeq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -26903,13 +25680,6 
@@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vmlasq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce3(p2, int)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vmlasq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce3(p2, int)));}) -#define __arm_vnegq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vnegq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vnegq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vnegq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vpselq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -27349,12 +26119,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint32x4_t]: __arm_vuninitializedq_u32 (), \ int (*)[__ARM_mve_type_uint64x2_t]: __arm_vuninitializedq_u64 ());}) -#define __arm_vabsq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vabsq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vabsq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vabsq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vcaddq_rot270_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ @@ -27421,12 +26185,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmulltq_poly_x_p8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmulltq_poly_x_p16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3));}) -#define __arm_vnegq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vnegq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vnegq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vnegq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vornq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ @@ -27626,21 +26384,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));}) -#define __arm_vclsq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vclsq_x_s8 (__ARM_mve_coerce(__p1, 
int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vclsq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vclsq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), p2));}) - -#define __arm_vclzq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vclzq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vclzq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t]: __arm_vclzq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vclzq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vclzq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t]: __arm_vclzq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vadciq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -27868,13 +26611,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqdmlsdhxq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqdmlsdhxq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));}) -#define __arm_vqabsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vqabsq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqabsq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqabsq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vmvnq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ From patchwork Fri May 5 16:49:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 68849 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 02F78385417B for ; Fri, 5 May 2023 16:54:07 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 02F78385417B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1683305647; bh=Iri4k/f1AUDqqxF6raPeQvtwVTgy7Tus0XOISp9XzyU=; h=To:CC:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=cwQ0yGu/Ooul3ABjUZng85b3v76f/IqEgTJSIB6lX6kCWFie6X8XMGQnaMkj3puFY 5Tk3Up9l5uwohkE0Exh2BrnSfM8rEWCAQTHebeRtTUbscXfU9M4tzm6JV8+BdK14Wi DR1L8fWy/CQxP5E11/4756Q7urepWONOEhx9IjuM= X-Original-To: gcc-patches@gcc.gnu.org Delivered-To: gcc-patches@gcc.gnu.org Received: from EUR04-HE1-obe.outbound.protection.outlook.com 
(mail-he1eur04on2058.outbound.protection.outlook.com [40.107.7.58]) by sourceware.org (Postfix) with ESMTPS id 4C2323858C20 for ; Fri, 5 May 2023 16:49:39 +0000 (GMT) To: , , , CC: Christophe Lyon Subject: [PATCH 04/10] arm: [MVE intrinsics] rework vrndq vrndaq vrndmq vrndnq vrndpq vrndxq Date: Fri, 5 May 2023 18:49:00 +0200 Message-ID: <20230505164906.596219-4-christophe.lyon@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com> References: <20230505164906.596219-1-christophe.lyon@arm.com> MIME-Version: 1.0 X-BeenThere: gcc-patches@gcc.gnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Gcc-patches
mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Christophe Lyon via Gcc-patches From: Christophe Lyon Reply-To: Christophe Lyon Errors-To: gcc-patches-bounces+patchwork=sourceware.org@gcc.gnu.org Sender: "Gcc-patches" Implement vrndq, vrndaq, vrndmq, vrndnq, vrndpq, vrndxq using the new MVE builtins framework. 2022-09-08 Christophe Lyon gcc/ * config/arm/arm-mve-builtins-base.cc (FUNCTION_ONLY_F): New. (vrndaq, vrndmq, vrndnq, vrndpq, vrndq, vrndxq): New. * config/arm/arm-mve-builtins-base.def (vrndaq, vrndmq, vrndnq) (vrndpq, vrndq, vrndxq): New. * config/arm/arm-mve-builtins-base.h (vrndaq, vrndmq, vrndnq) (vrndpq, vrndq, vrndxq): New. * config/arm/arm_mve.h (vrndxq): Remove. (vrndq): Remove. (vrndpq): Remove. (vrndnq): Remove. (vrndmq): Remove. (vrndaq): Remove. (vrndaq_m): Remove. (vrndmq_m): Remove. (vrndnq_m): Remove. (vrndpq_m): Remove. (vrndq_m): Remove. (vrndxq_m): Remove. (vrndq_x): Remove. (vrndnq_x): Remove. (vrndmq_x): Remove. (vrndpq_x): Remove. (vrndaq_x): Remove. (vrndxq_x): Remove. (vrndxq_f16): Remove. (vrndxq_f32): Remove. (vrndq_f16): Remove. (vrndq_f32): Remove. (vrndpq_f16): Remove. (vrndpq_f32): Remove. (vrndnq_f16): Remove. (vrndnq_f32): Remove. (vrndmq_f16): Remove. (vrndmq_f32): Remove. (vrndaq_f16): Remove. (vrndaq_f32): Remove. (vrndaq_m_f16): Remove. (vrndmq_m_f16): Remove. (vrndnq_m_f16): Remove. (vrndpq_m_f16): Remove. (vrndq_m_f16): Remove. (vrndxq_m_f16): Remove. (vrndaq_m_f32): Remove. (vrndmq_m_f32): Remove. (vrndnq_m_f32): Remove. (vrndpq_m_f32): Remove. (vrndq_m_f32): Remove. (vrndxq_m_f32): Remove. (vrndq_x_f16): Remove. (vrndq_x_f32): Remove. (vrndnq_x_f16): Remove. (vrndnq_x_f32): Remove. (vrndmq_x_f16): Remove. (vrndmq_x_f32): Remove. (vrndpq_x_f16): Remove. (vrndpq_x_f32): Remove. (vrndaq_x_f16): Remove. (vrndaq_x_f32): Remove. (vrndxq_x_f16): Remove. (vrndxq_x_f32): Remove. (__arm_vrndxq_f16): Remove. (__arm_vrndxq_f32): Remove. (__arm_vrndq_f16): Remove. (__arm_vrndq_f32): Remove. (__arm_vrndpq_f16): Remove. (__arm_vrndpq_f32): Remove. (__arm_vrndnq_f16): Remove. (__arm_vrndnq_f32): Remove. (__arm_vrndmq_f16): Remove. (__arm_vrndmq_f32): Remove. (__arm_vrndaq_f16): Remove. (__arm_vrndaq_f32): Remove. (__arm_vrndaq_m_f16): Remove. (__arm_vrndmq_m_f16): Remove. (__arm_vrndnq_m_f16): Remove. (__arm_vrndpq_m_f16): Remove. (__arm_vrndq_m_f16): Remove. (__arm_vrndxq_m_f16): Remove. (__arm_vrndaq_m_f32): Remove. (__arm_vrndmq_m_f32): Remove. (__arm_vrndnq_m_f32): Remove. (__arm_vrndpq_m_f32): Remove. (__arm_vrndq_m_f32): Remove. (__arm_vrndxq_m_f32): Remove. (__arm_vrndq_x_f16): Remove. (__arm_vrndq_x_f32): Remove. (__arm_vrndnq_x_f16): Remove. (__arm_vrndnq_x_f32): Remove. (__arm_vrndmq_x_f16): Remove. (__arm_vrndmq_x_f32): Remove. (__arm_vrndpq_x_f16): Remove. (__arm_vrndpq_x_f32): Remove. (__arm_vrndaq_x_f16): Remove. (__arm_vrndaq_x_f32): Remove. (__arm_vrndxq_x_f16): Remove. (__arm_vrndxq_x_f32): Remove. (__arm_vrndxq): Remove. (__arm_vrndq): Remove. (__arm_vrndpq): Remove. (__arm_vrndnq): Remove. (__arm_vrndmq): Remove. (__arm_vrndaq): Remove. (__arm_vrndaq_m): Remove. (__arm_vrndmq_m): Remove. (__arm_vrndnq_m): Remove. (__arm_vrndpq_m): Remove. (__arm_vrndq_m): Remove. (__arm_vrndxq_m): Remove. (__arm_vrndq_x): Remove. (__arm_vrndnq_x): Remove. (__arm_vrndmq_x): Remove. (__arm_vrndpq_x): Remove. (__arm_vrndaq_x): Remove. (__arm_vrndxq_x): Remove. 
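
As a quick illustration of the user-facing contract this rework preserves, here is a minimal usage sketch (not part of the patch; the function names are made up for illustration, and it assumes an MVE floating-point target, e.g. -march=armv8.1-m.main+mve.fp):

#include <arm_mve.h>

/* The polymorphic rounding intrinsics keep the same source-level
   interface after the rework; only their implementation moves from
   hand-written arm_mve.h wrappers to the new builtins framework.  */

float32x4_t
round_down_predicated (float32x4_t a, mve_pred16_t p)
{
  /* Round each active lane towards minus infinity; lanes whose
     predicate bit is clear have an unspecified value (_x variant).  */
  return vrndmq_x (a, p);
}

float16x8_t
round_to_nearest_even (float16x8_t a)
{
  /* Round to nearest, ties to even.  */
  return vrndnq (a);
}

The rework is intended to be behavior-preserving at the source level; only the way the intrinsics are declared and expanded changes.
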
--- gcc/config/arm/arm-mve-builtins-base.cc | 15 + gcc/config/arm/arm-mve-builtins-base.def | 6 + gcc/config/arm/arm-mve-builtins-base.h | 6 + gcc/config/arm/arm_mve.h | 655 ----------------------- 4 files changed, 27 insertions(+), 655 deletions(-) diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc index 627553f1784..4cf4464a48e 100644 --- a/gcc/config/arm/arm-mve-builtins-base.cc +++ b/gcc/config/arm/arm-mve-builtins-base.cc @@ -203,6 +203,15 @@ namespace arm_mve { UNSPEC##_M_S, -1, -1, \ -1, -1, -1)) + /* Helper for builtins with only unspec codes, _m predicated + overrides, only floating-point. */ +#define FUNCTION_ONLY_F(NAME, UNSPEC) FUNCTION \ + (NAME, unspec_mve_function_exact_insn, \ + (-1, -1, UNSPEC##_F, \ + -1, -1, -1, \ + -1, -1, UNSPEC##_M_F, \ + -1, -1, -1)) + FUNCTION_WITHOUT_N (vabdq, VABDQ) FUNCTION (vabsq, unspec_based_mve_function_exact_insn, (ABS, ABS, ABS, -1, -1, -1, VABSQ_M_S, -1, VABSQ_M_F, -1, -1, -1)) FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ) @@ -238,6 +247,12 @@ FUNCTION_WITH_M_N_NO_F (vqsubq, VQSUBQ) FUNCTION (vreinterpretq, vreinterpretq_impl,) FUNCTION_WITHOUT_N_NO_F (vrhaddq, VRHADDQ) FUNCTION_WITHOUT_N_NO_F (vrmulhq, VRMULHQ) +FUNCTION_ONLY_F (vrndq, VRNDQ) +FUNCTION_ONLY_F (vrndaq, VRNDAQ) +FUNCTION_ONLY_F (vrndmq, VRNDMQ) +FUNCTION_ONLY_F (vrndnq, VRNDNQ) +FUNCTION_ONLY_F (vrndpq, VRNDPQ) +FUNCTION_ONLY_F (vrndxq, VRNDXQ) FUNCTION_WITH_M_N_NO_F (vrshlq, VRSHLQ) FUNCTION_ONLY_N_NO_F (vrshrnbq, VRSHRNBQ) FUNCTION_ONLY_N_NO_F (vrshrntq, VRSHRNTQ) diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def index 7a8f5ac78e4..2928a554a11 100644 --- a/gcc/config/arm/arm-mve-builtins-base.def +++ b/gcc/config/arm/arm-mve-builtins-base.def @@ -78,6 +78,12 @@ DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_float, mx_or_none) DEF_MVE_FUNCTION (vnegq, unary, all_float, mx_or_none) DEF_MVE_FUNCTION (vorrq, binary_orrq, all_float, mx_or_none) DEF_MVE_FUNCTION (vreinterpretq, unary_convert, reinterpret_float, none) +DEF_MVE_FUNCTION (vrndaq, unary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vrndmq, unary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vrndnq, unary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vrndpq, unary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vrndq, unary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vrndxq, unary, all_float, mx_or_none) DEF_MVE_FUNCTION (vsubq, binary_opt_n, all_float, mx_or_none) DEF_MVE_FUNCTION (vuninitializedq, inherent, all_float, none) #undef REQUIRES_FLOAT diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h index 8425a84b9ad..b432011978e 100644 --- a/gcc/config/arm/arm-mve-builtins-base.h +++ b/gcc/config/arm/arm-mve-builtins-base.h @@ -58,6 +58,12 @@ extern const function_base *const vqsubq; extern const function_base *const vreinterpretq; extern const function_base *const vrhaddq; extern const function_base *const vrmulhq; +extern const function_base *const vrndq; +extern const function_base *const vrndaq; +extern const function_base *const vrndmq; +extern const function_base *const vrndnq; +extern const function_base *const vrndpq; +extern const function_base *const vrndxq; extern const function_base *const vrshlq; extern const function_base *const vrshrnbq; extern const function_base *const vrshrntq; diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h index 8101515497b..aae1f8bf639 100644 --- a/gcc/config/arm/arm_mve.h +++ b/gcc/config/arm/arm_mve.h @@ -330,12 +330,6 @@ #define vsetq_lane(__a, __b, 
__idx) __arm_vsetq_lane(__a, __b, __idx) #define vgetq_lane(__a, __idx) __arm_vgetq_lane(__a, __idx) #define vshlcq_m(__a, __b, __imm, __p) __arm_vshlcq_m(__a, __b, __imm, __p) -#define vrndxq(__a) __arm_vrndxq(__a) -#define vrndq(__a) __arm_vrndq(__a) -#define vrndpq(__a) __arm_vrndpq(__a) -#define vrndnq(__a) __arm_vrndnq(__a) -#define vrndmq(__a) __arm_vrndmq(__a) -#define vrndaq(__a) __arm_vrndaq(__a) #define vcvttq_f32(__a) __arm_vcvttq_f32(__a) #define vcvtbq_f32(__a) __arm_vcvtbq_f32(__a) #define vcvtq(__a) __arm_vcvtq(__a) @@ -372,12 +366,6 @@ #define vminnmaq_m(__a, __b, __p) __arm_vminnmaq_m(__a, __b, __p) #define vminnmavq_p(__a, __b, __p) __arm_vminnmavq_p(__a, __b, __p) #define vminnmvq_p(__a, __b, __p) __arm_vminnmvq_p(__a, __b, __p) -#define vrndaq_m(__inactive, __a, __p) __arm_vrndaq_m(__inactive, __a, __p) -#define vrndmq_m(__inactive, __a, __p) __arm_vrndmq_m(__inactive, __a, __p) -#define vrndnq_m(__inactive, __a, __p) __arm_vrndnq_m(__inactive, __a, __p) -#define vrndpq_m(__inactive, __a, __p) __arm_vrndpq_m(__inactive, __a, __p) -#define vrndq_m(__inactive, __a, __p) __arm_vrndq_m(__inactive, __a, __p) -#define vrndxq_m(__inactive, __a, __p) __arm_vrndxq_m(__inactive, __a, __p) #define vcvtq_m_n(__inactive, __a, __imm6, __p) __arm_vcvtq_m_n(__inactive, __a, __imm6, __p) #define vcmlaq_m(__a, __b, __c, __p) __arm_vcmlaq_m(__a, __b, __c, __p) #define vcmlaq_rot180_m(__a, __b, __c, __p) __arm_vcmlaq_rot180_m(__a, __b, __c, __p) @@ -400,12 +388,6 @@ #define vcmulq_rot270_x(__a, __b, __p) __arm_vcmulq_rot270_x(__a, __b, __p) #define vcvtq_x(__a, __p) __arm_vcvtq_x(__a, __p) #define vcvtq_x_n(__a, __imm6, __p) __arm_vcvtq_x_n(__a, __imm6, __p) -#define vrndq_x(__a, __p) __arm_vrndq_x(__a, __p) -#define vrndnq_x(__a, __p) __arm_vrndnq_x(__a, __p) -#define vrndmq_x(__a, __p) __arm_vrndmq_x(__a, __p) -#define vrndpq_x(__a, __p) __arm_vrndpq_x(__a, __p) -#define vrndaq_x(__a, __p) __arm_vrndaq_x(__a, __p) -#define vrndxq_x(__a, __p) __arm_vrndxq_x(__a, __p) #define vst4q_s8( __addr, __value) __arm_vst4q_s8( __addr, __value) @@ -416,18 +398,6 @@ #define vst4q_u32( __addr, __value) __arm_vst4q_u32( __addr, __value) #define vst4q_f16( __addr, __value) __arm_vst4q_f16( __addr, __value) #define vst4q_f32( __addr, __value) __arm_vst4q_f32( __addr, __value) -#define vrndxq_f16(__a) __arm_vrndxq_f16(__a) -#define vrndxq_f32(__a) __arm_vrndxq_f32(__a) -#define vrndq_f16(__a) __arm_vrndq_f16(__a) -#define vrndq_f32(__a) __arm_vrndq_f32(__a) -#define vrndpq_f16(__a) __arm_vrndpq_f16(__a) -#define vrndpq_f32(__a) __arm_vrndpq_f32(__a) -#define vrndnq_f16(__a) __arm_vrndnq_f16(__a) -#define vrndnq_f32(__a) __arm_vrndnq_f32(__a) -#define vrndmq_f16(__a) __arm_vrndmq_f16(__a) -#define vrndmq_f32(__a) __arm_vrndmq_f32(__a) -#define vrndaq_f16(__a) __arm_vrndaq_f16(__a) -#define vrndaq_f32(__a) __arm_vrndaq_f32(__a) #define vrev64q_f16(__a) __arm_vrev64q_f16(__a) #define vrev64q_f32(__a) __arm_vrev64q_f32(__a) #define vdupq_n_f16(__a) __arm_vdupq_n_f16(__a) @@ -1093,12 +1063,6 @@ #define vqmovntq_m_s16(__a, __b, __p) __arm_vqmovntq_m_s16(__a, __b, __p) #define vrev32q_m_s8(__inactive, __a, __p) __arm_vrev32q_m_s8(__inactive, __a, __p) #define vrev64q_m_f16(__inactive, __a, __p) __arm_vrev64q_m_f16(__inactive, __a, __p) -#define vrndaq_m_f16(__inactive, __a, __p) __arm_vrndaq_m_f16(__inactive, __a, __p) -#define vrndmq_m_f16(__inactive, __a, __p) __arm_vrndmq_m_f16(__inactive, __a, __p) -#define vrndnq_m_f16(__inactive, __a, __p) __arm_vrndnq_m_f16(__inactive, __a, __p) -#define 
vrndpq_m_f16(__inactive, __a, __p) __arm_vrndpq_m_f16(__inactive, __a, __p) -#define vrndq_m_f16(__inactive, __a, __p) __arm_vrndq_m_f16(__inactive, __a, __p) -#define vrndxq_m_f16(__inactive, __a, __p) __arm_vrndxq_m_f16(__inactive, __a, __p) #define vcmpeqq_m_n_f16(__a, __b, __p) __arm_vcmpeqq_m_n_f16(__a, __b, __p) #define vcmpgeq_m_f16(__a, __b, __p) __arm_vcmpgeq_m_f16(__a, __b, __p) #define vcmpgeq_m_n_f16(__a, __b, __p) __arm_vcmpgeq_m_n_f16(__a, __b, __p) @@ -1163,12 +1127,6 @@ #define vqmovntq_m_s32(__a, __b, __p) __arm_vqmovntq_m_s32(__a, __b, __p) #define vrev32q_m_s16(__inactive, __a, __p) __arm_vrev32q_m_s16(__inactive, __a, __p) #define vrev64q_m_f32(__inactive, __a, __p) __arm_vrev64q_m_f32(__inactive, __a, __p) -#define vrndaq_m_f32(__inactive, __a, __p) __arm_vrndaq_m_f32(__inactive, __a, __p) -#define vrndmq_m_f32(__inactive, __a, __p) __arm_vrndmq_m_f32(__inactive, __a, __p) -#define vrndnq_m_f32(__inactive, __a, __p) __arm_vrndnq_m_f32(__inactive, __a, __p) -#define vrndpq_m_f32(__inactive, __a, __p) __arm_vrndpq_m_f32(__inactive, __a, __p) -#define vrndq_m_f32(__inactive, __a, __p) __arm_vrndq_m_f32(__inactive, __a, __p) -#define vrndxq_m_f32(__inactive, __a, __p) __arm_vrndxq_m_f32(__inactive, __a, __p) #define vcmpeqq_m_n_f32(__a, __b, __p) __arm_vcmpeqq_m_n_f32(__a, __b, __p) #define vcmpgeq_m_f32(__a, __b, __p) __arm_vcmpgeq_m_f32(__a, __b, __p) #define vcmpgeq_m_n_f32(__a, __b, __p) __arm_vcmpgeq_m_n_f32(__a, __b, __p) @@ -1855,18 +1813,6 @@ #define vcvtq_x_n_s32_f32(__a, __imm6, __p) __arm_vcvtq_x_n_s32_f32(__a, __imm6, __p) #define vcvtq_x_n_u16_f16(__a, __imm6, __p) __arm_vcvtq_x_n_u16_f16(__a, __imm6, __p) #define vcvtq_x_n_u32_f32(__a, __imm6, __p) __arm_vcvtq_x_n_u32_f32(__a, __imm6, __p) -#define vrndq_x_f16(__a, __p) __arm_vrndq_x_f16(__a, __p) -#define vrndq_x_f32(__a, __p) __arm_vrndq_x_f32(__a, __p) -#define vrndnq_x_f16(__a, __p) __arm_vrndnq_x_f16(__a, __p) -#define vrndnq_x_f32(__a, __p) __arm_vrndnq_x_f32(__a, __p) -#define vrndmq_x_f16(__a, __p) __arm_vrndmq_x_f16(__a, __p) -#define vrndmq_x_f32(__a, __p) __arm_vrndmq_x_f32(__a, __p) -#define vrndpq_x_f16(__a, __p) __arm_vrndpq_x_f16(__a, __p) -#define vrndpq_x_f32(__a, __p) __arm_vrndpq_x_f32(__a, __p) -#define vrndaq_x_f16(__a, __p) __arm_vrndaq_x_f16(__a, __p) -#define vrndaq_x_f32(__a, __p) __arm_vrndaq_x_f32(__a, __p) -#define vrndxq_x_f16(__a, __p) __arm_vrndxq_x_f16(__a, __p) -#define vrndxq_x_f32(__a, __p) __arm_vrndxq_x_f32(__a, __p) #define vbicq_x_f16(__a, __b, __p) __arm_vbicq_x_f16(__a, __b, __p) #define vbicq_x_f32(__a, __b, __p) __arm_vbicq_x_f32(__a, __b, __p) #define vbrsrq_x_n_f16(__a, __b, __p) __arm_vbrsrq_x_n_f16(__a, __b, __p) @@ -10522,90 +10468,6 @@ __arm_vst4q_f32 (float32_t * __addr, float32x4x4_t __value) __builtin_mve_vst4qv4sf (__addr, __rv.__o); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_f16 (float16x8_t __a) -{ - return __builtin_mve_vrndxq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_f32 (float32x4_t __a) -{ - return __builtin_mve_vrndxq_fv4sf (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_f16 (float16x8_t __a) -{ - return __builtin_mve_vrndq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_f32 
(float32x4_t __a) -{ - return __builtin_mve_vrndq_fv4sf (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_f16 (float16x8_t __a) -{ - return __builtin_mve_vrndpq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_f32 (float32x4_t __a) -{ - return __builtin_mve_vrndpq_fv4sf (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_f16 (float16x8_t __a) -{ - return __builtin_mve_vrndnq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_f32 (float32x4_t __a) -{ - return __builtin_mve_vrndnq_fv4sf (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_f16 (float16x8_t __a) -{ - return __builtin_mve_vrndmq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_f32 (float32x4_t __a) -{ - return __builtin_mve_vrndmq_fv4sf (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_f16 (float16x8_t __a) -{ - return __builtin_mve_vrndaq_fv8hf (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_f32 (float32x4_t __a) -{ - return __builtin_mve_vrndaq_fv4sf (__a); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q_f16 (float16x8_t __a) @@ -11552,48 +11414,6 @@ __arm_vrev64q_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_fv8hf (__inactive, __a, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndaq_m_fv8hf (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndmq_m_fv8hf (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndnq_m_fv8hf (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndpq_m_fv8hf (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndq_m_fv8hf (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndxq_m_fv8hf (__inactive, __a, __p); -} - 
__extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpeqq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) @@ -11846,48 +11666,6 @@ __arm_vrev64q_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_fv4sf (__inactive, __a, __p); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndaq_m_fv4sf (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndmq_m_fv4sf (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndnq_m_fv4sf (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndpq_m_fv4sf (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndq_m_fv4sf (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndxq_m_fv4sf (__inactive, __a, __p); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpeqq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) @@ -12937,90 +12715,6 @@ __arm_vcvtq_x_n_u32_f32 (float32x4_t __a, const int __imm6, mve_pred16_t __p) return __builtin_mve_vcvtq_m_n_from_f_uv4si (__arm_vuninitializedq_u32 (), __a, __imm6, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndnq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndnq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndmq_m_fv8hf 
(__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndmq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndpq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndpq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndaq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndaq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_x_f16 (float16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndxq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_x_f32 (float32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrndxq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_x_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -20890,90 +20584,6 @@ __arm_vst4q (float32_t * __addr, float32x4x4_t __value) __arm_vst4q_f32 (__addr, __value); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq (float16x8_t __a) -{ - return __arm_vrndxq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq (float32x4_t __a) -{ - return __arm_vrndxq_f32 (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq (float16x8_t __a) -{ - return __arm_vrndq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq (float32x4_t __a) -{ - return __arm_vrndq_f32 (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq (float16x8_t __a) -{ - return __arm_vrndpq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq (float32x4_t __a) -{ - return __arm_vrndpq_f32 (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq (float16x8_t __a) -{ - return __arm_vrndnq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq 
(float32x4_t __a) -{ - return __arm_vrndnq_f32 (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq (float16x8_t __a) -{ - return __arm_vrndmq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq (float32x4_t __a) -{ - return __arm_vrndmq_f32 (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq (float16x8_t __a) -{ - return __arm_vrndaq_f16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq (float32x4_t __a) -{ - return __arm_vrndaq_f32 (__a); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q (float16x8_t __a) @@ -21737,48 +21347,6 @@ __arm_vrev64q_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) return __arm_vrev64q_m_f16 (__inactive, __a, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndaq_m_f16 (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndmq_m_f16 (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndnq_m_f16 (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndpq_m_f16 (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndq_m_f16 (__inactive, __a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndxq_m_f16 (__inactive, __a, __p); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpeqq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) @@ -22031,48 +21599,6 @@ __arm_vrev64q_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) return __arm_vrev64q_m_f32 (__inactive, __a, __p); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndaq_m_f32 (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndmq_m_f32 (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_m (float32x4_t 
__inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndnq_m_f32 (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndpq_m_f32 (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndq_m_f32 (__inactive, __a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndxq_m_f32 (__inactive, __a, __p); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpeqq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) @@ -22864,90 +22390,6 @@ __arm_vcvtq_x_n (uint32x4_t __a, const int __imm6, mve_pred16_t __p) return __arm_vcvtq_x_n_f32_u32 (__a, __imm6, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndq_x_f32 (__a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndnq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndnq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndnq_x_f32 (__a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndmq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndmq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndmq_x_f32 (__a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndpq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndpq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndpq_x_f32 (__a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndaq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndaq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndaq_x_f32 (__a, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_x (float16x8_t __a, mve_pred16_t __p) -{ - return __arm_vrndxq_x_f16 (__a, __p); -} - -__extension__ extern __inline float32x4_t 
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrndxq_x (float32x4_t __a, mve_pred16_t __p) -{ - return __arm_vrndxq_x_f32 (__a, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_x (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -23384,36 +22826,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16_t_ptr][__ARM_mve_type_float16x8x4_t]: __arm_vst4q_f16 (__ARM_mve_coerce(__p0, float16_t *), __ARM_mve_coerce(__p1, float16x8x4_t)), \ int (*)[__ARM_mve_type_float32_t_ptr][__ARM_mve_type_float32x4x4_t]: __arm_vst4q_f32 (__ARM_mve_coerce(__p0, float32_t *), __ARM_mve_coerce(__p1, float32x4x4_t)));}) -#define __arm_vrndxq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndxq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndxq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - -#define __arm_vrndq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - -#define __arm_vrndpq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndpq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndpq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - -#define __arm_vrndnq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndnq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndnq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - -#define __arm_vrndmq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndmq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndmq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - -#define __arm_vrndaq(p0) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndaq_f16 (__ARM_mve_coerce(__p0, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndaq_f32 (__ARM_mve_coerce(__p0, float32x4_t)));}) - #define __arm_vrev64q(p0) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vrev64q_s8 (__ARM_mve_coerce(__p0, int8x16_t)), \ @@ -24137,24 +23549,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmlaq_rot90_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), __ARM_mve_coerce(__p2, float16x8_t)), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmlaq_rot90_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), __ARM_mve_coerce(__p2, float32x4_t)));}) -#define __arm_vrndxq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vrndxq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), 
__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vrndxq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vrndq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vrndq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndpq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vrndpq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vrndpq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vcmpgtq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -24336,25 +23730,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_fp_n][__ARM_mve_type_float16x8_t]: __arm_vminnmvq_p_f16 (__ARM_mve_coerce2(p0, double), __ARM_mve_coerce(__p1, float16x8_t), p2), \ int (*)[__ARM_mve_type_fp_n][__ARM_mve_type_float32x4_t]: __arm_vminnmvq_p_f32 (__ARM_mve_coerce2(p0, double), __ARM_mve_coerce(__p1, float32x4_t), p2));}) -#define __arm_vrndnq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vrndnq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vrndnq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), __p2));}) - -#define __arm_vrndaq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vrndaq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vrndaq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndmq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vrndmq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vrndmq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vrev64q_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -25043,36 +24418,6 @@ extern void *__ARM_undef; int 
(*)[__ARM_mve_type_float16x8_t]: __arm_vrev64q_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ int (*)[__ARM_mve_type_float32x4_t]: __arm_vrev64q_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) -#define __arm_vrndaq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndaq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndaq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndmq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndmq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndmq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndnq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndnq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndnq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndpq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndpq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndpq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vrndxq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_float16x8_t]: __arm_vrndxq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t]: __arm_vrndxq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vcmulq_rot90_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ From patchwork Fri May 5 16:49:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 68843 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 7D9CA385292D for ; Fri, 5 May 2023 16:50:15 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 7D9CA385292D DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1683305415; bh=DDWQoRjCyOC6BSa8FrQuVeJXXMP0uf4a1PtX/SoFIcc=; h=To:CC:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=WCSRxCEvhB+gUqzJDzVYGnbtESB4DlFrxsDBVS43ROJRCM+VCJHX2s/v/jWANiJeV I8B1iZTaDqa1Pxz8tOSUah9CQxGPFGqjGNMeDR5RaL8plA3RsOzFySZtwzsfBO+Kig r4vBBvUPfglDDtC8FbSIJc3ST4N/60qLL7iOqRLw= X-Original-To: gcc-patches@gcc.gnu.org Delivered-To: gcc-patches@gcc.gnu.org Received: from EUR04-HE1-obe.outbound.protection.outlook.com (mail-he1eur04on2071.outbound.protection.outlook.com [40.107.7.71]) by sourceware.org (Postfix) with ESMTPS id 192313856940 for ; Fri, 5 May 2023 16:49:36 
+0000 (GMT)
Subject: [PATCH 05/10] arm: [MVE intrinsics] add binary_move_narrow and binary_move_narrow_unsigned shapes
Date: Fri, 5 May 2023 18:49:01 +0200
Message-ID: <20230505164906.596219-5-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
From: Christophe Lyon

This patch adds the binary_move_narrow and binary_move_narrow_unsigned shapes descriptions.

2022-09-08  Christophe Lyon

	gcc/
	* config/arm/arm-mve-builtins-shapes.cc (binary_move_narrow): New.
	(binary_move_narrow_unsigned): New.
	* config/arm/arm-mve-builtins-shapes.h (binary_move_narrow): New.
	(binary_move_narrow_unsigned): New.
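As a quick illustration of what these shapes describe (a hand-written sketch, not part of the patch itself; it assumes an MVE-enabled compiler, e.g. -march=armv8.1-m.main+mve.fp, and the helper names are made up), the overloaded forms infer the wide element type from the second argument and require the first argument to be the matching half-width vector, with the _unsigned variant forcing the narrow type to be unsigned:

#include <arm_mve.h>

/* Resolves to vmovnbq_s16: argument 1 is int16x8_t, so argument 0 and the
   result must be the matching narrow type int8x16_t (same type class,
   half the element size).  */
int8x16_t
narrow_bottom (int8x16_t acc, int16x8_t wide)
{
  return vmovnbq (acc, wide);
}

/* Resolves to vqmovunbq_s16: the narrow suffix is forced to be unsigned,
   so argument 0 and the result are uint8x16_t even though the wide input
   is signed.  */
uint8x16_t
narrow_bottom_unsigned (uint8x16_t acc, int16x8_t wide)
{
  return vqmovunbq (acc, wide);
}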
--- gcc/config/arm/arm-mve-builtins-shapes.cc | 73 +++++++++++++++++++++++ gcc/config/arm/arm-mve-builtins-shapes.h | 2 + 2 files changed, 75 insertions(+) diff --git a/gcc/config/arm/arm-mve-builtins-shapes.cc b/gcc/config/arm/arm-mve-builtins-shapes.cc index 7d39cf79aec..e26604510a2 100644 --- a/gcc/config/arm/arm-mve-builtins-shapes.cc +++ b/gcc/config/arm/arm-mve-builtins-shapes.cc @@ -401,6 +401,79 @@ struct binary_rshift_def : public overloaded_base<0> }; SHAPE (binary_rshift) +/* _t vfoo[_t0](_t, _t) + + Example: vmovnbq. + int8x16_t [__arm_]vmovnbq[_s16](int8x16_t a, int16x8_t b) + int8x16_t [__arm_]vmovnbq_m[_s16](int8x16_t a, int16x8_t b, mve_pred16_t p) */ +struct binary_move_narrow_def : public overloaded_base<0> +{ + void + build (function_builder &b, const function_group_info &group, + bool preserve_user_namespace) const override + { + b.add_overloaded_functions (group, MODE_none, preserve_user_namespace); + build_all (b, "vh0,vh0,v0", group, MODE_none, preserve_user_namespace); + } + + tree + resolve (function_resolver &r) const override + { + unsigned int i, nargs; + type_suffix_index type; + if (!r.check_gp_argument (2, i, nargs) + || (type = r.infer_vector_type (1)) == NUM_TYPE_SUFFIXES) + return error_mark_node; + + type_suffix_index narrow_suffix + = find_type_suffix (type_suffixes[type].tclass, + type_suffixes[type].element_bits / 2); + + + if (!r.require_matching_vector_type (0, narrow_suffix)) + return error_mark_node; + + return r.resolve_to (r.mode_suffix_id, type); + } +}; +SHAPE (binary_move_narrow) + +/* _t vfoo[_t0](_t, _t) + + Example: vqmovunbq. + uint8x16_t [__arm_]vqmovunbq[_s16](uint8x16_t a, int16x8_t b) + uint8x16_t [__arm_]vqmovunbq_m[_s16](uint8x16_t a, int16x8_t b, mve_pred16_t p) */ +struct binary_move_narrow_unsigned_def : public overloaded_base<0> +{ + void + build (function_builder &b, const function_group_info &group, + bool preserve_user_namespace) const override + { + b.add_overloaded_functions (group, MODE_none, preserve_user_namespace); + build_all (b, "vhu0,vhu0,v0", group, MODE_none, preserve_user_namespace); + } + + tree + resolve (function_resolver &r) const override + { + unsigned int i, nargs; + type_suffix_index type; + if (!r.check_gp_argument (2, i, nargs) + || (type = r.infer_vector_type (1)) == NUM_TYPE_SUFFIXES) + return error_mark_node; + + type_suffix_index narrow_suffix + = find_type_suffix (TYPE_unsigned, + type_suffixes[type].element_bits / 2); + + if (!r.require_matching_vector_type (0, narrow_suffix)) + return error_mark_node; + + return r.resolve_to (r.mode_suffix_id, type); + } +}; +SHAPE (binary_move_narrow_unsigned) + /* _t vfoo[_t0](_t, _t) _t vfoo[_n_t0](_t, _t) diff --git a/gcc/config/arm/arm-mve-builtins-shapes.h b/gcc/config/arm/arm-mve-builtins-shapes.h index bd7e11b89f6..825e1bb2a3c 100644 --- a/gcc/config/arm/arm-mve-builtins-shapes.h +++ b/gcc/config/arm/arm-mve-builtins-shapes.h @@ -37,6 +37,8 @@ namespace arm_mve extern const function_shape *const binary; extern const function_shape *const binary_lshift; extern const function_shape *const binary_lshift_r; + extern const function_shape *const binary_move_narrow; + extern const function_shape *const binary_move_narrow_unsigned; extern const function_shape *const binary_opt_n; extern const function_shape *const binary_orrq; extern const function_shape *const binary_round_lshift; From patchwork Fri May 5 16:49:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 
68848
Subject: [PATCH 06/10] arm: [MVE intrinsics] factorize vmovnbq vmovntq vqmovnbq vqmovntq vqmovunbq vqmovuntq
Date: Fri, 5 May 2023 18:49:02 +0200
Message-ID: <20230505164906.596219-6-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5728 X-Spam-Status: No, score=-12.5 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, FORGED_SPF_HELO, GIT_PATCH_0, KAM_DMARC_NONE, RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2, SPF_HELO_PASS, SPF_NONE, TXREP, T_SCC_BODY_TEXT_LINE, UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: gcc-patches@gcc.gnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Gcc-patches mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Christophe Lyon via Gcc-patches From: Christophe Lyon Reply-To: Christophe Lyon Errors-To: gcc-patches-bounces+patchwork=sourceware.org@gcc.gnu.org Sender: "Gcc-patches" Factorize vmovnbq vmovntq vqmovnbq vqmovntq vqmovunbq vqmovuntq so that they use the same pattern. 2022-09-08 Christophe Lyon gcc/ * config/arm/iterators.md (MVE_MOVN, MVE_MOVN_M): New. (mve_insn): Add vmovnb, vmovnt, vqmovnb, vqmovnt, vqmovunb, vqmovunt. (isu): Likewise. (supf): Add VQMOVUNBQ_M_S, VQMOVUNBQ_S, VQMOVUNTQ_M_S, VQMOVUNTQ_S. * config/arm/mve.md (mve_vmovnbq_) (mve_vmovntq_, mve_vqmovnbq_) (mve_vqmovntq_, mve_vqmovunbq_s) (mve_vqmovuntq_s): Merge into ... (@mve_q_): ... this. (mve_vmovnbq_m_, mve_vmovntq_m_) (mve_vqmovnbq_m_, mve_vqmovntq_m_) (mve_vqmovunbq_m_s, mve_vqmovuntq_m_s): Merge into ... (@mve_q_m_): ... this. --- gcc/config/arm/iterators.md | 46 +++++++++ gcc/config/arm/mve.md | 180 ++++-------------------------------- 2 files changed, 64 insertions(+), 162 deletions(-) diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md index 0b4f69ee874..20735284979 100644 --- a/gcc/config/arm/iterators.md +++ b/gcc/config/arm/iterators.md @@ -578,6 +578,24 @@ (define_int_iterator MVE_FP_CREATE_ONLY [ VCREATEQ_F ]) +(define_int_iterator MVE_MOVN [ + VMOVNBQ_S VMOVNBQ_U + VMOVNTQ_S VMOVNTQ_U + VQMOVNBQ_S VQMOVNBQ_U + VQMOVNTQ_S VQMOVNTQ_U + VQMOVUNBQ_S + VQMOVUNTQ_S + ]) + +(define_int_iterator MVE_MOVN_M [ + VMOVNBQ_M_S VMOVNBQ_M_U + VMOVNTQ_M_S VMOVNTQ_M_U + VQMOVNBQ_M_S VQMOVNBQ_M_U + VQMOVNTQ_M_S VQMOVNTQ_M_U + VQMOVUNBQ_M_S + VQMOVUNTQ_M_S + ]) + (define_code_attr mve_addsubmul [ (minus "vsub") (mult "vmul") @@ -613,6 +631,10 @@ (define_int_attr mve_insn [ (VMINQ_M_S "vmin") (VMINQ_M_U "vmin") (VMLAQ_M_N_S "vmla") (VMLAQ_M_N_U "vmla") (VMLASQ_M_N_S "vmlas") (VMLASQ_M_N_U "vmlas") + (VMOVNBQ_M_S "vmovnb") (VMOVNBQ_M_U "vmovnb") + (VMOVNBQ_S "vmovnb") (VMOVNBQ_U "vmovnb") + (VMOVNTQ_M_S "vmovnt") (VMOVNTQ_M_U "vmovnt") + (VMOVNTQ_S "vmovnt") (VMOVNTQ_U "vmovnt") (VMULHQ_M_S "vmulh") (VMULHQ_M_U "vmulh") (VMULHQ_S "vmulh") (VMULHQ_U "vmulh") (VMULQ_M_N_S "vmul") (VMULQ_M_N_U "vmul") (VMULQ_M_N_F "vmul") @@ -639,6 +661,14 @@ (define_int_attr mve_insn [ (VQDMULHQ_M_S "vqdmulh") (VQDMULHQ_N_S "vqdmulh") (VQDMULHQ_S "vqdmulh") + (VQMOVNBQ_M_S "vqmovnb") (VQMOVNBQ_M_U "vqmovnb") + (VQMOVNBQ_S "vqmovnb") (VQMOVNBQ_U "vqmovnb") + (VQMOVNTQ_M_S "vqmovnt") (VQMOVNTQ_M_U "vqmovnt") + (VQMOVNTQ_S "vqmovnt") (VQMOVNTQ_U "vqmovnt") + (VQMOVUNBQ_M_S "vqmovunb") + (VQMOVUNBQ_S "vqmovunb") + (VQMOVUNTQ_M_S "vqmovunt") + (VQMOVUNTQ_S "vqmovunt") (VQNEGQ_M_S "vqneg") (VQNEGQ_S "vqneg") (VQRDMLADHQ_M_S "vqrdmladh") @@ -723,8 +753,20 @@ (define_int_attr isu [ (VCLSQ_M_S "s") (VCLZQ_M_S "i") (VCLZQ_M_U "i") + (VMOVNBQ_M_S "i") 
(VMOVNBQ_M_U "i") + (VMOVNBQ_S "i") (VMOVNBQ_U "i") + (VMOVNTQ_M_S "i") (VMOVNTQ_M_U "i") + (VMOVNTQ_S "i") (VMOVNTQ_U "i") (VNEGQ_M_S "s") (VQABSQ_M_S "s") + (VQMOVNBQ_M_S "s") (VQMOVNBQ_M_U "u") + (VQMOVNBQ_S "s") (VQMOVNBQ_U "u") + (VQMOVNTQ_M_S "s") (VQMOVNTQ_M_U "u") + (VQMOVNTQ_S "s") (VQMOVNTQ_U "u") + (VQMOVUNBQ_M_S "s") + (VQMOVUNBQ_S "s") + (VQMOVUNTQ_M_S "s") + (VQMOVUNTQ_S "s") (VQNEGQ_M_S "s") (VQRSHRNBQ_M_N_S "s") (VQRSHRNBQ_M_N_U "u") (VQRSHRNBQ_N_S "s") (VQRSHRNBQ_N_U "u") @@ -1942,6 +1984,10 @@ (define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s") (VCLSQ_S "s") (VQABSQ_S "s") (VQNEGQ_S "s") + (VQMOVUNBQ_M_S "s") + (VQMOVUNBQ_S "s") + (VQMOVUNTQ_M_S "s") + (VQMOVUNTQ_S "s") ]) ;; Both kinds of return insn. diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md index 7bf344d547a..2273078807b 100644 --- a/gcc/config/arm/mve.md +++ b/gcc/config/arm/mve.md @@ -1645,32 +1645,22 @@ (define_insn "mve_vmlsldavxq_s" ]) ;; -;; [vmovnbq_u, vmovnbq_s]) +;; [vmovnbq_u, vmovnbq_s] +;; [vmovntq_s, vmovntq_u] +;; [vqmovnbq_u, vqmovnbq_s] +;; [vqmovntq_u, vqmovntq_s] +;; [vqmovunbq_s] +;; [vqmovuntq_s] ;; -(define_insn "mve_vmovnbq_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w")] - VMOVNBQ)) - ] - "TARGET_HAVE_MVE" - "vmovnb.i%# %q0, %q2" - [(set_attr "type" "mve_move") -]) - -;; -;; [vmovntq_s, vmovntq_u]) -;; -(define_insn "mve_vmovntq_" +(define_insn "@mve_q_" [ (set (match_operand: 0 "s_register_operand" "=w") (unspec: [(match_operand: 1 "s_register_operand" "0") (match_operand:MVE_5 2 "s_register_operand" "w")] - VMOVNTQ)) + MVE_MOVN)) ] "TARGET_HAVE_MVE" - "vmovnt.i%# %q0, %q2" + ".%#\t%q0, %q2" [(set_attr "type" "mve_move") ]) @@ -1794,66 +1784,6 @@ (define_insn "mve_vqdmulltq_s" [(set_attr "type" "mve_move") ]) -;; -;; [vqmovnbq_u, vqmovnbq_s]) -;; -(define_insn "mve_vqmovnbq_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w")] - VQMOVNBQ)) - ] - "TARGET_HAVE_MVE" - "vqmovnb.%# %q0, %q2" - [(set_attr "type" "mve_move") -]) - -;; -;; [vqmovntq_u, vqmovntq_s]) -;; -(define_insn "mve_vqmovntq_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w")] - VQMOVNTQ)) - ] - "TARGET_HAVE_MVE" - "vqmovnt.%# %q0, %q2" - [(set_attr "type" "mve_move") -]) - -;; -;; [vqmovunbq_s]) -;; -(define_insn "mve_vqmovunbq_s" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w")] - VQMOVUNBQ_S)) - ] - "TARGET_HAVE_MVE" - "vqmovunb.s%# %q0, %q2" - [(set_attr "type" "mve_move") -]) - -;; -;; [vqmovuntq_s]) -;; -(define_insn "mve_vqmovuntq_s" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w")] - VQMOVUNTQ_S)) - ] - "TARGET_HAVE_MVE" - "vqmovunt.s%# %q0, %q2" - [(set_attr "type" "mve_move") -]) - ;; ;; [vrmlaldavhxq_s]) ;; @@ -3620,35 +3550,25 @@ (define_insn "mve_vmovltq_m_" "vpst\;vmovltt.%# %q0, %q2" [(set_attr "type" "mve_move") (set_attr "length""8")]) -;; -;; [vmovnbq_m_u, vmovnbq_m_s]) -;; -(define_insn "mve_vmovnbq_m_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") 
- (match_operand:MVE_5 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VMOVNBQ_M)) - ] - "TARGET_HAVE_MVE" - "vpst\;vmovnbt.i%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) ;; -;; [vmovntq_m_u, vmovntq_m_s]) +;; [vmovnbq_m_u, vmovnbq_m_s] +;; [vmovntq_m_u, vmovntq_m_s] +;; [vqmovnbq_m_s, vqmovnbq_m_u] +;; [vqmovntq_m_u, vqmovntq_m_s] +;; [vqmovunbq_m_s] +;; [vqmovuntq_m_s] ;; -(define_insn "mve_vmovntq_m_" +(define_insn "@mve_q_m_" [ (set (match_operand: 0 "s_register_operand" "=w") (unspec: [(match_operand: 1 "s_register_operand" "0") (match_operand:MVE_5 2 "s_register_operand" "w") (match_operand: 3 "vpr_register_operand" "Up")] - VMOVNTQ_M)) + MVE_MOVN_M)) ] "TARGET_HAVE_MVE" - "vpst\;vmovntt.i%# %q0, %q2" + "vpst\;t.%#\t%q0, %q2" [(set_attr "type" "mve_move") (set_attr "length""8")]) @@ -3701,70 +3621,6 @@ (define_insn "@mve_vpselq_f" [(set_attr "type" "mve_move") ]) -;; -;; [vqmovnbq_m_s, vqmovnbq_m_u]) -;; -(define_insn "mve_vqmovnbq_m_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VQMOVNBQ_M)) - ] - "TARGET_HAVE_MVE" - "vpst\;vqmovnbt.%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vqmovntq_m_u, vqmovntq_m_s]) -;; -(define_insn "mve_vqmovntq_m_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VQMOVNTQ_M)) - ] - "TARGET_HAVE_MVE" - "vpst\;vqmovntt.%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vqmovunbq_m_s]) -;; -(define_insn "mve_vqmovunbq_m_s" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VQMOVUNBQ_M_S)) - ] - "TARGET_HAVE_MVE" - "vpst\;vqmovunbt.s%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vqmovuntq_m_s]) -;; -(define_insn "mve_vqmovuntq_m_s" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_5 2 "s_register_operand" "w") - (match_operand: 3 "vpr_register_operand" "Up")] - VQMOVUNTQ_M_S)) - ] - "TARGET_HAVE_MVE" - "vpst\;vqmovuntt.s%# %q0, %q2" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - ;; ;; [vrev32q_m_f]) ;; From patchwork Fri May 5 16:49:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 68850 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id E91AF3852763 for ; Fri, 5 May 2023 16:55:09 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org E91AF3852763 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1683305710; bh=CdFUzhbF8cC3r2dDj2k8dodI4LommZZqWClEJBc1UQM=; h=To:CC:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=pYVVFzmvmyZFrOMuR/L3pVQfFEqrf8SGK4NJZBsjvkz/ggRoIStZMRYGJ6Z+qaZwX qHxcNdCiaQQ9x5OP6cTUxQ55ANnR07gEUHIigl1wZYcwfZyM2bq32+bzCRRMD9saII 
To: , , ,
CC: Christophe Lyon
Subject: [PATCH 07/10] arm: [MVE intrinsics] rework vmovnbq vmovntq vqmovnbq vqmovntq vqmovunbq vqmovuntq
Date: Fri, 5 May 2023 18:49:03 +0200
Message-ID: <20230505164906.596219-7-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
MIME-Version: 1.0
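For reference, here is a minimal usage sketch of the narrowing intrinsics named in the subject line. It is not part of the patch; it assumes an MVE-enabled target (for instance -march=armv8.1-m.main+mve), and the helper function names are purely illustrative.

/* Illustrative sketch only, not part of the patch.  */
#include <arm_mve.h>

/* Pack two uint16x8_t vectors into one uint8x16_t: vmovnbq writes the
   even-numbered byte lanes of its first argument, vmovntq the odd ones.  */
uint8x16_t
pack_u16 (uint16x8_t lo, uint16x8_t hi)
{
  uint8x16_t r = vdupq_n_u8 (0);
  r = vmovnbq_u16 (r, lo);
  r = vmovntq_u16 (r, hi);
  return r;
}

/* Saturating signed-to-unsigned narrowing; the predicated _m form updates
   only the lanes selected by the predicate and leaves the other lanes of R
   unchanged.  */
uint8x16_t
sat_narrow_s16 (uint8x16_t r, int16x8_t a, mve_pred16_t p)
{
  r = vqmovunbq_s16 (r, a);
  r = vqmovuntq_m_s16 (r, a, p);
  return r;
}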
Precedence: list List-Id: Gcc-patches mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Christophe Lyon via Gcc-patches From: Christophe Lyon Reply-To: Christophe Lyon Errors-To: gcc-patches-bounces+patchwork=sourceware.org@gcc.gnu.org Sender: "Gcc-patches" Implement vmovnbq, vmovntq, vqmovnbq, vqmovntq, vqmovunbq, vqmovuntq using the new MVE builtins framework. 2022-09-08 Christophe Lyon gcc/ * config/arm/arm-mve-builtins-base.cc (vmovnbq, vmovntq, vqmovnbq) (vqmovntq, vqmovunbq, vqmovuntq): New. * config/arm/arm-mve-builtins-base.def (vmovnbq, vmovntq) (vqmovnbq, vqmovntq, vqmovunbq, vqmovuntq): New. * config/arm/arm-mve-builtins-base.h (vmovnbq, vmovntq, vqmovnbq) (vqmovntq, vqmovunbq, vqmovuntq): New. * config/arm/arm-mve-builtins.cc (function_instance::has_inactive_argument): Handle vmovnbq, vmovntq, vqmovnbq, vqmovntq, vqmovunbq, vqmovuntq. * config/arm/arm_mve.h (vqmovntq): Remove. (vqmovnbq): Remove. (vqmovnbq_m): Remove. (vqmovntq_m): Remove. (vqmovntq_u16): Remove. (vqmovnbq_u16): Remove. (vqmovntq_s16): Remove. (vqmovnbq_s16): Remove. (vqmovntq_u32): Remove. (vqmovnbq_u32): Remove. (vqmovntq_s32): Remove. (vqmovnbq_s32): Remove. (vqmovnbq_m_s16): Remove. (vqmovntq_m_s16): Remove. (vqmovnbq_m_u16): Remove. (vqmovntq_m_u16): Remove. (vqmovnbq_m_s32): Remove. (vqmovntq_m_s32): Remove. (vqmovnbq_m_u32): Remove. (vqmovntq_m_u32): Remove. (__arm_vqmovntq_u16): Remove. (__arm_vqmovnbq_u16): Remove. (__arm_vqmovntq_s16): Remove. (__arm_vqmovnbq_s16): Remove. (__arm_vqmovntq_u32): Remove. (__arm_vqmovnbq_u32): Remove. (__arm_vqmovntq_s32): Remove. (__arm_vqmovnbq_s32): Remove. (__arm_vqmovnbq_m_s16): Remove. (__arm_vqmovntq_m_s16): Remove. (__arm_vqmovnbq_m_u16): Remove. (__arm_vqmovntq_m_u16): Remove. (__arm_vqmovnbq_m_s32): Remove. (__arm_vqmovntq_m_s32): Remove. (__arm_vqmovnbq_m_u32): Remove. (__arm_vqmovntq_m_u32): Remove. (__arm_vqmovntq): Remove. (__arm_vqmovnbq): Remove. (__arm_vqmovnbq_m): Remove. (__arm_vqmovntq_m): Remove. (vmovntq): Remove. (vmovnbq): Remove. (vmovnbq_m): Remove. (vmovntq_m): Remove. (vmovntq_u16): Remove. (vmovnbq_u16): Remove. (vmovntq_s16): Remove. (vmovnbq_s16): Remove. (vmovntq_u32): Remove. (vmovnbq_u32): Remove. (vmovntq_s32): Remove. (vmovnbq_s32): Remove. (vmovnbq_m_s16): Remove. (vmovntq_m_s16): Remove. (vmovnbq_m_u16): Remove. (vmovntq_m_u16): Remove. (vmovnbq_m_s32): Remove. (vmovntq_m_s32): Remove. (vmovnbq_m_u32): Remove. (vmovntq_m_u32): Remove. (__arm_vmovntq_u16): Remove. (__arm_vmovnbq_u16): Remove. (__arm_vmovntq_s16): Remove. (__arm_vmovnbq_s16): Remove. (__arm_vmovntq_u32): Remove. (__arm_vmovnbq_u32): Remove. (__arm_vmovntq_s32): Remove. (__arm_vmovnbq_s32): Remove. (__arm_vmovnbq_m_s16): Remove. (__arm_vmovntq_m_s16): Remove. (__arm_vmovnbq_m_u16): Remove. (__arm_vmovntq_m_u16): Remove. (__arm_vmovnbq_m_s32): Remove. (__arm_vmovntq_m_s32): Remove. (__arm_vmovnbq_m_u32): Remove. (__arm_vmovntq_m_u32): Remove. (__arm_vmovntq): Remove. (__arm_vmovnbq): Remove. (__arm_vmovnbq_m): Remove. (__arm_vmovntq_m): Remove. (vqmovuntq): Remove. (vqmovunbq): Remove. (vqmovunbq_m): Remove. (vqmovuntq_m): Remove. (vqmovuntq_s16): Remove. (vqmovunbq_s16): Remove. (vqmovuntq_s32): Remove. (vqmovunbq_s32): Remove. (vqmovunbq_m_s16): Remove. (vqmovuntq_m_s16): Remove. (vqmovunbq_m_s32): Remove. (vqmovuntq_m_s32): Remove. (__arm_vqmovuntq_s16): Remove. (__arm_vqmovunbq_s16): Remove. (__arm_vqmovuntq_s32): Remove. (__arm_vqmovunbq_s32): Remove. (__arm_vqmovunbq_m_s16): Remove. 
(__arm_vqmovuntq_m_s16): Remove. (__arm_vqmovunbq_m_s32): Remove. (__arm_vqmovuntq_m_s32): Remove. (__arm_vqmovuntq): Remove. (__arm_vqmovunbq): Remove. (__arm_vqmovunbq_m): Remove. (__arm_vqmovuntq_m): Remove. --- gcc/config/arm/arm-mve-builtins-base.cc | 6 + gcc/config/arm/arm-mve-builtins-base.def | 6 + gcc/config/arm/arm-mve-builtins-base.h | 8 +- gcc/config/arm/arm-mve-builtins.cc | 6 + gcc/config/arm/arm_mve.h | 788 ----------------------- 5 files changed, 25 insertions(+), 789 deletions(-) diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc index 4cf4464a48e..1dae12b445b 100644 --- a/gcc/config/arm/arm-mve-builtins-base.cc +++ b/gcc/config/arm/arm-mve-builtins-base.cc @@ -224,12 +224,18 @@ FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ) FUNCTION_WITH_M_N_NO_F (vhsubq, VHSUBQ) FUNCTION_WITH_RTX_M_NO_F (vmaxq, SMAX, UMAX, VMAXQ) FUNCTION_WITH_RTX_M_NO_F (vminq, SMIN, UMIN, VMINQ) +FUNCTION_WITHOUT_N_NO_F (vmovnbq, VMOVNBQ) +FUNCTION_WITHOUT_N_NO_F (vmovntq, VMOVNTQ) FUNCTION_WITHOUT_N_NO_F (vmulhq, VMULHQ) FUNCTION_WITH_RTX_M_N (vmulq, MULT, VMULQ) FUNCTION (vnegq, unspec_based_mve_function_exact_insn, (NEG, NEG, NEG, -1, -1, -1, VNEGQ_M_S, -1, VNEGQ_M_F, -1, -1, -1)) FUNCTION_WITH_RTX_M_N_NO_N_F (vorrq, IOR, VORRQ) FUNCTION_WITHOUT_N_NO_U_F (vqabsq, VQABSQ) FUNCTION_WITH_M_N_NO_F (vqaddq, VQADDQ) +FUNCTION_WITHOUT_N_NO_F (vqmovnbq, VQMOVNBQ) +FUNCTION_WITHOUT_N_NO_F (vqmovntq, VQMOVNTQ) +FUNCTION_WITHOUT_N_NO_U_F (vqmovunbq, VQMOVUNBQ) +FUNCTION_WITHOUT_N_NO_U_F (vqmovuntq, VQMOVUNTQ) FUNCTION_WITH_M_N_NO_U_F (vqdmulhq, VQDMULHQ) FUNCTION_WITHOUT_N_NO_U_F (vqnegq, VQNEGQ) FUNCTION_WITH_M_N_NO_F (vqrshlq, VQRSHLQ) diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def index 2928a554a11..f868614fb6b 100644 --- a/gcc/config/arm/arm-mve-builtins-base.def +++ b/gcc/config/arm/arm-mve-builtins-base.def @@ -30,6 +30,8 @@ DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none) DEF_MVE_FUNCTION (vhsubq, binary_opt_n, all_integer, mx_or_none) DEF_MVE_FUNCTION (vmaxq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vminq, binary, all_integer, mx_or_none) +DEF_MVE_FUNCTION (vmovnbq, binary_move_narrow, integer_16_32, m_or_none) +DEF_MVE_FUNCTION (vmovntq, binary_move_narrow, integer_16_32, m_or_none) DEF_MVE_FUNCTION (vmulhq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_integer, mx_or_none) DEF_MVE_FUNCTION (vnegq, unary, all_signed, mx_or_none) @@ -37,6 +39,10 @@ DEF_MVE_FUNCTION (vorrq, binary_orrq, all_integer, mx_or_none) DEF_MVE_FUNCTION (vqabsq, unary, all_signed, m_or_none) DEF_MVE_FUNCTION (vqaddq, binary_opt_n, all_integer, m_or_none) DEF_MVE_FUNCTION (vqdmulhq, binary_opt_n, all_signed, m_or_none) +DEF_MVE_FUNCTION (vqmovnbq, binary_move_narrow, integer_16_32, m_or_none) +DEF_MVE_FUNCTION (vqmovntq, binary_move_narrow, integer_16_32, m_or_none) +DEF_MVE_FUNCTION (vqmovunbq, binary_move_narrow_unsigned, signed_16_32, m_or_none) +DEF_MVE_FUNCTION (vqmovuntq, binary_move_narrow_unsigned, signed_16_32, m_or_none) DEF_MVE_FUNCTION (vqnegq, unary, all_signed, m_or_none) DEF_MVE_FUNCTION (vqrdmulhq, binary_opt_n, all_signed, m_or_none) DEF_MVE_FUNCTION (vqrshlq, binary_round_lshift, all_integer, m_or_none) diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h index b432011978e..f4960cbbea2 100644 --- a/gcc/config/arm/arm-mve-builtins-base.h +++ b/gcc/config/arm/arm-mve-builtins-base.h @@ -35,6 +35,8 @@ extern const function_base *const 
vhaddq; extern const function_base *const vhsubq; extern const function_base *const vmaxq; extern const function_base *const vminq; +extern const function_base *const vmovnbq; +extern const function_base *const vmovntq; extern const function_base *const vmulhq; extern const function_base *const vmulq; extern const function_base *const vnegq; @@ -42,6 +44,10 @@ extern const function_base *const vorrq; extern const function_base *const vqabsq; extern const function_base *const vqaddq; extern const function_base *const vqdmulhq; +extern const function_base *const vqmovnbq; +extern const function_base *const vqmovntq; +extern const function_base *const vqmovunbq; +extern const function_base *const vqmovuntq; extern const function_base *const vqnegq; extern const function_base *const vqrdmulhq; extern const function_base *const vqrshlq; @@ -58,11 +64,11 @@ extern const function_base *const vqsubq; extern const function_base *const vreinterpretq; extern const function_base *const vrhaddq; extern const function_base *const vrmulhq; -extern const function_base *const vrndq; extern const function_base *const vrndaq; extern const function_base *const vrndmq; extern const function_base *const vrndnq; extern const function_base *const vrndpq; +extern const function_base *const vrndq; extern const function_base *const vrndxq; extern const function_base *const vrshlq; extern const function_base *const vrshrnbq; diff --git a/gcc/config/arm/arm-mve-builtins.cc b/gcc/config/arm/arm-mve-builtins.cc index 7c34d2a94de..38639f75785 100644 --- a/gcc/config/arm/arm-mve-builtins.cc +++ b/gcc/config/arm/arm-mve-builtins.cc @@ -670,6 +670,12 @@ function_instance::has_inactive_argument () const return false; if (mode_suffix_id == MODE_r + || base == functions::vmovnbq + || base == functions::vmovntq + || base == functions::vqmovnbq + || base == functions::vqmovntq + || base == functions::vqmovunbq + || base == functions::vqmovuntq || (base == functions::vorrq && mode_suffix_id == MODE_n) || (base == functions::vqrshlq && mode_suffix_id == MODE_n) || base == functions::vqrshrnbq diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h index aae1f8bf639..97f0ef93ee9 100644 --- a/gcc/config/arm/arm_mve.h +++ b/gcc/config/arm/arm_mve.h @@ -82,15 +82,9 @@ #define vmladavxq(__a, __b) __arm_vmladavxq(__a, __b) #define vhcaddq_rot90(__a, __b) __arm_vhcaddq_rot90(__a, __b) #define vhcaddq_rot270(__a, __b) __arm_vhcaddq_rot270(__a, __b) -#define vqmovntq(__a, __b) __arm_vqmovntq(__a, __b) -#define vqmovnbq(__a, __b) __arm_vqmovnbq(__a, __b) #define vmulltq_poly(__a, __b) __arm_vmulltq_poly(__a, __b) #define vmullbq_poly(__a, __b) __arm_vmullbq_poly(__a, __b) -#define vmovntq(__a, __b) __arm_vmovntq(__a, __b) -#define vmovnbq(__a, __b) __arm_vmovnbq(__a, __b) #define vmlaldavq(__a, __b) __arm_vmlaldavq(__a, __b) -#define vqmovuntq(__a, __b) __arm_vqmovuntq(__a, __b) -#define vqmovunbq(__a, __b) __arm_vqmovunbq(__a, __b) #define vshlltq(__a, __imm) __arm_vshlltq(__a, __imm) #define vshllbq(__a, __imm) __arm_vshllbq(__a, __imm) #define vqdmulltq(__a, __b) __arm_vqdmulltq(__a, __b) @@ -170,13 +164,7 @@ #define vmlsldavxq_p(__a, __b, __p) __arm_vmlsldavxq_p(__a, __b, __p) #define vmovlbq_m(__inactive, __a, __p) __arm_vmovlbq_m(__inactive, __a, __p) #define vmovltq_m(__inactive, __a, __p) __arm_vmovltq_m(__inactive, __a, __p) -#define vmovnbq_m(__a, __b, __p) __arm_vmovnbq_m(__a, __b, __p) -#define vmovntq_m(__a, __b, __p) __arm_vmovntq_m(__a, __b, __p) -#define vqmovnbq_m(__a, __b, __p) __arm_vqmovnbq_m(__a, __b, __p) 
-#define vqmovntq_m(__a, __b, __p) __arm_vqmovntq_m(__a, __b, __p) #define vrev32q_m(__inactive, __a, __p) __arm_vrev32q_m(__inactive, __a, __p) -#define vqmovunbq_m(__a, __b, __p) __arm_vqmovunbq_m(__a, __b, __p) -#define vqmovuntq_m(__a, __b, __p) __arm_vqmovuntq_m(__a, __b, __p) #define vsriq_m(__a, __b, __imm, __p) __arm_vsriq_m(__a, __b, __imm, __p) #define vqshluq_m(__inactive, __a, __imm, __p) __arm_vqshluq_m(__inactive, __a, __imm, __p) #define vabavq_p(__a, __b, __c, __p) __arm_vabavq_p(__a, __b, __c, __p) @@ -652,15 +640,9 @@ #define vbrsrq_n_s32(__a, __b) __arm_vbrsrq_n_s32(__a, __b) #define vbicq_s32(__a, __b) __arm_vbicq_s32(__a, __b) #define vaddvaq_s32(__a, __b) __arm_vaddvaq_s32(__a, __b) -#define vqmovntq_u16(__a, __b) __arm_vqmovntq_u16(__a, __b) -#define vqmovnbq_u16(__a, __b) __arm_vqmovnbq_u16(__a, __b) #define vmulltq_poly_p8(__a, __b) __arm_vmulltq_poly_p8(__a, __b) #define vmullbq_poly_p8(__a, __b) __arm_vmullbq_poly_p8(__a, __b) -#define vmovntq_u16(__a, __b) __arm_vmovntq_u16(__a, __b) -#define vmovnbq_u16(__a, __b) __arm_vmovnbq_u16(__a, __b) #define vmlaldavq_u16(__a, __b) __arm_vmlaldavq_u16(__a, __b) -#define vqmovuntq_s16(__a, __b) __arm_vqmovuntq_s16(__a, __b) -#define vqmovunbq_s16(__a, __b) __arm_vqmovunbq_s16(__a, __b) #define vshlltq_n_u8(__a, __imm) __arm_vshlltq_n_u8(__a, __imm) #define vshllbq_n_u8(__a, __imm) __arm_vshllbq_n_u8(__a, __imm) #define vbicq_n_u16(__a, __imm) __arm_vbicq_n_u16(__a, __imm) @@ -676,15 +658,11 @@ #define vcmpgeq_f16(__a, __b) __arm_vcmpgeq_f16(__a, __b) #define vcmpeqq_n_f16(__a, __b) __arm_vcmpeqq_n_f16(__a, __b) #define vcmpeqq_f16(__a, __b) __arm_vcmpeqq_f16(__a, __b) -#define vqmovntq_s16(__a, __b) __arm_vqmovntq_s16(__a, __b) -#define vqmovnbq_s16(__a, __b) __arm_vqmovnbq_s16(__a, __b) #define vqdmulltq_s16(__a, __b) __arm_vqdmulltq_s16(__a, __b) #define vqdmulltq_n_s16(__a, __b) __arm_vqdmulltq_n_s16(__a, __b) #define vqdmullbq_s16(__a, __b) __arm_vqdmullbq_s16(__a, __b) #define vqdmullbq_n_s16(__a, __b) __arm_vqdmullbq_n_s16(__a, __b) #define vornq_f16(__a, __b) __arm_vornq_f16(__a, __b) -#define vmovntq_s16(__a, __b) __arm_vmovntq_s16(__a, __b) -#define vmovnbq_s16(__a, __b) __arm_vmovnbq_s16(__a, __b) #define vmlsldavxq_s16(__a, __b) __arm_vmlsldavxq_s16(__a, __b) #define vmlsldavq_s16(__a, __b) __arm_vmlsldavq_s16(__a, __b) #define vmlaldavxq_s16(__a, __b) __arm_vmlaldavxq_s16(__a, __b) @@ -707,15 +685,9 @@ #define vshlltq_n_s8(__a, __imm) __arm_vshlltq_n_s8(__a, __imm) #define vshllbq_n_s8(__a, __imm) __arm_vshllbq_n_s8(__a, __imm) #define vbicq_n_s16(__a, __imm) __arm_vbicq_n_s16(__a, __imm) -#define vqmovntq_u32(__a, __b) __arm_vqmovntq_u32(__a, __b) -#define vqmovnbq_u32(__a, __b) __arm_vqmovnbq_u32(__a, __b) #define vmulltq_poly_p16(__a, __b) __arm_vmulltq_poly_p16(__a, __b) #define vmullbq_poly_p16(__a, __b) __arm_vmullbq_poly_p16(__a, __b) -#define vmovntq_u32(__a, __b) __arm_vmovntq_u32(__a, __b) -#define vmovnbq_u32(__a, __b) __arm_vmovnbq_u32(__a, __b) #define vmlaldavq_u32(__a, __b) __arm_vmlaldavq_u32(__a, __b) -#define vqmovuntq_s32(__a, __b) __arm_vqmovuntq_s32(__a, __b) -#define vqmovunbq_s32(__a, __b) __arm_vqmovunbq_s32(__a, __b) #define vshlltq_n_u16(__a, __imm) __arm_vshlltq_n_u16(__a, __imm) #define vshllbq_n_u16(__a, __imm) __arm_vshllbq_n_u16(__a, __imm) #define vbicq_n_u32(__a, __imm) __arm_vbicq_n_u32(__a, __imm) @@ -731,15 +703,11 @@ #define vcmpgeq_f32(__a, __b) __arm_vcmpgeq_f32(__a, __b) #define vcmpeqq_n_f32(__a, __b) __arm_vcmpeqq_n_f32(__a, __b) #define vcmpeqq_f32(__a, __b) 
__arm_vcmpeqq_f32(__a, __b) -#define vqmovntq_s32(__a, __b) __arm_vqmovntq_s32(__a, __b) -#define vqmovnbq_s32(__a, __b) __arm_vqmovnbq_s32(__a, __b) #define vqdmulltq_s32(__a, __b) __arm_vqdmulltq_s32(__a, __b) #define vqdmulltq_n_s32(__a, __b) __arm_vqdmulltq_n_s32(__a, __b) #define vqdmullbq_s32(__a, __b) __arm_vqdmullbq_s32(__a, __b) #define vqdmullbq_n_s32(__a, __b) __arm_vqdmullbq_n_s32(__a, __b) #define vornq_f32(__a, __b) __arm_vornq_f32(__a, __b) -#define vmovntq_s32(__a, __b) __arm_vmovntq_s32(__a, __b) -#define vmovnbq_s32(__a, __b) __arm_vmovnbq_s32(__a, __b) #define vmlsldavxq_s32(__a, __b) __arm_vmlsldavxq_s32(__a, __b) #define vmlsldavq_s32(__a, __b) __arm_vmlsldavq_s32(__a, __b) #define vmlaldavxq_s32(__a, __b) __arm_vmlaldavxq_s32(__a, __b) @@ -1056,11 +1024,7 @@ #define vmlsldavxq_p_s16(__a, __b, __p) __arm_vmlsldavxq_p_s16(__a, __b, __p) #define vmovlbq_m_s8(__inactive, __a, __p) __arm_vmovlbq_m_s8(__inactive, __a, __p) #define vmovltq_m_s8(__inactive, __a, __p) __arm_vmovltq_m_s8(__inactive, __a, __p) -#define vmovnbq_m_s16(__a, __b, __p) __arm_vmovnbq_m_s16(__a, __b, __p) -#define vmovntq_m_s16(__a, __b, __p) __arm_vmovntq_m_s16(__a, __b, __p) #define vpselq_f16(__a, __b, __p) __arm_vpselq_f16(__a, __b, __p) -#define vqmovnbq_m_s16(__a, __b, __p) __arm_vqmovnbq_m_s16(__a, __b, __p) -#define vqmovntq_m_s16(__a, __b, __p) __arm_vqmovntq_m_s16(__a, __b, __p) #define vrev32q_m_s8(__inactive, __a, __p) __arm_vrev32q_m_s8(__inactive, __a, __p) #define vrev64q_m_f16(__inactive, __a, __p) __arm_vrev64q_m_f16(__inactive, __a, __p) #define vcmpeqq_m_n_f16(__a, __b, __p) __arm_vcmpeqq_m_n_f16(__a, __b, __p) @@ -1079,16 +1043,10 @@ #define vcvtnq_m_u16_f16(__inactive, __a, __p) __arm_vcvtnq_m_u16_f16(__inactive, __a, __p) #define vcvtpq_m_u16_f16(__inactive, __a, __p) __arm_vcvtpq_m_u16_f16(__inactive, __a, __p) #define vcvtq_m_u16_f16(__inactive, __a, __p) __arm_vcvtq_m_u16_f16(__inactive, __a, __p) -#define vqmovunbq_m_s16(__a, __b, __p) __arm_vqmovunbq_m_s16(__a, __b, __p) -#define vqmovuntq_m_s16(__a, __b, __p) __arm_vqmovuntq_m_s16(__a, __b, __p) #define vmlaldavaq_u16(__a, __b, __c) __arm_vmlaldavaq_u16(__a, __b, __c) #define vmlaldavq_p_u16(__a, __b, __p) __arm_vmlaldavq_p_u16(__a, __b, __p) #define vmovlbq_m_u8(__inactive, __a, __p) __arm_vmovlbq_m_u8(__inactive, __a, __p) #define vmovltq_m_u8(__inactive, __a, __p) __arm_vmovltq_m_u8(__inactive, __a, __p) -#define vmovnbq_m_u16(__a, __b, __p) __arm_vmovnbq_m_u16(__a, __b, __p) -#define vmovntq_m_u16(__a, __b, __p) __arm_vmovntq_m_u16(__a, __b, __p) -#define vqmovnbq_m_u16(__a, __b, __p) __arm_vqmovnbq_m_u16(__a, __b, __p) -#define vqmovntq_m_u16(__a, __b, __p) __arm_vqmovntq_m_u16(__a, __b, __p) #define vrev32q_m_u8(__inactive, __a, __p) __arm_vrev32q_m_u8(__inactive, __a, __p) #define vmvnq_m_n_s32(__inactive, __imm, __p) __arm_vmvnq_m_n_s32(__inactive, __imm, __p) #define vcmlaq_f32(__a, __b, __c) __arm_vcmlaq_f32(__a, __b, __c) @@ -1120,11 +1078,7 @@ #define vmlsldavxq_p_s32(__a, __b, __p) __arm_vmlsldavxq_p_s32(__a, __b, __p) #define vmovlbq_m_s16(__inactive, __a, __p) __arm_vmovlbq_m_s16(__inactive, __a, __p) #define vmovltq_m_s16(__inactive, __a, __p) __arm_vmovltq_m_s16(__inactive, __a, __p) -#define vmovnbq_m_s32(__a, __b, __p) __arm_vmovnbq_m_s32(__a, __b, __p) -#define vmovntq_m_s32(__a, __b, __p) __arm_vmovntq_m_s32(__a, __b, __p) #define vpselq_f32(__a, __b, __p) __arm_vpselq_f32(__a, __b, __p) -#define vqmovnbq_m_s32(__a, __b, __p) __arm_vqmovnbq_m_s32(__a, __b, __p) -#define vqmovntq_m_s32(__a, __b, __p) 
__arm_vqmovntq_m_s32(__a, __b, __p) #define vrev32q_m_s16(__inactive, __a, __p) __arm_vrev32q_m_s16(__inactive, __a, __p) #define vrev64q_m_f32(__inactive, __a, __p) __arm_vrev64q_m_f32(__inactive, __a, __p) #define vcmpeqq_m_n_f32(__a, __b, __p) __arm_vcmpeqq_m_n_f32(__a, __b, __p) @@ -1143,16 +1097,10 @@ #define vcvtnq_m_u32_f32(__inactive, __a, __p) __arm_vcvtnq_m_u32_f32(__inactive, __a, __p) #define vcvtpq_m_u32_f32(__inactive, __a, __p) __arm_vcvtpq_m_u32_f32(__inactive, __a, __p) #define vcvtq_m_u32_f32(__inactive, __a, __p) __arm_vcvtq_m_u32_f32(__inactive, __a, __p) -#define vqmovunbq_m_s32(__a, __b, __p) __arm_vqmovunbq_m_s32(__a, __b, __p) -#define vqmovuntq_m_s32(__a, __b, __p) __arm_vqmovuntq_m_s32(__a, __b, __p) #define vmlaldavaq_u32(__a, __b, __c) __arm_vmlaldavaq_u32(__a, __b, __c) #define vmlaldavq_p_u32(__a, __b, __p) __arm_vmlaldavq_p_u32(__a, __b, __p) #define vmovlbq_m_u16(__inactive, __a, __p) __arm_vmovlbq_m_u16(__inactive, __a, __p) #define vmovltq_m_u16(__inactive, __a, __p) __arm_vmovltq_m_u16(__inactive, __a, __p) -#define vmovnbq_m_u32(__a, __b, __p) __arm_vmovnbq_m_u32(__a, __b, __p) -#define vmovntq_m_u32(__a, __b, __p) __arm_vmovntq_m_u32(__a, __b, __p) -#define vqmovnbq_m_u32(__a, __b, __p) __arm_vqmovnbq_m_u32(__a, __b, __p) -#define vqmovntq_m_u32(__a, __b, __p) __arm_vqmovntq_m_u32(__a, __b, __p) #define vrev32q_m_u16(__inactive, __a, __p) __arm_vrev32q_m_u16(__inactive, __a, __p) #define vsriq_m_n_s8(__a, __b, __imm, __p) __arm_vsriq_m_n_s8(__a, __b, __imm, __p) #define vcvtq_m_n_f16_u16(__inactive, __a, __imm6, __p) __arm_vcvtq_m_n_f16_u16(__inactive, __a, __imm6, __p) @@ -3485,20 +3433,6 @@ __arm_vaddvaq_s32 (int32_t __a, int32x4_t __b) return __builtin_mve_vaddvaq_sv4si (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_u16 (uint8x16_t __a, uint16x8_t __b) -{ - return __builtin_mve_vqmovntq_uv8hi (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_u16 (uint8x16_t __a, uint16x8_t __b) -{ - return __builtin_mve_vqmovnbq_uv8hi (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmulltq_poly_p8 (uint8x16_t __a, uint8x16_t __b) @@ -3513,20 +3447,6 @@ __arm_vmullbq_poly_p8 (uint8x16_t __a, uint8x16_t __b) return __builtin_mve_vmullbq_poly_pv16qi (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_u16 (uint8x16_t __a, uint16x8_t __b) -{ - return __builtin_mve_vmovntq_uv8hi (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_u16 (uint8x16_t __a, uint16x8_t __b) -{ - return __builtin_mve_vmovnbq_uv8hi (__a, __b); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavq_u16 (uint16x8_t __a, uint16x8_t __b) @@ -3534,20 +3454,6 @@ __arm_vmlaldavq_u16 (uint16x8_t __a, uint16x8_t __b) return __builtin_mve_vmlaldavq_uv8hi (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq_s16 (uint8x16_t __a, int16x8_t __b) -{ - return __builtin_mve_vqmovuntq_sv8hi (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vqmovunbq_s16 (uint8x16_t __a, int16x8_t __b) -{ - return __builtin_mve_vqmovunbq_sv8hi (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlltq_n_u8 (uint8x16_t __a, const int __imm) @@ -3569,20 +3475,6 @@ __arm_vbicq_n_u16 (uint16x8_t __a, const int __imm) return __builtin_mve_vbicq_n_uv8hi (__a, __imm); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_s16 (int8x16_t __a, int16x8_t __b) -{ - return __builtin_mve_vqmovntq_sv8hi (__a, __b); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_s16 (int8x16_t __a, int16x8_t __b) -{ - return __builtin_mve_vqmovnbq_sv8hi (__a, __b); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqdmulltq_s16 (int16x8_t __a, int16x8_t __b) @@ -3611,20 +3503,6 @@ __arm_vqdmullbq_n_s16 (int16x8_t __a, int16_t __b) return __builtin_mve_vqdmullbq_n_sv8hi (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_s16 (int8x16_t __a, int16x8_t __b) -{ - return __builtin_mve_vmovntq_sv8hi (__a, __b); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_s16 (int8x16_t __a, int16x8_t __b) -{ - return __builtin_mve_vmovnbq_sv8hi (__a, __b); -} - __extension__ extern __inline int64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlsldavxq_s16 (int16x8_t __a, int16x8_t __b) @@ -3674,20 +3552,6 @@ __arm_vbicq_n_s16 (int16x8_t __a, const int __imm) return __builtin_mve_vbicq_n_sv8hi (__a, __imm); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_u32 (uint16x8_t __a, uint32x4_t __b) -{ - return __builtin_mve_vqmovntq_uv4si (__a, __b); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_u32 (uint16x8_t __a, uint32x4_t __b) -{ - return __builtin_mve_vqmovnbq_uv4si (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmulltq_poly_p16 (uint16x8_t __a, uint16x8_t __b) @@ -3702,20 +3566,6 @@ __arm_vmullbq_poly_p16 (uint16x8_t __a, uint16x8_t __b) return __builtin_mve_vmullbq_poly_pv8hi (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_u32 (uint16x8_t __a, uint32x4_t __b) -{ - return __builtin_mve_vmovntq_uv4si (__a, __b); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_u32 (uint16x8_t __a, uint32x4_t __b) -{ - return __builtin_mve_vmovnbq_uv4si (__a, __b); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavq_u32 (uint32x4_t __a, uint32x4_t __b) @@ -3723,20 +3573,6 @@ __arm_vmlaldavq_u32 (uint32x4_t __a, uint32x4_t __b) return __builtin_mve_vmlaldavq_uv4si (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq_s32 (uint16x8_t __a, int32x4_t __b) -{ - return __builtin_mve_vqmovuntq_sv4si (__a, __b); -} - -__extension__ extern 
__inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq_s32 (uint16x8_t __a, int32x4_t __b) -{ - return __builtin_mve_vqmovunbq_sv4si (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlltq_n_u16 (uint16x8_t __a, const int __imm) @@ -3758,20 +3594,6 @@ __arm_vbicq_n_u32 (uint32x4_t __a, const int __imm) return __builtin_mve_vbicq_n_uv4si (__a, __imm); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_s32 (int16x8_t __a, int32x4_t __b) -{ - return __builtin_mve_vqmovntq_sv4si (__a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_s32 (int16x8_t __a, int32x4_t __b) -{ - return __builtin_mve_vqmovnbq_sv4si (__a, __b); -} - __extension__ extern __inline int64x2_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqdmulltq_s32 (int32x4_t __a, int32x4_t __b) @@ -3800,20 +3622,6 @@ __arm_vqdmullbq_n_s32 (int32x4_t __a, int32_t __b) return __builtin_mve_vqdmullbq_n_sv4si (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_s32 (int16x8_t __a, int32x4_t __b) -{ - return __builtin_mve_vmovntq_sv4si (__a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_s32 (int16x8_t __a, int32x4_t __b) -{ - return __builtin_mve_vmovnbq_sv4si (__a, __b); -} - __extension__ extern __inline int64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlsldavxq_s32 (int32x4_t __a, int32x4_t __b) @@ -5681,34 +5489,6 @@ __arm_vmovltq_m_s8 (int16x8_t __inactive, int8x16_t __a, mve_pred16_t __p) return __builtin_mve_vmovltq_m_sv16qi (__inactive, __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m_s16 (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovnbq_m_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m_s16 (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovntq_m_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m_s16 (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovnbq_m_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m_s16 (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovntq_m_sv8hi (__a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) @@ -5723,20 +5503,6 @@ __arm_vmvnq_m_n_u16 (uint16x8_t __inactive, const int __imm, mve_pred16_t __p) return __builtin_mve_vmvnq_m_n_uv8hi (__inactive, __imm, __p); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq_m_s16 (uint8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovunbq_m_sv8hi (__a, __b, __p); -} - 
-__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq_m_s16 (uint8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovuntq_m_sv8hi (__a, __b, __p); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavaq_u16 (uint64_t __a, uint16x8_t __b, uint16x8_t __c) @@ -5765,34 +5531,6 @@ __arm_vmovltq_m_u8 (uint16x8_t __inactive, uint8x16_t __a, mve_pred16_t __p) return __builtin_mve_vmovltq_m_uv16qi (__inactive, __a, __p); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m_u16 (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovnbq_m_uv8hi (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m_u16 (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovntq_m_uv8hi (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m_u16 (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovnbq_m_uv8hi (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m_u16 (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovntq_m_uv8hi (__a, __b, __p); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m_u8 (uint8x16_t __inactive, uint8x16_t __a, mve_pred16_t __p) @@ -5877,34 +5615,6 @@ __arm_vmovltq_m_s16 (int32x4_t __inactive, int16x8_t __a, mve_pred16_t __p) return __builtin_mve_vmovltq_m_sv8hi (__inactive, __a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m_s32 (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovnbq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m_s32 (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovntq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m_s32 (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovnbq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m_s32 (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovntq_m_sv4si (__a, __b, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) @@ -5919,20 +5629,6 @@ __arm_vmvnq_m_n_u32 (uint32x4_t __inactive, const int __imm, mve_pred16_t __p) return __builtin_mve_vmvnq_m_n_uv4si (__inactive, __imm, __p); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq_m_s32 (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovunbq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t 
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq_m_s32 (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovuntq_m_sv4si (__a, __b, __p); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavaq_u32 (uint64_t __a, uint32x4_t __b, uint32x4_t __c) @@ -5961,34 +5657,6 @@ __arm_vmovltq_m_u16 (uint32x4_t __inactive, uint16x8_t __a, mve_pred16_t __p) return __builtin_mve_vmovltq_m_uv8hi (__inactive, __a, __p); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m_u32 (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovnbq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m_u32 (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmovntq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m_u32 (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovnbq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m_u32 (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vqmovntq_m_uv4si (__a, __b, __p); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m_u16 (uint16x8_t __inactive, uint16x8_t __a, mve_pred16_t __p) @@ -14366,20 +14034,6 @@ __arm_vaddvaq (int32_t __a, int32x4_t __b) return __arm_vaddvaq_s32 (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq (uint8x16_t __a, uint16x8_t __b) -{ - return __arm_vqmovntq_u16 (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq (uint8x16_t __a, uint16x8_t __b) -{ - return __arm_vqmovnbq_u16 (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmulltq_poly (uint8x16_t __a, uint8x16_t __b) @@ -14394,20 +14048,6 @@ __arm_vmullbq_poly (uint8x16_t __a, uint8x16_t __b) return __arm_vmullbq_poly_p8 (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq (uint8x16_t __a, uint16x8_t __b) -{ - return __arm_vmovntq_u16 (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq (uint8x16_t __a, uint16x8_t __b) -{ - return __arm_vmovnbq_u16 (__a, __b); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavq (uint16x8_t __a, uint16x8_t __b) @@ -14415,20 +14055,6 @@ __arm_vmlaldavq (uint16x8_t __a, uint16x8_t __b) return __arm_vmlaldavq_u16 (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq (uint8x16_t __a, int16x8_t __b) -{ - return __arm_vqmovuntq_s16 (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq 
(uint8x16_t __a, int16x8_t __b) -{ - return __arm_vqmovunbq_s16 (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlltq (uint8x16_t __a, const int __imm) @@ -14450,20 +14076,6 @@ __arm_vbicq (uint16x8_t __a, const int __imm) return __arm_vbicq_n_u16 (__a, __imm); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq (int8x16_t __a, int16x8_t __b) -{ - return __arm_vqmovntq_s16 (__a, __b); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq (int8x16_t __a, int16x8_t __b) -{ - return __arm_vqmovnbq_s16 (__a, __b); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqdmulltq (int16x8_t __a, int16x8_t __b) @@ -14492,20 +14104,6 @@ __arm_vqdmullbq (int16x8_t __a, int16_t __b) return __arm_vqdmullbq_n_s16 (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq (int8x16_t __a, int16x8_t __b) -{ - return __arm_vmovntq_s16 (__a, __b); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq (int8x16_t __a, int16x8_t __b) -{ - return __arm_vmovnbq_s16 (__a, __b); -} - __extension__ extern __inline int64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlsldavxq (int16x8_t __a, int16x8_t __b) @@ -14555,20 +14153,6 @@ __arm_vbicq (int16x8_t __a, const int __imm) return __arm_vbicq_n_s16 (__a, __imm); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq (uint16x8_t __a, uint32x4_t __b) -{ - return __arm_vqmovntq_u32 (__a, __b); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq (uint16x8_t __a, uint32x4_t __b) -{ - return __arm_vqmovnbq_u32 (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmulltq_poly (uint16x8_t __a, uint16x8_t __b) @@ -14583,20 +14167,6 @@ __arm_vmullbq_poly (uint16x8_t __a, uint16x8_t __b) return __arm_vmullbq_poly_p16 (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq (uint16x8_t __a, uint32x4_t __b) -{ - return __arm_vmovntq_u32 (__a, __b); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq (uint16x8_t __a, uint32x4_t __b) -{ - return __arm_vmovnbq_u32 (__a, __b); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavq (uint32x4_t __a, uint32x4_t __b) @@ -14604,20 +14174,6 @@ __arm_vmlaldavq (uint32x4_t __a, uint32x4_t __b) return __arm_vmlaldavq_u32 (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq (uint16x8_t __a, int32x4_t __b) -{ - return __arm_vqmovuntq_s32 (__a, __b); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq (uint16x8_t __a, int32x4_t __b) -{ - return __arm_vqmovunbq_s32 (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ 
((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlltq (uint16x8_t __a, const int __imm) @@ -14639,20 +14195,6 @@ __arm_vbicq (uint32x4_t __a, const int __imm) return __arm_vbicq_n_u32 (__a, __imm); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq (int16x8_t __a, int32x4_t __b) -{ - return __arm_vqmovntq_s32 (__a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq (int16x8_t __a, int32x4_t __b) -{ - return __arm_vqmovnbq_s32 (__a, __b); -} - __extension__ extern __inline int64x2_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqdmulltq (int32x4_t __a, int32x4_t __b) @@ -14681,20 +14223,6 @@ __arm_vqdmullbq (int32x4_t __a, int32_t __b) return __arm_vqdmullbq_n_s32 (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq (int16x8_t __a, int32x4_t __b) -{ - return __arm_vmovntq_s32 (__a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq (int16x8_t __a, int32x4_t __b) -{ - return __arm_vmovnbq_s32 (__a, __b); -} - __extension__ extern __inline int64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlsldavxq (int32x4_t __a, int32x4_t __b) @@ -16522,34 +16050,6 @@ __arm_vmovltq_m (int16x8_t __inactive, int8x16_t __a, mve_pred16_t __p) return __arm_vmovltq_m_s8 (__inactive, __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmovnbq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmovntq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vqmovnbq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m (int8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vqmovntq_m_s16 (__a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) @@ -16564,20 +16064,6 @@ __arm_vmvnq_m (uint16x8_t __inactive, const int __imm, mve_pred16_t __p) return __arm_vmvnq_m_n_u16 (__inactive, __imm, __p); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq_m (uint8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vqmovunbq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq_m (uint8x16_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vqmovuntq_m_s16 (__a, __b, __p); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavaq (uint64_t __a, uint16x8_t __b, uint16x8_t __c) @@ -16606,34 +16092,6 @@ __arm_vmovltq_m (uint16x8_t 
__inactive, uint8x16_t __a, mve_pred16_t __p) return __arm_vmovltq_m_u8 (__inactive, __a, __p); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmovnbq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmovntq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vqmovnbq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m (uint8x16_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vqmovntq_m_u16 (__a, __b, __p); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m (uint8x16_t __inactive, uint8x16_t __a, mve_pred16_t __p) @@ -16718,34 +16176,6 @@ __arm_vmovltq_m (int32x4_t __inactive, int16x8_t __a, mve_pred16_t __p) return __arm_vmovltq_m_s16 (__inactive, __a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmovnbq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmovntq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vqmovnbq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vqmovntq_m_s32 (__a, __b, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) @@ -16760,20 +16190,6 @@ __arm_vmvnq_m (uint32x4_t __inactive, const int __imm, mve_pred16_t __p) return __arm_vmvnq_m_n_u32 (__inactive, __imm, __p); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovunbq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vqmovunbq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovuntq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vqmovuntq_m_s32 (__a, __b, __p); -} - __extension__ extern __inline uint64_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmlaldavaq (uint64_t __a, uint32x4_t __b, uint32x4_t __c) @@ -16802,34 +16218,6 @@ __arm_vmovltq_m (uint32x4_t __inactive, uint16x8_t __a, mve_pred16_t __p) return __arm_vmovltq_m_u16 (__inactive, __a, __p); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovnbq_m 
(uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmovnbq_m_u32 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmovntq_m (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmovntq_m_u32 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovnbq_m (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vqmovnbq_m_u32 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqmovntq_m (uint16x8_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vqmovntq_m_u32 (__a, __b, __p); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev32q_m (uint16x8_t __inactive, uint16x8_t __a, mve_pred16_t __p) @@ -23169,28 +22557,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmlaldavxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmlaldavxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vqmovuntq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovuntq_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovuntq_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t)));}) - -#define __arm_vqmovntq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovntq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovntq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovntq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovntq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - -#define __arm_vqmovnbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovnbq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovnbq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovnbq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovnbq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vqdmulltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int 
(*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -23199,12 +22565,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqdmulltq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqdmulltq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vqmovunbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovunbq_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovunbq_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t)));}) - #define __arm_vqdmullbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -23263,22 +22623,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vmovntq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovntq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovntq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovntq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovntq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - -#define __arm_vmovnbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovnbq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovnbq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovnbq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovnbq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vmullbq_int(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -23465,22 +22809,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: __arm_vmovlbq_m_u8 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: __arm_vmovlbq_m_u16 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint16x8_t), 
p2));}) -#define __arm_vmovnbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovnbq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovnbq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovnbq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovnbq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - -#define __arm_vmovntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovntq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovntq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovntq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovntq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vmovltq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -23785,34 +23113,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2));}) -#define __arm_vqmovnbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovnbq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovnbq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovnbq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovnbq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - -#define __arm_vqmovntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovntq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovntq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int 
(*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovntq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovntq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - -#define __arm_vqmovunbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovunbq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovunbq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - -#define __arm_vqmovuntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovuntq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovuntq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vcmpgeq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -24655,22 +23955,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)));}) -#define __arm_vqmovntq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovntq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovntq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovntq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovntq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - -#define __arm_vqmovnbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovnbq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovnbq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovnbq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovnbq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vmulltq_poly(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( 
(int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -24683,34 +23967,12 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmullbq_poly_p8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmullbq_poly_p16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)));}) -#define __arm_vmovntq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovntq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovntq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovntq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovntq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - -#define __arm_vmovnbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovnbq_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovnbq_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovnbq_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovnbq_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vmlaldavxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmlaldavxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmlaldavxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vqmovuntq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovuntq_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovuntq_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t)));}) - #define __arm_vshlltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vshlltq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ @@ -24725,12 +23987,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshllbq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), p1), \ int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshllbq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1));}) -#define __arm_vqmovunbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( 
(int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovunbq_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovunbq_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t)));}) - #define __arm_vqdmulltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -25084,22 +24340,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: __arm_vmovlbq_m_u8 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: __arm_vmovlbq_m_u16 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint16x8_t), p2));}) -#define __arm_vmovnbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovnbq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovnbq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovnbq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovnbq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - -#define __arm_vmovntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vmovntq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vmovntq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vmovntq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vmovntq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vrev32q_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -25114,28 +24354,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2));}) -#define __arm_vqmovuntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovuntq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovuntq_m_s32 
(__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - -#define __arm_vqmovntq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovntq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovntq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovntq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovntq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - -#define __arm_vqmovnbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovnbq_m_s16 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovnbq_m_s32 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: __arm_vqmovnbq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: __arm_vqmovnbq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vmovltq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -25144,12 +24362,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: __arm_vmovltq_m_u8 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: __arm_vmovltq_m_u16 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint16x8_t), p2));}) -#define __arm_vqmovunbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: __arm_vqmovunbq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: __arm_vqmovunbq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int32x4_t), p2));}) - #define __arm_vabavq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ From patchwork Fri May 5 16:49:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 68847 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 39402385357B for ; Fri, 5 May 2023 16:52:05 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 39402385357B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1683305525; 
Subject: [PATCH 08/10] arm: [MVE intrinsics] add binary_widen_n shape
Date: Fri, 5 May 2023 18:49:04 +0200
Message-ID: <20230505164906.596219-8-christophe.lyon@arm.com>
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
From: Christophe Lyon

This patch adds the binary_widen_n shape description.

2022-09-08  Christophe Lyon

	gcc/
	* config/arm/arm-mve-builtins-shapes.cc (binary_widen_n): New.
	* config/arm/arm-mve-builtins-shapes.h (binary_widen_n): New.
---
 gcc/config/arm/arm-mve-builtins-shapes.cc | 53 +++++++++++++++++++++++
 gcc/config/arm/arm-mve-builtins-shapes.h  |  1 +
 2 files changed, 54 insertions(+)

diff --git a/gcc/config/arm/arm-mve-builtins-shapes.cc b/gcc/config/arm/arm-mve-builtins-shapes.cc
index e26604510a2..1d43b8871bf 100644
--- a/gcc/config/arm/arm-mve-builtins-shapes.cc
+++ b/gcc/config/arm/arm-mve-builtins-shapes.cc
@@ -821,6 +821,59 @@ struct binary_rshift_narrow_unsigned_def : public overloaded_base<0>
 };
 SHAPE (binary_rshift_narrow_unsigned)
 
+/* _t vfoo[_n_t0](_t, const int)
+
+   Check that 'imm' is in the [1..#bits] range.
+
+   Example: vshllbq.
+   int16x8_t [__arm_]vshllbq[_n_s8](int8x16_t a, const int imm)
+   int16x8_t [__arm_]vshllbq_m[_n_s8](int16x8_t inactive, int8x16_t a, const int imm, mve_pred16_t p)
+   int16x8_t [__arm_]vshllbq_x[_n_s8](int8x16_t a, const int imm, mve_pred16_t p)  */
+struct binary_widen_n_def : public overloaded_base<0>
+{
+  void
+  build (function_builder &b, const function_group_info &group,
+	 bool preserve_user_namespace) const override
+  {
+    b.add_overloaded_functions (group, MODE_n, preserve_user_namespace);
+    build_all (b, "vw0,v0,s0", group, MODE_n, preserve_user_namespace);
+  }
+
+  tree
+  resolve (function_resolver &r) const override
+  {
+    unsigned int i, nargs;
+    type_suffix_index type;
+    tree res;
+    if (!r.check_gp_argument (2, i, nargs)
+	|| (type = r.infer_vector_type (i - 1)) == NUM_TYPE_SUFFIXES
+	|| !r.require_integer_immediate (i))
+      return error_mark_node;
+
+    type_suffix_index wide_suffix
+      = find_type_suffix (type_suffixes[type].tclass,
+			  type_suffixes[type].element_bits * 2);
+
+    /* Check the inactive argument has the wide type.  */
+    if (((r.pred == PRED_m) && (r.infer_vector_type (0) == wide_suffix))
+	|| r.pred == PRED_none
+	|| r.pred == PRED_x)
+      if ((res = r.lookup_form (r.mode_suffix_id, type)))
+	return res;
+
+    return r.report_no_such_form (type);
+  }
+
+  bool
+  check (function_checker &c) const override
+  {
+    unsigned int bits = c.type_suffix (0).element_bits;
+    return c.require_immediate_range (1, 1, bits);
+  }
+
+};
+SHAPE (binary_widen_n)
+
 /* xN_t vfoo[_t0](uint64_t, uint64_t)
 
    where there are N arguments in total.
diff --git a/gcc/config/arm/arm-mve-builtins-shapes.h b/gcc/config/arm/arm-mve-builtins-shapes.h
index 825e1bb2a3c..dd2597dc6f5 100644
--- a/gcc/config/arm/arm-mve-builtins-shapes.h
+++ b/gcc/config/arm/arm-mve-builtins-shapes.h
@@ -45,6 +45,7 @@ namespace arm_mve
   extern const function_shape *const binary_rshift;
   extern const function_shape *const binary_rshift_narrow;
   extern const function_shape *const binary_rshift_narrow_unsigned;
+  extern const function_shape *const binary_widen_n;
   extern const function_shape *const create;
   extern const function_shape *const inherent;
   extern const function_shape *const unary;
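For reference, here is a minimal usage sketch of the overloads this shape validates. It is an illustration added for this write-up, not part of the patch; it assumes an MVE-enabled GCC invocation (for example -march=armv8.1-m.main+mve -mfloat-abi=hard) and the short intrinsic names that arm_mve.h provides when __ARM_MVE_PRESERVE_USER_NAMESPACE is not defined.

#include <arm_mve.h>

/* Overloaded form: resolves to vshllbq_n_s8.  The shape's check() hook
   restricts the immediate to [1..8] because the input elements are
   8 bits wide.  */
int16x8_t
widen_bottom (int8x16_t a)
{
  return vshllbq (a, 3);
}

/* Predicated _m form: 'inactive' already has the widened (16-bit)
   element type, which is what the resolve() hook checks for PRED_m.  */
int16x8_t
widen_bottom_masked (int16x8_t inactive, int8x16_t a, mve_pred16_t p)
{
  return vshllbq_m (inactive, a, 3, p);
}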
From patchwork Fri May 5 16:49:05 2023
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 68844
Subject: [PATCH 09/10] arm: [MVE intrinsics] factorize vshllbq vshlltq
Date: Fri, 5 May 2023 18:49:05 +0200
Message-ID: <20230505164906.596219-9-christophe.lyon@arm.com>
In-Reply-To: <20230505164906.596219-1-christophe.lyon@arm.com>
References: <20230505164906.596219-1-christophe.lyon@arm.com>
From: Christophe Lyon

Factorize vshllbq vshlltq so that they use the same pattern.

2022-09-08  Christophe Lyon

	gcc/
	* config/arm/iterators.md (mve_insn): Add vshllb, vshllt.
	(VSHLLBQ_N, VSHLLTQ_N): Remove.
	(VSHLLxQ_N): New.
	(VSHLLBQ_M_N, VSHLLTQ_M_N): Remove.
	(VSHLLxQ_M_N): New.
	* config/arm/mve.md (mve_vshllbq_n_) (mve_vshlltq_n_): Merge into ...
	(@mve_q_n_): ... this.
	(mve_vshllbq_m_n_, mve_vshlltq_m_n_): Merge into ...
	(@mve_q_m_n_): ... this.
--- gcc/config/arm/iterators.md | 10 +++++--- gcc/config/arm/mve.md | 50 ++++++++----------------------------- 2 files changed, 16 insertions(+), 44 deletions(-) diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md index 20735284979..e82ff0d5d9b 100644 --- a/gcc/config/arm/iterators.md +++ b/gcc/config/arm/iterators.md @@ -731,6 +731,10 @@ (define_int_attr mve_insn [ (VRSHRNTQ_N_S "vrshrnt") (VRSHRNTQ_N_U "vrshrnt") (VRSHRQ_M_N_S "vrshr") (VRSHRQ_M_N_U "vrshr") (VRSHRQ_N_S "vrshr") (VRSHRQ_N_U "vrshr") + (VSHLLBQ_M_N_S "vshllb") (VSHLLBQ_M_N_U "vshllb") + (VSHLLBQ_N_S "vshllb") (VSHLLBQ_N_U "vshllb") + (VSHLLTQ_M_N_S "vshllt") (VSHLLTQ_M_N_U "vshllt") + (VSHLLTQ_N_S "vshllt") (VSHLLTQ_N_U "vshllt") (VSHLQ_M_N_S "vshl") (VSHLQ_M_N_U "vshl") (VSHLQ_M_R_S "vshl") (VSHLQ_M_R_U "vshl") (VSHLQ_M_S "vshl") (VSHLQ_M_U "vshl") @@ -2133,8 +2137,7 @@ (define_int_iterator VMOVNTQ [VMOVNTQ_S VMOVNTQ_U]) (define_int_iterator VORRQ_N [VORRQ_N_U VORRQ_N_S]) (define_int_iterator VQMOVNBQ [VQMOVNBQ_U VQMOVNBQ_S]) (define_int_iterator VQMOVNTQ [VQMOVNTQ_U VQMOVNTQ_S]) -(define_int_iterator VSHLLBQ_N [VSHLLBQ_N_S VSHLLBQ_N_U]) -(define_int_iterator VSHLLTQ_N [VSHLLTQ_N_U VSHLLTQ_N_S]) +(define_int_iterator VSHLLxQ_N [VSHLLBQ_N_S VSHLLBQ_N_U VSHLLTQ_N_S VSHLLTQ_N_U]) (define_int_iterator VRMLALDAVHQ [VRMLALDAVHQ_U VRMLALDAVHQ_S]) (define_int_iterator VBICQ_M_N [VBICQ_M_N_S VBICQ_M_N_U]) (define_int_iterator VCVTAQ_M [VCVTAQ_M_S VCVTAQ_M_U]) @@ -2250,8 +2253,7 @@ (define_int_iterator VQSHRNBQ_M_N [VQSHRNBQ_M_N_U VQSHRNBQ_M_N_S]) (define_int_iterator VQSHRNTQ_M_N [VQSHRNTQ_M_N_S VQSHRNTQ_M_N_U]) (define_int_iterator VRSHRNBQ_M_N [VRSHRNBQ_M_N_U VRSHRNBQ_M_N_S]) (define_int_iterator VRSHRNTQ_M_N [VRSHRNTQ_M_N_U VRSHRNTQ_M_N_S]) -(define_int_iterator VSHLLBQ_M_N [VSHLLBQ_M_N_U VSHLLBQ_M_N_S]) -(define_int_iterator VSHLLTQ_M_N [VSHLLTQ_M_N_U VSHLLTQ_M_N_S]) +(define_int_iterator VSHLLxQ_M_N [VSHLLBQ_M_N_U VSHLLBQ_M_N_S VSHLLTQ_M_N_U VSHLLTQ_M_N_S]) (define_int_iterator VSHRNBQ_M_N [VSHRNBQ_M_N_S VSHRNBQ_M_N_U]) (define_int_iterator VSHRNTQ_M_N [VSHRNTQ_M_N_S VSHRNTQ_M_N_U]) (define_int_iterator VSTRWSBQ [VSTRWQSB_S VSTRWQSB_U]) diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md index 2273078807b..98728e6f3ef 100644 --- a/gcc/config/arm/mve.md +++ b/gcc/config/arm/mve.md @@ -1830,32 +1830,18 @@ (define_insn "mve_vrmlsldavhxq_sv4si" ]) ;; -;; [vshllbq_n_s, vshllbq_n_u]) +;; [vshllbq_n_s, vshllbq_n_u] +;; [vshlltq_n_u, vshlltq_n_s] ;; -(define_insn "mve_vshllbq_n_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand:MVE_3 1 "s_register_operand" "w") - (match_operand:SI 2 "immediate_operand" "i")] - VSHLLBQ_N)) - ] - "TARGET_HAVE_MVE" - "vshllb.%#\t%q0, %q1, %2" - [(set_attr "type" "mve_move") -]) - -;; -;; [vshlltq_n_u, vshlltq_n_s]) -;; -(define_insn "mve_vshlltq_n_" +(define_insn "@mve_q_n_" [ (set (match_operand: 0 "s_register_operand" "=w") (unspec: [(match_operand:MVE_3 1 "s_register_operand" "w") (match_operand:SI 2 "immediate_operand" "i")] - VSHLLTQ_N)) + VSHLLxQ_N)) ] "TARGET_HAVE_MVE" - "vshllt.%#\t%q0, %q1, %2" + ".%#\t%q0, %q1, %2" [(set_attr "type" "mve_move") ]) @@ -4410,36 +4396,20 @@ (define_insn "mve_vrmlaldavhaq_p_sv4si" (set_attr "length""8")]) ;; -;; [vshllbq_m_n_u, vshllbq_m_n_s]) -;; -(define_insn "mve_vshllbq_m_n_" - [ - (set (match_operand: 0 "s_register_operand" "=w") - (unspec: [(match_operand: 1 "s_register_operand" "0") - (match_operand:MVE_3 2 "s_register_operand" "w") - (match_operand:SI 3 "immediate_operand" "i") - (match_operand: 
4 "vpr_register_operand" "Up")] - VSHLLBQ_M_N)) - ] - "TARGET_HAVE_MVE" - "vpst\;vshllbt.%#\t%q0, %q2, %3" - [(set_attr "type" "mve_move") - (set_attr "length""8")]) - -;; -;; [vshlltq_m_n_u, vshlltq_m_n_s]) +;; [vshllbq_m_n_u, vshllbq_m_n_s] +;; [vshlltq_m_n_u, vshlltq_m_n_s] ;; -(define_insn "mve_vshlltq_m_n_" +(define_insn "@mve_q_m_n_" [ (set (match_operand: 0 "s_register_operand" "=w") (unspec: [(match_operand: 1 "s_register_operand" "0") (match_operand:MVE_3 2 "s_register_operand" "w") (match_operand:SI 3 "immediate_operand" "i") (match_operand: 4 "vpr_register_operand" "Up")] - VSHLLTQ_M_N)) + VSHLLxQ_M_N)) ] "TARGET_HAVE_MVE" - "vpst\;vshlltt.%#\t%q0, %q2, %3" + "vpst\;t.%#\t%q0, %q2, %3" [(set_attr "type" "mve_move") (set_attr "length""8")]) From patchwork Fri May 5 16:49:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Lyon X-Patchwork-Id: 68846 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id CE84B384F024 for ; Fri, 5 May 2023 16:51:31 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org CE84B384F024 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1683305491; bh=XdO7XiffX8z0bmTJwtoLkJHva8p2ZyRx09FQJA9AR4k=; h=To:CC:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=u8IbX+c18yeR4JkqfmL3qrVNB68jryHuSbwErZkytKiGIwgXK5pKTxP31uwF6ajMF bNSs0SOgPruODar7mwc4mC+37xxMrhdpeozGfZoY7LD2SeK7WHbOYHA0FQVgBDgD9X rno8FXO+4B0Z5PlHTN98war3yb635H81zzREVKls= X-Original-To: gcc-patches@gcc.gnu.org Delivered-To: gcc-patches@gcc.gnu.org Received: from EUR03-DBA-obe.outbound.protection.outlook.com (mail-dbaeur03on2054.outbound.protection.outlook.com [40.107.104.54]) by sourceware.org (Postfix) with ESMTPS id D49A93856240 for ; Fri, 5 May 2023 16:49:39 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org D49A93856240 Received: from DB6PR07CA0191.eurprd07.prod.outlook.com (2603:10a6:6:42::21) by AM8PR08MB6514.eurprd08.prod.outlook.com (2603:10a6:20b:36b::19) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May 2023 16:49:35 +0000 Received: from DBAEUR03FT039.eop-EUR03.prod.protection.outlook.com (2603:10a6:6:42:cafe::cf) by DB6PR07CA0191.outlook.office365.com (2603:10a6:6:42::21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.12 via Frontend Transport; Fri, 5 May 2023 16:49:35 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123) smtp.mailfrom=arm.com; dkim=pass (signature was verified) header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com; Received-SPF: Pass (protection.outlook.com: domain of arm.com designates 63.35.35.123 as permitted sender) receiver=protection.outlook.com; client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com; pr=C Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by DBAEUR03FT039.mail.protection.outlook.com (100.127.142.225) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.12 via Frontend Transport; Fri, 5 May 2023 16:49:35 +0000 Received: ("Tessian outbound 3570909035da:v136"); Fri, 05 May 2023 16:49:34 +0000 X-CheckRecipientChecked: true X-CR-MTA-CID: 
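As context for the rework below, a hypothetical usage sketch (not part of the patch): the overloaded vshllbq/vshlltq calls and their predicated _m/_x variants, currently dispatched by the _Generic machinery removed in this patch, are expected to resolve identically once the shared builtins framework provides the overloads. The target flags and function name are illustrative assumptions.

/* Hypothetical sketch: these calls should behave the same before and after
   the rework (assumes MVE-enabled flags such as -mcpu=cortex-m55).  */
#include <arm_mve.h>

uint16x8_t
demo (uint8x16_t a, uint16x8_t inactive, mve_pred16_t p)
{
  uint16x8_t b = vshllbq (a, 3);                 /* overloads to vshllbq_n_u8 */
  uint16x8_t t = vshlltq_m (inactive, a, 3, p);  /* predicated, merging form */
  uint16x8_t x = vshllbq_x (a, 3, p);            /* predicated, "don't care" form */
  return vaddq (vaddq (b, t), x);
}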
Implement vshllbq and vshlltq using the new MVE builtins framework.

2022-09-08  Christophe Lyon  <christophe.lyon@arm.com>

gcc/
	* config/arm/arm-mve-builtins-base.cc (vshllbq, vshlltq): New.
	* config/arm/arm-mve-builtins-base.def (vshllbq, vshlltq): New.
	* config/arm/arm-mve-builtins-base.h (vshllbq, vshlltq): New.
	* config/arm/arm_mve.h (vshlltq): Remove. (vshllbq): Remove. (vshllbq_m): Remove. (vshlltq_m): Remove. (vshllbq_x): Remove. (vshlltq_x): Remove. (vshlltq_n_u8): Remove. (vshllbq_n_u8): Remove. (vshlltq_n_s8): Remove. (vshllbq_n_s8): Remove. (vshlltq_n_u16): Remove. (vshllbq_n_u16): Remove. (vshlltq_n_s16): Remove. (vshllbq_n_s16): Remove. (vshllbq_m_n_s8): Remove. (vshllbq_m_n_s16): Remove. (vshllbq_m_n_u8): Remove. (vshllbq_m_n_u16): Remove. (vshlltq_m_n_s8): Remove. (vshlltq_m_n_s16): Remove. (vshlltq_m_n_u8): Remove. (vshlltq_m_n_u16): Remove. (vshllbq_x_n_s8): Remove. (vshllbq_x_n_s16): Remove. (vshllbq_x_n_u8): Remove. (vshllbq_x_n_u16): Remove. (vshlltq_x_n_s8): Remove. (vshlltq_x_n_s16): Remove. (vshlltq_x_n_u8): Remove. (vshlltq_x_n_u16): Remove. (__arm_vshlltq_n_u8): Remove. (__arm_vshllbq_n_u8): Remove. (__arm_vshlltq_n_s8): Remove. (__arm_vshllbq_n_s8): Remove. (__arm_vshlltq_n_u16): Remove. (__arm_vshllbq_n_u16): Remove. (__arm_vshlltq_n_s16): Remove. (__arm_vshllbq_n_s16): Remove. (__arm_vshllbq_m_n_s8): Remove. (__arm_vshllbq_m_n_s16): Remove. (__arm_vshllbq_m_n_u8): Remove. (__arm_vshllbq_m_n_u16): Remove. (__arm_vshlltq_m_n_s8): Remove. (__arm_vshlltq_m_n_s16): Remove. (__arm_vshlltq_m_n_u8): Remove. (__arm_vshlltq_m_n_u16): Remove. (__arm_vshllbq_x_n_s8): Remove. (__arm_vshllbq_x_n_s16): Remove. (__arm_vshllbq_x_n_u8): Remove.
(__arm_vshllbq_x_n_u16): Remove. (__arm_vshlltq_x_n_s8): Remove. (__arm_vshlltq_x_n_s16): Remove. (__arm_vshlltq_x_n_u8): Remove. (__arm_vshlltq_x_n_u16): Remove. (__arm_vshlltq): Remove. (__arm_vshllbq): Remove. (__arm_vshllbq_m): Remove. (__arm_vshlltq_m): Remove. (__arm_vshllbq_x): Remove. (__arm_vshlltq_x): Remove. --- gcc/config/arm/arm-mve-builtins-base.cc | 2 + gcc/config/arm/arm-mve-builtins-base.def | 2 + gcc/config/arm/arm-mve-builtins-base.h | 2 + gcc/config/arm/arm_mve.h | 424 ----------------------- 4 files changed, 6 insertions(+), 424 deletions(-) diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc index 1dae12b445b..aafd85b293d 100644 --- a/gcc/config/arm/arm-mve-builtins-base.cc +++ b/gcc/config/arm/arm-mve-builtins-base.cc @@ -263,6 +263,8 @@ FUNCTION_WITH_M_N_NO_F (vrshlq, VRSHLQ) FUNCTION_ONLY_N_NO_F (vrshrnbq, VRSHRNBQ) FUNCTION_ONLY_N_NO_F (vrshrntq, VRSHRNTQ) FUNCTION_ONLY_N_NO_F (vrshrq, VRSHRQ) +FUNCTION_ONLY_N_NO_F (vshllbq, VSHLLBQ) +FUNCTION_ONLY_N_NO_F (vshlltq, VSHLLTQ) FUNCTION_WITH_M_N_R (vshlq, VSHLQ) FUNCTION_ONLY_N_NO_F (vshrnbq, VSHRNBQ) FUNCTION_ONLY_N_NO_F (vshrntq, VSHRNTQ) diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def index f868614fb6b..78c7515b972 100644 --- a/gcc/config/arm/arm-mve-builtins-base.def +++ b/gcc/config/arm/arm-mve-builtins-base.def @@ -64,6 +64,8 @@ DEF_MVE_FUNCTION (vrshlq, binary_round_lshift, all_integer, mx_or_none) DEF_MVE_FUNCTION (vrshrnbq, binary_rshift_narrow, integer_16_32, m_or_none) DEF_MVE_FUNCTION (vrshrntq, binary_rshift_narrow, integer_16_32, m_or_none) DEF_MVE_FUNCTION (vrshrq, binary_rshift, all_integer, mx_or_none) +DEF_MVE_FUNCTION (vshllbq, binary_widen_n, integer_8_16, mx_or_none) +DEF_MVE_FUNCTION (vshlltq, binary_widen_n, integer_8_16, mx_or_none) DEF_MVE_FUNCTION (vshlq, binary_lshift, all_integer, mx_or_none) DEF_MVE_FUNCTION (vshlq, binary_lshift_r, all_integer, m_or_none) // "_r" forms do not support the "x" predicate DEF_MVE_FUNCTION (vshrnbq, binary_rshift_narrow, integer_16_32, m_or_none) diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h index f4960cbbea2..e5a83466512 100644 --- a/gcc/config/arm/arm-mve-builtins-base.h +++ b/gcc/config/arm/arm-mve-builtins-base.h @@ -74,6 +74,8 @@ extern const function_base *const vrshlq; extern const function_base *const vrshrnbq; extern const function_base *const vrshrntq; extern const function_base *const vrshrq; +extern const function_base *const vshllbq; +extern const function_base *const vshlltq; extern const function_base *const vshlq; extern const function_base *const vshrnbq; extern const function_base *const vshrntq; diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h index 97f0ef93ee9..8258ee0b802 100644 --- a/gcc/config/arm/arm_mve.h +++ b/gcc/config/arm/arm_mve.h @@ -85,8 +85,6 @@ #define vmulltq_poly(__a, __b) __arm_vmulltq_poly(__a, __b) #define vmullbq_poly(__a, __b) __arm_vmullbq_poly(__a, __b) #define vmlaldavq(__a, __b) __arm_vmlaldavq(__a, __b) -#define vshlltq(__a, __imm) __arm_vshlltq(__a, __imm) -#define vshllbq(__a, __imm) __arm_vshllbq(__a, __imm) #define vqdmulltq(__a, __b) __arm_vqdmulltq(__a, __b) #define vqdmullbq(__a, __b) __arm_vqdmullbq(__a, __b) #define vmlsldavxq(__a, __b) __arm_vmlsldavxq(__a, __b) @@ -208,8 +206,6 @@ #define vrmlaldavhaxq_p(__a, __b, __c, __p) __arm_vrmlaldavhaxq_p(__a, __b, __c, __p) #define vrmlsldavhaq_p(__a, __b, __c, __p) __arm_vrmlsldavhaq_p(__a, __b, __c, __p) 
#define vrmlsldavhaxq_p(__a, __b, __c, __p) __arm_vrmlsldavhaxq_p(__a, __b, __c, __p) -#define vshllbq_m(__inactive, __a, __imm, __p) __arm_vshllbq_m(__inactive, __a, __imm, __p) -#define vshlltq_m(__inactive, __a, __imm, __p) __arm_vshlltq_m(__inactive, __a, __imm, __p) #define vstrbq_scatter_offset(__base, __offset, __value) __arm_vstrbq_scatter_offset(__base, __offset, __value) #define vstrbq(__addr, __value) __arm_vstrbq(__addr, __value) #define vstrwq_scatter_base(__addr, __offset, __value) __arm_vstrwq_scatter_base(__addr, __offset, __value) @@ -300,8 +296,6 @@ #define vrev16q_x(__a, __p) __arm_vrev16q_x(__a, __p) #define vrev32q_x(__a, __p) __arm_vrev32q_x(__a, __p) #define vrev64q_x(__a, __p) __arm_vrev64q_x(__a, __p) -#define vshllbq_x(__a, __imm, __p) __arm_vshllbq_x(__a, __imm, __p) -#define vshlltq_x(__a, __imm, __p) __arm_vshlltq_x(__a, __imm, __p) #define vadciq(__a, __b, __carry_out) __arm_vadciq(__a, __b, __carry_out) #define vadciq_m(__inactive, __a, __b, __carry_out, __p) __arm_vadciq_m(__inactive, __a, __b, __carry_out, __p) #define vadcq(__a, __b, __carry) __arm_vadcq(__a, __b, __carry) @@ -643,8 +637,6 @@ #define vmulltq_poly_p8(__a, __b) __arm_vmulltq_poly_p8(__a, __b) #define vmullbq_poly_p8(__a, __b) __arm_vmullbq_poly_p8(__a, __b) #define vmlaldavq_u16(__a, __b) __arm_vmlaldavq_u16(__a, __b) -#define vshlltq_n_u8(__a, __imm) __arm_vshlltq_n_u8(__a, __imm) -#define vshllbq_n_u8(__a, __imm) __arm_vshllbq_n_u8(__a, __imm) #define vbicq_n_u16(__a, __imm) __arm_vbicq_n_u16(__a, __imm) #define vcmpneq_n_f16(__a, __b) __arm_vcmpneq_n_f16(__a, __b) #define vcmpneq_f16(__a, __b) __arm_vcmpneq_f16(__a, __b) @@ -682,14 +674,10 @@ #define vcaddq_rot90_f16(__a, __b) __arm_vcaddq_rot90_f16(__a, __b) #define vcaddq_rot270_f16(__a, __b) __arm_vcaddq_rot270_f16(__a, __b) #define vbicq_f16(__a, __b) __arm_vbicq_f16(__a, __b) -#define vshlltq_n_s8(__a, __imm) __arm_vshlltq_n_s8(__a, __imm) -#define vshllbq_n_s8(__a, __imm) __arm_vshllbq_n_s8(__a, __imm) #define vbicq_n_s16(__a, __imm) __arm_vbicq_n_s16(__a, __imm) #define vmulltq_poly_p16(__a, __b) __arm_vmulltq_poly_p16(__a, __b) #define vmullbq_poly_p16(__a, __b) __arm_vmullbq_poly_p16(__a, __b) #define vmlaldavq_u32(__a, __b) __arm_vmlaldavq_u32(__a, __b) -#define vshlltq_n_u16(__a, __imm) __arm_vshlltq_n_u16(__a, __imm) -#define vshllbq_n_u16(__a, __imm) __arm_vshllbq_n_u16(__a, __imm) #define vbicq_n_u32(__a, __imm) __arm_vbicq_n_u32(__a, __imm) #define vcmpneq_n_f32(__a, __b) __arm_vcmpneq_n_f32(__a, __b) #define vcmpneq_f32(__a, __b) __arm_vcmpneq_f32(__a, __b) @@ -727,8 +715,6 @@ #define vcaddq_rot90_f32(__a, __b) __arm_vcaddq_rot90_f32(__a, __b) #define vcaddq_rot270_f32(__a, __b) __arm_vcaddq_rot270_f32(__a, __b) #define vbicq_f32(__a, __b) __arm_vbicq_f32(__a, __b) -#define vshlltq_n_s16(__a, __imm) __arm_vshlltq_n_s16(__a, __imm) -#define vshllbq_n_s16(__a, __imm) __arm_vshllbq_n_s16(__a, __imm) #define vbicq_n_s32(__a, __imm) __arm_vbicq_n_s32(__a, __imm) #define vrmlaldavhq_u32(__a, __b) __arm_vrmlaldavhq_u32(__a, __b) #define vctp8q_m(__a, __p) __arm_vctp8q_m(__a, __p) @@ -1265,14 +1251,6 @@ #define vrmlaldavhaxq_p_s32(__a, __b, __c, __p) __arm_vrmlaldavhaxq_p_s32(__a, __b, __c, __p) #define vrmlsldavhaq_p_s32(__a, __b, __c, __p) __arm_vrmlsldavhaq_p_s32(__a, __b, __c, __p) #define vrmlsldavhaxq_p_s32(__a, __b, __c, __p) __arm_vrmlsldavhaxq_p_s32(__a, __b, __c, __p) -#define vshllbq_m_n_s8(__inactive, __a, __imm, __p) __arm_vshllbq_m_n_s8(__inactive, __a, __imm, __p) -#define vshllbq_m_n_s16(__inactive, __a, __imm, 
__p) __arm_vshllbq_m_n_s16(__inactive, __a, __imm, __p) -#define vshllbq_m_n_u8(__inactive, __a, __imm, __p) __arm_vshllbq_m_n_u8(__inactive, __a, __imm, __p) -#define vshllbq_m_n_u16(__inactive, __a, __imm, __p) __arm_vshllbq_m_n_u16(__inactive, __a, __imm, __p) -#define vshlltq_m_n_s8(__inactive, __a, __imm, __p) __arm_vshlltq_m_n_s8(__inactive, __a, __imm, __p) -#define vshlltq_m_n_s16(__inactive, __a, __imm, __p) __arm_vshlltq_m_n_s16(__inactive, __a, __imm, __p) -#define vshlltq_m_n_u8(__inactive, __a, __imm, __p) __arm_vshlltq_m_n_u8(__inactive, __a, __imm, __p) -#define vshlltq_m_n_u16(__inactive, __a, __imm, __p) __arm_vshlltq_m_n_u16(__inactive, __a, __imm, __p) #define vbicq_m_f32(__inactive, __a, __b, __p) __arm_vbicq_m_f32(__inactive, __a, __b, __p) #define vbicq_m_f16(__inactive, __a, __b, __p) __arm_vbicq_m_f16(__inactive, __a, __b, __p) #define vbrsrq_m_n_f32(__inactive, __a, __b, __p) __arm_vbrsrq_m_n_f32(__inactive, __a, __b, __p) @@ -1701,14 +1679,6 @@ #define vrev64q_x_u8(__a, __p) __arm_vrev64q_x_u8(__a, __p) #define vrev64q_x_u16(__a, __p) __arm_vrev64q_x_u16(__a, __p) #define vrev64q_x_u32(__a, __p) __arm_vrev64q_x_u32(__a, __p) -#define vshllbq_x_n_s8(__a, __imm, __p) __arm_vshllbq_x_n_s8(__a, __imm, __p) -#define vshllbq_x_n_s16(__a, __imm, __p) __arm_vshllbq_x_n_s16(__a, __imm, __p) -#define vshllbq_x_n_u8(__a, __imm, __p) __arm_vshllbq_x_n_u8(__a, __imm, __p) -#define vshllbq_x_n_u16(__a, __imm, __p) __arm_vshllbq_x_n_u16(__a, __imm, __p) -#define vshlltq_x_n_s8(__a, __imm, __p) __arm_vshlltq_x_n_s8(__a, __imm, __p) -#define vshlltq_x_n_s16(__a, __imm, __p) __arm_vshlltq_x_n_s16(__a, __imm, __p) -#define vshlltq_x_n_u8(__a, __imm, __p) __arm_vshlltq_x_n_u8(__a, __imm, __p) -#define vshlltq_x_n_u16(__a, __imm, __p) __arm_vshlltq_x_n_u16(__a, __imm, __p) #define vdupq_x_n_f16(__a, __p) __arm_vdupq_x_n_f16(__a, __p) #define vdupq_x_n_f32(__a, __p) __arm_vdupq_x_n_f32(__a, __p) #define vminnmq_x_f16(__a, __b, __p) __arm_vminnmq_x_f16(__a, __b, __p) @@ -3454,20 +3424,6 @@ __arm_vmlaldavq_u16 (uint16x8_t __a, uint16x8_t __b) return __builtin_mve_vmlaldavq_uv8hi (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_n_u8 (uint8x16_t __a, const int __imm) -{ - return __builtin_mve_vshlltq_n_uv16qi (__a, __imm); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_n_u8 (uint8x16_t __a, const int __imm) -{ - return __builtin_mve_vshllbq_n_uv16qi (__a, __imm); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_n_u16 (uint16x8_t __a, const int __imm) @@ -3531,20 +3487,6 @@ __arm_vmlaldavq_s16 (int16x8_t __a, int16x8_t __b) return __builtin_mve_vmlaldavq_sv8hi (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_n_s8 (int8x16_t __a, const int __imm) -{ - return __builtin_mve_vshlltq_n_sv16qi (__a, __imm); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_n_s8 (int8x16_t __a, const int __imm) -{ - return __builtin_mve_vshllbq_n_sv16qi (__a, __imm); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_n_s16 (int16x8_t __a, const int __imm) @@ -3573,20 +3515,6 @@ __arm_vmlaldavq_u32 (uint32x4_t __a, uint32x4_t 
__b) return __builtin_mve_vmlaldavq_uv4si (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_n_u16 (uint16x8_t __a, const int __imm) -{ - return __builtin_mve_vshlltq_n_uv8hi (__a, __imm); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_n_u16 (uint16x8_t __a, const int __imm) -{ - return __builtin_mve_vshllbq_n_uv8hi (__a, __imm); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_n_u32 (uint32x4_t __a, const int __imm) @@ -3650,20 +3578,6 @@ __arm_vmlaldavq_s32 (int32x4_t __a, int32x4_t __b) return __builtin_mve_vmlaldavq_sv4si (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_n_s16 (int16x8_t __a, const int __imm) -{ - return __builtin_mve_vshlltq_n_sv8hi (__a, __imm); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_n_s16 (int16x8_t __a, const int __imm) -{ - return __builtin_mve_vshllbq_n_sv8hi (__a, __imm); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_n_s32 (int32x4_t __a, const int __imm) @@ -6777,62 +6691,6 @@ __arm_vrmlsldavhaxq_p_s32 (int64_t __a, int32x4_t __b, int32x4_t __c, mve_pred16 return __builtin_mve_vrmlsldavhaxq_p_sv4si (__a, __b, __c, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m_n_s8 (int16x8_t __inactive, int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_sv16qi (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m_n_s16 (int32x4_t __inactive, int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_sv8hi (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m_n_u8 (uint16x8_t __inactive, uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_uv16qi (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m_n_u16 (uint32x4_t __inactive, uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_uv8hi (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m_n_s8 (int16x8_t __inactive, int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_sv16qi (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m_n_s16 (int32x4_t __inactive, int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_sv8hi (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m_n_u8 (uint16x8_t __inactive, uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_uv16qi 
(__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m_n_u16 (uint32x4_t __inactive, uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_uv8hi (__inactive, __a, __imm, __p); -} - __extension__ extern __inline void __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vstrbq_scatter_offset_s8 (int8_t * __base, uint8x16_t __offset, int8x16_t __value) @@ -9360,62 +9218,6 @@ __arm_vrev64q_x_u32 (uint32x4_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_uv4si (__arm_vuninitializedq_u32 (), __a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x_n_s8 (int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_sv16qi (__arm_vuninitializedq_s16 (), __a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x_n_s16 (int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_sv8hi (__arm_vuninitializedq_s32 (), __a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x_n_u8 (uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_uv16qi (__arm_vuninitializedq_u16 (), __a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x_n_u16 (uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshllbq_m_n_uv8hi (__arm_vuninitializedq_u32 (), __a, __imm, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x_n_s8 (int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_sv16qi (__arm_vuninitializedq_s16 (), __a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x_n_s16 (int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_sv8hi (__arm_vuninitializedq_s32 (), __a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x_n_u8 (uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_uv16qi (__arm_vuninitializedq_u16 (), __a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x_n_u16 (uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __builtin_mve_vshlltq_m_n_uv8hi (__arm_vuninitializedq_u32 (), __a, __imm, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vadciq_s32 (int32x4_t __a, int32x4_t __b, unsigned * __carry_out) @@ -14055,20 +13857,6 @@ __arm_vmlaldavq (uint16x8_t __a, uint16x8_t __b) return __arm_vmlaldavq_u16 (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq (uint8x16_t __a, const int __imm) -{ - return __arm_vshlltq_n_u8 (__a, __imm); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__)) -__arm_vshllbq (uint8x16_t __a, const int __imm) -{ - return __arm_vshllbq_n_u8 (__a, __imm); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq (uint16x8_t __a, const int __imm) @@ -14132,20 +13920,6 @@ __arm_vmlaldavq (int16x8_t __a, int16x8_t __b) return __arm_vmlaldavq_s16 (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq (int8x16_t __a, const int __imm) -{ - return __arm_vshlltq_n_s8 (__a, __imm); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq (int8x16_t __a, const int __imm) -{ - return __arm_vshllbq_n_s8 (__a, __imm); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq (int16x8_t __a, const int __imm) @@ -14174,20 +13948,6 @@ __arm_vmlaldavq (uint32x4_t __a, uint32x4_t __b) return __arm_vmlaldavq_u32 (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq (uint16x8_t __a, const int __imm) -{ - return __arm_vshlltq_n_u16 (__a, __imm); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq (uint16x8_t __a, const int __imm) -{ - return __arm_vshllbq_n_u16 (__a, __imm); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq (uint32x4_t __a, const int __imm) @@ -14251,20 +14011,6 @@ __arm_vmlaldavq (int32x4_t __a, int32x4_t __b) return __arm_vmlaldavq_s32 (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq (int16x8_t __a, const int __imm) -{ - return __arm_vshlltq_n_s16 (__a, __imm); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq (int16x8_t __a, const int __imm) -{ - return __arm_vshllbq_n_s16 (__a, __imm); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq (int32x4_t __a, const int __imm) @@ -17338,62 +17084,6 @@ __arm_vrmlsldavhaxq_p (int64_t __a, int32x4_t __b, int32x4_t __c, mve_pred16_t _ return __arm_vrmlsldavhaxq_p_s32 (__a, __b, __c, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m (int16x8_t __inactive, int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_m_n_s8 (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m (int32x4_t __inactive, int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_m_n_s16 (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m (uint16x8_t __inactive, uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_m_n_u8 (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_m (uint32x4_t __inactive, uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_m_n_u16 (__inactive, 
__a, __imm, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m (int16x8_t __inactive, int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_m_n_s8 (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m (int32x4_t __inactive, int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_m_n_s16 (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m (uint16x8_t __inactive, uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_m_n_u8 (__inactive, __a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_m (uint32x4_t __inactive, uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_m_n_u16 (__inactive, __a, __imm, __p); -} - __extension__ extern __inline void __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vstrbq_scatter_offset (int8_t * __base, uint8x16_t __offset, int8x16_t __value) @@ -19424,62 +19114,6 @@ __arm_vrev64q_x (uint32x4_t __a, mve_pred16_t __p) return __arm_vrev64q_x_u32 (__a, __p); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x (int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_x_n_s8 (__a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x (int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_x_n_s16 (__a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x (uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_x_n_u8 (__a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshllbq_x (uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshllbq_x_n_u16 (__a, __imm, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x (int8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_x_n_s8 (__a, __imm, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x (int16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_x_n_s16 (__a, __imm, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x (uint8x16_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_x_n_u8 (__a, __imm, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vshlltq_x (uint16x8_t __a, const int __imm, mve_pred16_t __p) -{ - return __arm_vshlltq_x_n_u16 (__a, __imm, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vadciq (int32x4_t __a, int32x4_t __b, unsigned * __carry_out) @@ -22531,20 +22165,6 @@ extern void *__ARM_undef; int 
(*)[__ARM_mve_type_fp_n][__ARM_mve_type_float16x8_t]: __arm_vminnmvq_f16 (__ARM_mve_coerce2(p0, double), __ARM_mve_coerce(__p1, float16x8_t)), \ int (*)[__ARM_mve_type_fp_n][__ARM_mve_type_float32x4_t]: __arm_vminnmvq_f32 (__ARM_mve_coerce2(p0, double), __ARM_mve_coerce(__p1, float32x4_t)));}) -#define __arm_vshlltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vshlltq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vshlltq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), p1), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshlltq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), p1), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshlltq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1));}) - -#define __arm_vshllbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vshllbq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vshllbq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), p1), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshllbq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), p1), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshllbq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1));}) - #define __arm_vqshluq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vqshluq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ @@ -23973,20 +23593,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmlaldavxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmlaldavxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vshlltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vshlltq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vshlltq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), p1), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshlltq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), p1), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshlltq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1));}) - -#define __arm_vshllbq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vshllbq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vshllbq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), p1), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshllbq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), p1), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshllbq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1));}) - #define __arm_vqdmulltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -24853,20 +24459,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int8x16_t]: __arm_vrev16q_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ int (*)[__ARM_mve_type_uint8x16_t]: __arm_vrev16q_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), p2));}) -#define __arm_vshllbq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vshllbq_x_n_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2, 
p3), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vshllbq_x_n_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshllbq_x_n_u8 (__ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshllbq_x_n_u16 (__ARM_mve_coerce(__p1, uint16x8_t), p2, p3));}) - -#define __arm_vshlltq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t]: __arm_vshlltq_x_n_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_int16x8_t]: __arm_vshlltq_x_n_s16 (__ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ - int (*)[__ARM_mve_type_uint8x16_t]: __arm_vshlltq_x_n_u8 (__ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshlltq_x_n_u16 (__ARM_mve_coerce(__p1, uint16x8_t), p2, p3));}) - #define __arm_vdwdupq_x_u8(p1,p2,p3,p4) ({ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ int (*)[__ARM_mve_type_int_n]: __arm_vdwdupq_x_n_u8 ((uint32_t) __p1, p2, p3, p4), \ @@ -25084,22 +24676,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqrdmlsdhq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqrdmlsdhq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));}) -#define __arm_vshllbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int8x16_t]: __arm_vshllbq_m_n_s8 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int16x8_t]: __arm_vshllbq_m_n_s16 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: __arm_vshllbq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: __arm_vshllbq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));}) - -#define __arm_vshlltq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int8x16_t]: __arm_vshlltq_m_n_s8 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int16x8_t]: __arm_vshlltq_m_n_s16 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: __arm_vshlltq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: __arm_vshlltq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));}) - #define __arm_vmlaldavaq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \