From patchwork Wed May 10 13:30:19 2023
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 69069
To: gcc-patches@gcc.gnu.org
Cc: Christophe Lyon <christophe.lyon@arm.com>
Subject: [PATCH 03/20] arm: [MVE intrinsics] rework vcmp
Date: Wed, 10 May 2023 15:30:19 +0200
Message-ID: <20230510133036.596530-3-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230510133036.596530-1-christophe.lyon@arm.com>
References: <20230510133036.596530-1-christophe.lyon@arm.com>
MIME-Version: 1.0
From: Christophe Lyon <christophe.lyon@arm.com>

Implement vcmp using the new MVE builtins framework.

2022-10-25  Christophe Lyon  <christophe.lyon@arm.com>

gcc/
        * config/arm/arm-mve-builtins-base.cc (vcmpeqq, vcmpneq, vcmpgeq)
        (vcmpgtq, vcmpleq, vcmpltq, vcmpcsq, vcmphiq): New.
        * config/arm/arm-mve-builtins-base.def (vcmpeqq, vcmpneq, vcmpgeq)
        (vcmpgtq, vcmpleq, vcmpltq, vcmpcsq, vcmphiq): New.
        * config/arm/arm-mve-builtins-base.h (vcmpeqq, vcmpneq, vcmpgeq)
        (vcmpgtq, vcmpleq, vcmpltq, vcmpcsq, vcmphiq): New.
        * config/arm/arm-mve-builtins-functions.h (class
        unspec_based_mve_function_exact_insn_vcmp): New.
        * config/arm/arm-mve-builtins.cc
        (function_instance::has_inactive_argument): Handle vcmp.
        * config/arm/arm_mve.h (vcmpneq, vcmphiq, vcmpeqq, vcmpcsq,
        vcmpltq, vcmpleq, vcmpgtq, vcmpgeq, vcmpneq_m, vcmphiq_m,
        vcmpeqq_m, vcmpcsq_m, vcmpcsq_m_n, vcmpltq_m, vcmpleq_m,
        vcmpgtq_m, vcmpgeq_m, vcmpneq_s8, vcmpneq_s16, vcmpneq_s32,
        vcmpneq_u8, vcmpneq_u16, vcmpneq_u32, vcmpneq_n_u8, vcmphiq_u8,
        vcmphiq_n_u8, vcmpeqq_u8, vcmpeqq_n_u8, vcmpcsq_u8, vcmpcsq_n_u8,
        vcmpneq_n_s8, vcmpltq_s8, vcmpltq_n_s8, vcmpleq_s8, vcmpleq_n_s8,
        vcmpgtq_s8, vcmpgtq_n_s8, vcmpgeq_s8, vcmpgeq_n_s8, vcmpeqq_s8,
        vcmpeqq_n_s8, vcmpneq_n_u16, vcmphiq_u16, vcmphiq_n_u16,
        vcmpeqq_u16, vcmpeqq_n_u16, vcmpcsq_u16, vcmpcsq_n_u16,
        vcmpneq_n_s16, vcmpltq_s16, vcmpltq_n_s16, vcmpleq_s16,
        vcmpleq_n_s16, vcmpgtq_s16, vcmpgtq_n_s16, vcmpgeq_s16,
        vcmpgeq_n_s16, vcmpeqq_s16, vcmpeqq_n_s16, vcmpneq_n_u32,
        vcmphiq_u32, vcmphiq_n_u32, vcmpeqq_u32, vcmpeqq_n_u32,
        vcmpcsq_u32, vcmpcsq_n_u32, vcmpneq_n_s32, vcmpltq_s32,
        vcmpltq_n_s32, vcmpleq_s32, vcmpleq_n_s32, vcmpgtq_s32,
        vcmpgtq_n_s32, vcmpgeq_s32, vcmpgeq_n_s32, vcmpeqq_s32,
        vcmpeqq_n_s32, vcmpneq_n_f16, vcmpneq_f16, vcmpltq_n_f16,
        vcmpltq_f16, vcmpleq_n_f16, vcmpleq_f16, vcmpgtq_n_f16,
        vcmpgtq_f16, vcmpgeq_n_f16, vcmpgeq_f16, vcmpeqq_n_f16,
        vcmpeqq_f16, vcmpneq_n_f32, vcmpneq_f32, vcmpltq_n_f32,
        vcmpltq_f32, vcmpleq_n_f32, vcmpleq_f32, vcmpgtq_n_f32,
        vcmpgtq_f32, vcmpgeq_n_f32, vcmpgeq_f32, vcmpeqq_n_f32,
        vcmpeqq_f32, vcmpeqq_m_f16, vcmpeqq_m_f32, vcmpneq_m_u8,
        vcmpneq_m_n_u8, vcmphiq_m_u8, vcmphiq_m_n_u8, vcmpeqq_m_u8,
        vcmpeqq_m_n_u8, vcmpcsq_m_u8, vcmpcsq_m_n_u8, vcmpneq_m_s8,
        vcmpneq_m_n_s8, vcmpltq_m_s8, vcmpltq_m_n_s8, vcmpleq_m_s8,
        vcmpleq_m_n_s8, vcmpgtq_m_s8, vcmpgtq_m_n_s8, vcmpgeq_m_s8,
        vcmpgeq_m_n_s8, vcmpeqq_m_s8, vcmpeqq_m_n_s8, vcmpneq_m_u16,
        vcmpneq_m_n_u16, vcmphiq_m_u16, vcmphiq_m_n_u16, vcmpeqq_m_u16,
        vcmpeqq_m_n_u16, vcmpcsq_m_u16, vcmpcsq_m_n_u16, vcmpneq_m_s16,
        vcmpneq_m_n_s16, vcmpltq_m_s16, vcmpltq_m_n_s16, vcmpleq_m_s16,
        vcmpleq_m_n_s16, vcmpgtq_m_s16, vcmpgtq_m_n_s16, vcmpgeq_m_s16,
        vcmpgeq_m_n_s16, vcmpeqq_m_s16, vcmpeqq_m_n_s16, vcmpneq_m_u32,
        vcmpneq_m_n_u32, vcmphiq_m_u32, vcmphiq_m_n_u32, vcmpeqq_m_u32,
        vcmpeqq_m_n_u32, vcmpcsq_m_u32, vcmpcsq_m_n_u32, vcmpneq_m_s32,
        vcmpneq_m_n_s32, vcmpltq_m_s32, vcmpltq_m_n_s32, vcmpleq_m_s32,
        vcmpleq_m_n_s32, vcmpgtq_m_s32, vcmpgtq_m_n_s32, vcmpgeq_m_s32,
        vcmpgeq_m_n_s32, vcmpeqq_m_s32, vcmpeqq_m_n_s32, vcmpeqq_m_n_f16,
        vcmpgeq_m_f16, vcmpgeq_m_n_f16, vcmpgtq_m_f16, vcmpgtq_m_n_f16,
        vcmpleq_m_f16, vcmpleq_m_n_f16, vcmpltq_m_f16, vcmpltq_m_n_f16,
        vcmpneq_m_f16, vcmpneq_m_n_f16, vcmpeqq_m_n_f32, vcmpgeq_m_f32,
        vcmpgeq_m_n_f32, vcmpgtq_m_f32, vcmpgtq_m_n_f32, vcmpleq_m_f32,
        vcmpleq_m_n_f32, vcmpltq_m_f32, vcmpltq_m_n_f32, vcmpneq_m_f32,
        vcmpneq_m_n_f32): Remove.
        (__arm_vcmpneq_s8, __arm_vcmpneq_s16, __arm_vcmpneq_s32,
        __arm_vcmpneq_u8, __arm_vcmpneq_u16, __arm_vcmpneq_u32,
        __arm_vcmpneq_n_u8, __arm_vcmphiq_u8, __arm_vcmphiq_n_u8,
        __arm_vcmpeqq_u8, __arm_vcmpeqq_n_u8, __arm_vcmpcsq_u8,
        __arm_vcmpcsq_n_u8, __arm_vcmpneq_n_s8, __arm_vcmpltq_s8,
        __arm_vcmpltq_n_s8, __arm_vcmpleq_s8, __arm_vcmpleq_n_s8,
        __arm_vcmpgtq_s8, __arm_vcmpgtq_n_s8, __arm_vcmpgeq_s8,
        __arm_vcmpgeq_n_s8, __arm_vcmpeqq_s8, __arm_vcmpeqq_n_s8,
        __arm_vcmpneq_n_u16, __arm_vcmphiq_u16, __arm_vcmphiq_n_u16,
        __arm_vcmpeqq_u16, __arm_vcmpeqq_n_u16, __arm_vcmpcsq_u16,
        __arm_vcmpcsq_n_u16, __arm_vcmpneq_n_s16, __arm_vcmpltq_s16,
        __arm_vcmpltq_n_s16, __arm_vcmpleq_s16, __arm_vcmpleq_n_s16,
        __arm_vcmpgtq_s16, __arm_vcmpgtq_n_s16, __arm_vcmpgeq_s16,
        __arm_vcmpgeq_n_s16, __arm_vcmpeqq_s16, __arm_vcmpeqq_n_s16,
        __arm_vcmpneq_n_u32, __arm_vcmphiq_u32, __arm_vcmphiq_n_u32,
        __arm_vcmpeqq_u32, __arm_vcmpeqq_n_u32, __arm_vcmpcsq_u32,
        __arm_vcmpcsq_n_u32, __arm_vcmpneq_n_s32, __arm_vcmpltq_s32,
        __arm_vcmpltq_n_s32, __arm_vcmpleq_s32, __arm_vcmpleq_n_s32,
        __arm_vcmpgtq_s32, __arm_vcmpgtq_n_s32, __arm_vcmpgeq_s32,
        __arm_vcmpgeq_n_s32, __arm_vcmpeqq_s32, __arm_vcmpeqq_n_s32,
        __arm_vcmpneq_m_u8, __arm_vcmpneq_m_n_u8, __arm_vcmphiq_m_u8,
        __arm_vcmphiq_m_n_u8, __arm_vcmpeqq_m_u8, __arm_vcmpeqq_m_n_u8,
        __arm_vcmpcsq_m_u8, __arm_vcmpcsq_m_n_u8, __arm_vcmpneq_m_s8,
        __arm_vcmpneq_m_n_s8, __arm_vcmpltq_m_s8, __arm_vcmpltq_m_n_s8,
        __arm_vcmpleq_m_s8, __arm_vcmpleq_m_n_s8, __arm_vcmpgtq_m_s8,
        __arm_vcmpgtq_m_n_s8, __arm_vcmpgeq_m_s8, __arm_vcmpgeq_m_n_s8,
        __arm_vcmpeqq_m_s8, __arm_vcmpeqq_m_n_s8, __arm_vcmpneq_m_u16,
        __arm_vcmpneq_m_n_u16, __arm_vcmphiq_m_u16, __arm_vcmphiq_m_n_u16,
        __arm_vcmpeqq_m_u16, __arm_vcmpeqq_m_n_u16, __arm_vcmpcsq_m_u16,
        __arm_vcmpcsq_m_n_u16, __arm_vcmpneq_m_s16, __arm_vcmpneq_m_n_s16,
        __arm_vcmpltq_m_s16, __arm_vcmpltq_m_n_s16, __arm_vcmpleq_m_s16,
        __arm_vcmpleq_m_n_s16, __arm_vcmpgtq_m_s16, __arm_vcmpgtq_m_n_s16,
        __arm_vcmpgeq_m_s16, __arm_vcmpgeq_m_n_s16, __arm_vcmpeqq_m_s16,
        __arm_vcmpeqq_m_n_s16, __arm_vcmpneq_m_u32, __arm_vcmpneq_m_n_u32,
        __arm_vcmphiq_m_u32, __arm_vcmphiq_m_n_u32, __arm_vcmpeqq_m_u32,
        __arm_vcmpeqq_m_n_u32, __arm_vcmpcsq_m_u32, __arm_vcmpcsq_m_n_u32,
        __arm_vcmpneq_m_s32, __arm_vcmpneq_m_n_s32, __arm_vcmpltq_m_s32,
        __arm_vcmpltq_m_n_s32, __arm_vcmpleq_m_s32, __arm_vcmpleq_m_n_s32,
        __arm_vcmpgtq_m_s32, __arm_vcmpgtq_m_n_s32, __arm_vcmpgeq_m_s32,
        __arm_vcmpgeq_m_n_s32, __arm_vcmpeqq_m_s32, __arm_vcmpeqq_m_n_s32,
        __arm_vcmpneq_n_f16, __arm_vcmpneq_f16, __arm_vcmpltq_n_f16,
        __arm_vcmpltq_f16, __arm_vcmpleq_n_f16, __arm_vcmpleq_f16,
        __arm_vcmpgtq_n_f16, __arm_vcmpgtq_f16, __arm_vcmpgeq_n_f16,
        __arm_vcmpgeq_f16, __arm_vcmpeqq_n_f16, __arm_vcmpeqq_f16,
        __arm_vcmpneq_n_f32, __arm_vcmpneq_f32, __arm_vcmpltq_n_f32,
        __arm_vcmpltq_f32, __arm_vcmpleq_n_f32, __arm_vcmpleq_f32,
        __arm_vcmpgtq_n_f32, __arm_vcmpgtq_f32, __arm_vcmpgeq_n_f32,
        __arm_vcmpgeq_f32, __arm_vcmpeqq_n_f32, __arm_vcmpeqq_f32,
        __arm_vcmpeqq_m_f16, __arm_vcmpeqq_m_f32, __arm_vcmpeqq_m_n_f16,
        __arm_vcmpgeq_m_f16, __arm_vcmpgeq_m_n_f16, __arm_vcmpgtq_m_f16,
        __arm_vcmpgtq_m_n_f16, __arm_vcmpleq_m_f16, __arm_vcmpleq_m_n_f16,
        __arm_vcmpltq_m_f16, __arm_vcmpltq_m_n_f16, __arm_vcmpneq_m_f16,
        __arm_vcmpneq_m_n_f16, __arm_vcmpeqq_m_n_f32, __arm_vcmpgeq_m_f32,
        __arm_vcmpgeq_m_n_f32, __arm_vcmpgtq_m_f32, __arm_vcmpgtq_m_n_f32,
        __arm_vcmpleq_m_f32, __arm_vcmpleq_m_n_f32, __arm_vcmpltq_m_f32,
        __arm_vcmpltq_m_n_f32, __arm_vcmpneq_m_f32, __arm_vcmpneq_m_n_f32,
        __arm_vcmpneq, __arm_vcmphiq, __arm_vcmpeqq, __arm_vcmpcsq,
        __arm_vcmpltq, __arm_vcmpleq, __arm_vcmpgtq, __arm_vcmpgeq,
        __arm_vcmpneq_m, __arm_vcmphiq_m, __arm_vcmpeqq_m,
        __arm_vcmpcsq_m, __arm_vcmpltq_m, __arm_vcmpleq_m,
        __arm_vcmpgtq_m, __arm_vcmpgeq_m): Remove.
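
For reviewers, a minimal usage sketch (not part of the patch) of the
user-facing behavior the rework has to preserve: each comparison returns
an mve_pred16_t lane mask, and the _m variants take an incoming
predicate.  The function names below are illustrative; it assumes MVE is
enabled (e.g. -march=armv8.1-m.main+mve -mfloat-abi=hard) and
<arm_mve.h> is included.

/* Illustrative only, not part of the patch.  */
#include <arm_mve.h>

mve_pred16_t
eq_mask (int32x4_t a, int32x4_t b)
{
  /* Vector-vector form; the overload picks the s32 instance.  */
  return vcmpeqq (a, b);
}

mve_pred16_t
gt_scalar_masked (int32x4_t a, int32_t b, mve_pred16_t p)
{
  /* Scalar second operand selects the _n form; the compare is only
     performed in lanes selected by p.  */
  return vcmpgtq_m (a, b, p);
}

uint32x4_t
select_ge (uint32x4_t a, uint32x4_t b)
{
  /* Unsigned "carry set" compare (a >= b), then per-lane select.  */
  mve_pred16_t p = vcmpcsq (a, b);
  return vpselq (a, b, p);
}
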
--- gcc/config/arm/arm-mve-builtins-base.cc | 9 + gcc/config/arm/arm-mve-builtins-base.def | 14 + gcc/config/arm/arm-mve-builtins-base.h | 8 + gcc/config/arm/arm-mve-builtins-functions.h | 110 + gcc/config/arm/arm-mve-builtins.cc | 8 + gcc/config/arm/arm_mve.h | 3298 ++----------------- 6 files changed, 333 insertions(+), 3114 deletions(-) diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc index edca0d9ac6c..14870f5b1aa 100644 --- a/gcc/config/arm/arm-mve-builtins-base.cc +++ b/gcc/config/arm/arm-mve-builtins-base.cc @@ -26,6 +26,7 @@ #include "memmodel.h" #include "insn-codes.h" #include "optabs.h" +#include "expr.h" #include "basic-block.h" #include "function.h" #include "gimple.h" @@ -237,6 +238,14 @@ FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ) FUNCTION_WITH_RTX_M (vandq, AND, VANDQ) FUNCTION_WITHOUT_N_NO_U_F (vclsq, VCLSQ) FUNCTION (vclzq, unspec_based_mve_function_exact_insn, (CLZ, CLZ, CLZ, -1, -1, -1, VCLZQ_M_S, VCLZQ_M_U, -1, -1, -1 ,-1)) +FUNCTION (vcmpeqq, unspec_based_mve_function_exact_insn_vcmp, (EQ, EQ, EQ, VCMPEQQ_M_S, VCMPEQQ_M_U, VCMPEQQ_M_F, VCMPEQQ_M_N_S, VCMPEQQ_M_N_U, VCMPEQQ_M_N_F)) +FUNCTION (vcmpneq, unspec_based_mve_function_exact_insn_vcmp, (NE, NE, NE, VCMPNEQ_M_S, VCMPNEQ_M_U, VCMPNEQ_M_F, VCMPNEQ_M_N_S, VCMPNEQ_M_N_U, VCMPNEQ_M_N_F)) +FUNCTION (vcmpgeq, unspec_based_mve_function_exact_insn_vcmp, (GE, UNKNOWN, GE, VCMPGEQ_M_S, UNKNOWN, VCMPGEQ_M_F, VCMPGEQ_M_N_S, UNKNOWN, VCMPGEQ_M_N_F)) +FUNCTION (vcmpgtq, unspec_based_mve_function_exact_insn_vcmp, (GT, UNKNOWN, GT, VCMPGTQ_M_S, UNKNOWN, VCMPGTQ_M_F, VCMPGTQ_M_N_S, UNKNOWN, VCMPGTQ_M_N_F)) +FUNCTION (vcmpleq, unspec_based_mve_function_exact_insn_vcmp, (LE, UNKNOWN, LE, VCMPLEQ_M_S, UNKNOWN, VCMPLEQ_M_F, VCMPLEQ_M_N_S, UNKNOWN, VCMPLEQ_M_N_F)) +FUNCTION (vcmpltq, unspec_based_mve_function_exact_insn_vcmp, (LT, UNKNOWN, LT, VCMPLTQ_M_S, UNKNOWN, VCMPLTQ_M_F, VCMPLTQ_M_N_S, UNKNOWN, VCMPLTQ_M_N_F)) +FUNCTION (vcmpcsq, unspec_based_mve_function_exact_insn_vcmp, (UNKNOWN, GEU, UNKNOWN, UNKNOWN, VCMPCSQ_M_U, UNKNOWN, UNKNOWN, VCMPCSQ_M_N_U, UNKNOWN)) +FUNCTION (vcmphiq, unspec_based_mve_function_exact_insn_vcmp, (UNKNOWN, GTU, UNKNOWN, UNKNOWN, VCMPHIQ_M_U, UNKNOWN, UNKNOWN, VCMPHIQ_M_N_U, UNKNOWN)) FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ) FUNCTION_WITH_RTX_M (veorq, XOR, VEORQ) FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ) diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def index 48a07c8d888..f05cecd9160 100644 --- a/gcc/config/arm/arm-mve-builtins-base.def +++ b/gcc/config/arm/arm-mve-builtins-base.def @@ -24,6 +24,14 @@ DEF_MVE_FUNCTION (vaddq, binary_opt_n, all_integer, mx_or_none) DEF_MVE_FUNCTION (vandq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vclsq, unary, all_signed, mx_or_none) DEF_MVE_FUNCTION (vclzq, unary, all_integer, mx_or_none) +DEF_MVE_FUNCTION (vcmpcsq, cmp, all_unsigned, m_or_none) +DEF_MVE_FUNCTION (vcmpeqq, cmp, all_integer, m_or_none) +DEF_MVE_FUNCTION (vcmpgeq, cmp, all_signed, m_or_none) +DEF_MVE_FUNCTION (vcmpgtq, cmp, all_signed, m_or_none) +DEF_MVE_FUNCTION (vcmphiq, cmp, all_unsigned, m_or_none) +DEF_MVE_FUNCTION (vcmpleq, cmp, all_signed, m_or_none) +DEF_MVE_FUNCTION (vcmpltq, cmp, all_signed, m_or_none) +DEF_MVE_FUNCTION (vcmpneq, cmp, all_integer, m_or_none) DEF_MVE_FUNCTION (vcreateq, create, all_integer_with_64, none) DEF_MVE_FUNCTION (veorq, binary, all_integer, mx_or_none) DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none) @@ -86,6 +94,12 @@ DEF_MVE_FUNCTION (vabdq, binary, 
all_float, mx_or_none) DEF_MVE_FUNCTION (vabsq, unary, all_float, mx_or_none) DEF_MVE_FUNCTION (vaddq, binary_opt_n, all_float, mx_or_none) DEF_MVE_FUNCTION (vandq, binary, all_float, mx_or_none) +DEF_MVE_FUNCTION (vcmpeqq, cmp, all_float, m_or_none) +DEF_MVE_FUNCTION (vcmpgeq, cmp, all_float, m_or_none) +DEF_MVE_FUNCTION (vcmpgtq, cmp, all_float, m_or_none) +DEF_MVE_FUNCTION (vcmpleq, cmp, all_float, m_or_none) +DEF_MVE_FUNCTION (vcmpltq, cmp, all_float, m_or_none) +DEF_MVE_FUNCTION (vcmpneq, cmp, all_float, m_or_none) DEF_MVE_FUNCTION (vcreateq, create, all_float, none) DEF_MVE_FUNCTION (veorq, binary, all_float, mx_or_none) DEF_MVE_FUNCTION (vmaxnmaq, binary, all_float, m_or_none) diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h index 31417435f6f..179e1295fb2 100644 --- a/gcc/config/arm/arm-mve-builtins-base.h +++ b/gcc/config/arm/arm-mve-builtins-base.h @@ -29,6 +29,14 @@ extern const function_base *const vaddq; extern const function_base *const vandq; extern const function_base *const vclsq; extern const function_base *const vclzq; +extern const function_base *const vcmpcsq; +extern const function_base *const vcmpeqq; +extern const function_base *const vcmpgeq; +extern const function_base *const vcmpgtq; +extern const function_base *const vcmphiq; +extern const function_base *const vcmpleq; +extern const function_base *const vcmpltq; +extern const function_base *const vcmpneq; extern const function_base *const vcreateq; extern const function_base *const veorq; extern const function_base *const vhaddq; diff --git a/gcc/config/arm/arm-mve-builtins-functions.h b/gcc/config/arm/arm-mve-builtins-functions.h index ddedbb2a8e1..d069990dcab 100644 --- a/gcc/config/arm/arm-mve-builtins-functions.h +++ b/gcc/config/arm/arm-mve-builtins-functions.h @@ -590,6 +590,116 @@ public: } }; +/* Map the comparison functions. */ +class unspec_based_mve_function_exact_insn_vcmp : public unspec_based_mve_function_base +{ +public: + CONSTEXPR unspec_based_mve_function_exact_insn_vcmp (rtx_code code_for_sint, + rtx_code code_for_uint, + rtx_code code_for_fp, + int unspec_for_m_sint, + int unspec_for_m_uint, + int unspec_for_m_fp, + int unspec_for_m_n_sint, + int unspec_for_m_n_uint, + int unspec_for_m_n_fp) + : unspec_based_mve_function_base (code_for_sint, + code_for_uint, + code_for_fp, + -1, + -1, + -1, + unspec_for_m_sint, + unspec_for_m_uint, + unspec_for_m_fp, + unspec_for_m_n_sint, + unspec_for_m_n_uint, + unspec_for_m_n_fp) + {} + + rtx + expand (function_expander &e) const override + { + machine_mode mode = e.vector_mode (0); + insn_code code; + rtx target; + + /* No suffix, no predicate, use the right RTX code. */ + if (e.pred == PRED_none) + { + switch (e.mode_suffix_id) + { + case MODE_none: + if (e.type_suffix (0).integer_p) + if (e.type_suffix (0).unsigned_p) + code = code_for_mve_vcmpq (m_code_for_uint, mode); + else + code = code_for_mve_vcmpq (m_code_for_sint, mode); + else + code = code_for_mve_vcmpq_f (m_code_for_fp, mode); + break; + + case MODE_n: + if (e.type_suffix (0).integer_p) + if (e.type_suffix (0).unsigned_p) + code = code_for_mve_vcmpq_n (m_code_for_uint, mode); + else + code = code_for_mve_vcmpq_n (m_code_for_sint, mode); + else + code = code_for_mve_vcmpq_n_f (m_code_for_fp, mode); + break; + + default: + gcc_unreachable (); + } + target = e.use_exact_insn (code); + } + else + { + switch (e.pred) + { + case PRED_m: + switch (e.mode_suffix_id) + { + case MODE_none: + /* No suffix, "m" predicate. 
*/ + if (e.type_suffix (0).integer_p) + if (e.type_suffix (0).unsigned_p) + code = code_for_mve_vcmpq_m (m_unspec_for_m_uint, m_unspec_for_m_uint, mode); + else + code = code_for_mve_vcmpq_m (m_unspec_for_m_sint, m_unspec_for_m_sint, mode); + else + code = code_for_mve_vcmpq_m_f (m_unspec_for_m_fp, mode); + break; + + case MODE_n: + /* _n suffix, "m" predicate. */ + if (e.type_suffix (0).integer_p) + if (e.type_suffix (0).unsigned_p) + code = code_for_mve_vcmpq_m_n (m_unspec_for_m_n_uint, m_unspec_for_m_n_uint, mode); + else + code = code_for_mve_vcmpq_m_n (m_unspec_for_m_n_sint, m_unspec_for_m_n_sint, mode); + else + code = code_for_mve_vcmpq_m_n_f (m_unspec_for_m_n_fp, mode); + break; + + default: + gcc_unreachable (); + } + target = e.use_cond_insn (code, 0); + break; + + default: + gcc_unreachable (); + } + } + + rtx HItarget = gen_reg_rtx (HImode); + emit_move_insn (HItarget, gen_lowpart (HImode, target)); + return HItarget; + } +}; + } /* end namespace arm_mve */ /* Declare the global function base NAME, creating it from an instance diff --git a/gcc/config/arm/arm-mve-builtins.cc b/gcc/config/arm/arm-mve-builtins.cc index 9dc762c9fc0..59cfaf6e5b1 100644 --- a/gcc/config/arm/arm-mve-builtins.cc +++ b/gcc/config/arm/arm-mve-builtins.cc @@ -670,6 +670,14 @@ function_instance::has_inactive_argument () const return false; if (mode_suffix_id == MODE_r + || base == functions::vcmpeqq + || base == functions::vcmpneq + || base == functions::vcmpgeq + || base == functions::vcmpgtq + || base == functions::vcmpleq + || base == functions::vcmpltq + || base == functions::vcmpcsq + || base == functions::vcmphiq || base == functions::vmaxaq || base == functions::vmaxnmaq || base == functions::vminaq diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h index 373797689cc..3eb8195060b 100644 --- a/gcc/config/arm/arm_mve.h +++ b/gcc/config/arm/arm_mve.h @@ -52,24 +52,16 @@ #define vrev32q(__a) __arm_vrev32q(__a) #define vrev64q(__a) __arm_vrev64q(__a) #define vaddlvq_p(__a, __p) __arm_vaddlvq_p(__a, __p) -#define vcmpneq(__a, __b) __arm_vcmpneq(__a, __b) #define vornq(__a, __b) __arm_vornq(__a, __b) #define vmulltq_int(__a, __b) __arm_vmulltq_int(__a, __b) #define vmullbq_int(__a, __b) __arm_vmullbq_int(__a, __b) #define vmladavq(__a, __b) __arm_vmladavq(__a, __b) -#define vcmphiq(__a, __b) __arm_vcmphiq(__a, __b) -#define vcmpeqq(__a, __b) __arm_vcmpeqq(__a, __b) -#define vcmpcsq(__a, __b) __arm_vcmpcsq(__a, __b) #define vcaddq_rot90(__a, __b) __arm_vcaddq_rot90(__a, __b) #define vcaddq_rot270(__a, __b) __arm_vcaddq_rot270(__a, __b) #define vbicq(__a, __b) __arm_vbicq(__a, __b) #define vaddvq_p(__a, __p) __arm_vaddvq_p(__a, __p) #define vaddvaq(__a, __b) __arm_vaddvaq(__a, __b) #define vbrsrq(__a, __b) __arm_vbrsrq(__a, __b) -#define vcmpltq(__a, __b) __arm_vcmpltq(__a, __b) -#define vcmpleq(__a, __b) __arm_vcmpleq(__a, __b) -#define vcmpgtq(__a, __b) __arm_vcmpgtq(__a, __b) -#define vcmpgeq(__a, __b) __arm_vcmpgeq(__a, __b) #define vqshluq(__a, __imm) __arm_vqshluq(__a, __imm) #define vmlsdavxq(__a, __b) __arm_vmlsdavxq(__a, __b) #define vmlsdavq(__a, __b) __arm_vmlsdavq(__a, __b) @@ -105,18 +97,9 @@ #define vmladavq_p(__a, __b, __p) __arm_vmladavq_p(__a, __b, __p) #define vmladavaq(__a, __b, __c) __arm_vmladavaq(__a, __b, __c) #define vdupq_m(__inactive, __a, __p) __arm_vdupq_m(__inactive, __a, __p) -#define vcmpneq_m(__a, __b, __p) __arm_vcmpneq_m(__a, __b, __p) -#define vcmphiq_m(__a, __b, __p) __arm_vcmphiq_m(__a, __b, __p) -#define vcmpeqq_m(__a, __b, __p) __arm_vcmpeqq_m(__a, __b, 
__p) -#define vcmpcsq_m(__a, __b, __p) __arm_vcmpcsq_m(__a, __b, __p) -#define vcmpcsq_m_n(__a, __b, __p) __arm_vcmpcsq_m_n(__a, __b, __p) #define vaddvaq_p(__a, __b, __p) __arm_vaddvaq_p(__a, __b, __p) #define vsriq(__a, __b, __imm) __arm_vsriq(__a, __b, __imm) #define vsliq(__a, __b, __imm) __arm_vsliq(__a, __b, __imm) -#define vcmpltq_m(__a, __b, __p) __arm_vcmpltq_m(__a, __b, __p) -#define vcmpleq_m(__a, __b, __p) __arm_vcmpleq_m(__a, __b, __p) -#define vcmpgtq_m(__a, __b, __p) __arm_vcmpgtq_m(__a, __b, __p) -#define vcmpgeq_m(__a, __b, __p) __arm_vcmpgeq_m(__a, __b, __p) #define vmlsdavxq_p(__a, __b, __p) __arm_vmlsdavxq_p(__a, __b, __p) #define vmlsdavq_p(__a, __b, __p) __arm_vmlsdavq_p(__a, __b, __p) #define vmladavxq_p(__a, __b, __p) __arm_vmladavxq_p(__a, __b, __p) @@ -442,40 +425,16 @@ #define vcvtq_n_u32_f32(__a, __imm6) __arm_vcvtq_n_u32_f32(__a, __imm6) #define vaddlvq_p_s32(__a, __p) __arm_vaddlvq_p_s32(__a, __p) #define vaddlvq_p_u32(__a, __p) __arm_vaddlvq_p_u32(__a, __p) -#define vcmpneq_s8(__a, __b) __arm_vcmpneq_s8(__a, __b) -#define vcmpneq_s16(__a, __b) __arm_vcmpneq_s16(__a, __b) -#define vcmpneq_s32(__a, __b) __arm_vcmpneq_s32(__a, __b) -#define vcmpneq_u8(__a, __b) __arm_vcmpneq_u8(__a, __b) -#define vcmpneq_u16(__a, __b) __arm_vcmpneq_u16(__a, __b) -#define vcmpneq_u32(__a, __b) __arm_vcmpneq_u32(__a, __b) #define vornq_u8(__a, __b) __arm_vornq_u8(__a, __b) #define vmulltq_int_u8(__a, __b) __arm_vmulltq_int_u8(__a, __b) #define vmullbq_int_u8(__a, __b) __arm_vmullbq_int_u8(__a, __b) #define vmladavq_u8(__a, __b) __arm_vmladavq_u8(__a, __b) -#define vcmpneq_n_u8(__a, __b) __arm_vcmpneq_n_u8(__a, __b) -#define vcmphiq_u8(__a, __b) __arm_vcmphiq_u8(__a, __b) -#define vcmphiq_n_u8(__a, __b) __arm_vcmphiq_n_u8(__a, __b) -#define vcmpeqq_u8(__a, __b) __arm_vcmpeqq_u8(__a, __b) -#define vcmpeqq_n_u8(__a, __b) __arm_vcmpeqq_n_u8(__a, __b) -#define vcmpcsq_u8(__a, __b) __arm_vcmpcsq_u8(__a, __b) -#define vcmpcsq_n_u8(__a, __b) __arm_vcmpcsq_n_u8(__a, __b) #define vcaddq_rot90_u8(__a, __b) __arm_vcaddq_rot90_u8(__a, __b) #define vcaddq_rot270_u8(__a, __b) __arm_vcaddq_rot270_u8(__a, __b) #define vbicq_u8(__a, __b) __arm_vbicq_u8(__a, __b) #define vaddvq_p_u8(__a, __p) __arm_vaddvq_p_u8(__a, __p) #define vaddvaq_u8(__a, __b) __arm_vaddvaq_u8(__a, __b) #define vbrsrq_n_u8(__a, __b) __arm_vbrsrq_n_u8(__a, __b) -#define vcmpneq_n_s8(__a, __b) __arm_vcmpneq_n_s8(__a, __b) -#define vcmpltq_s8(__a, __b) __arm_vcmpltq_s8(__a, __b) -#define vcmpltq_n_s8(__a, __b) __arm_vcmpltq_n_s8(__a, __b) -#define vcmpleq_s8(__a, __b) __arm_vcmpleq_s8(__a, __b) -#define vcmpleq_n_s8(__a, __b) __arm_vcmpleq_n_s8(__a, __b) -#define vcmpgtq_s8(__a, __b) __arm_vcmpgtq_s8(__a, __b) -#define vcmpgtq_n_s8(__a, __b) __arm_vcmpgtq_n_s8(__a, __b) -#define vcmpgeq_s8(__a, __b) __arm_vcmpgeq_s8(__a, __b) -#define vcmpgeq_n_s8(__a, __b) __arm_vcmpgeq_n_s8(__a, __b) -#define vcmpeqq_s8(__a, __b) __arm_vcmpeqq_s8(__a, __b) -#define vcmpeqq_n_s8(__a, __b) __arm_vcmpeqq_n_s8(__a, __b) #define vqshluq_n_s8(__a, __imm) __arm_vqshluq_n_s8(__a, __imm) #define vaddvq_p_s8(__a, __p) __arm_vaddvq_p_s8(__a, __p) #define vornq_s8(__a, __b) __arm_vornq_s8(__a, __b) @@ -496,30 +455,12 @@ #define vmulltq_int_u16(__a, __b) __arm_vmulltq_int_u16(__a, __b) #define vmullbq_int_u16(__a, __b) __arm_vmullbq_int_u16(__a, __b) #define vmladavq_u16(__a, __b) __arm_vmladavq_u16(__a, __b) -#define vcmpneq_n_u16(__a, __b) __arm_vcmpneq_n_u16(__a, __b) -#define vcmphiq_u16(__a, __b) __arm_vcmphiq_u16(__a, __b) -#define vcmphiq_n_u16(__a, 
__b) __arm_vcmphiq_n_u16(__a, __b) -#define vcmpeqq_u16(__a, __b) __arm_vcmpeqq_u16(__a, __b) -#define vcmpeqq_n_u16(__a, __b) __arm_vcmpeqq_n_u16(__a, __b) -#define vcmpcsq_u16(__a, __b) __arm_vcmpcsq_u16(__a, __b) -#define vcmpcsq_n_u16(__a, __b) __arm_vcmpcsq_n_u16(__a, __b) #define vcaddq_rot90_u16(__a, __b) __arm_vcaddq_rot90_u16(__a, __b) #define vcaddq_rot270_u16(__a, __b) __arm_vcaddq_rot270_u16(__a, __b) #define vbicq_u16(__a, __b) __arm_vbicq_u16(__a, __b) #define vaddvq_p_u16(__a, __p) __arm_vaddvq_p_u16(__a, __p) #define vaddvaq_u16(__a, __b) __arm_vaddvaq_u16(__a, __b) #define vbrsrq_n_u16(__a, __b) __arm_vbrsrq_n_u16(__a, __b) -#define vcmpneq_n_s16(__a, __b) __arm_vcmpneq_n_s16(__a, __b) -#define vcmpltq_s16(__a, __b) __arm_vcmpltq_s16(__a, __b) -#define vcmpltq_n_s16(__a, __b) __arm_vcmpltq_n_s16(__a, __b) -#define vcmpleq_s16(__a, __b) __arm_vcmpleq_s16(__a, __b) -#define vcmpleq_n_s16(__a, __b) __arm_vcmpleq_n_s16(__a, __b) -#define vcmpgtq_s16(__a, __b) __arm_vcmpgtq_s16(__a, __b) -#define vcmpgtq_n_s16(__a, __b) __arm_vcmpgtq_n_s16(__a, __b) -#define vcmpgeq_s16(__a, __b) __arm_vcmpgeq_s16(__a, __b) -#define vcmpgeq_n_s16(__a, __b) __arm_vcmpgeq_n_s16(__a, __b) -#define vcmpeqq_s16(__a, __b) __arm_vcmpeqq_s16(__a, __b) -#define vcmpeqq_n_s16(__a, __b) __arm_vcmpeqq_n_s16(__a, __b) #define vqshluq_n_s16(__a, __imm) __arm_vqshluq_n_s16(__a, __imm) #define vaddvq_p_s16(__a, __p) __arm_vaddvq_p_s16(__a, __p) #define vornq_s16(__a, __b) __arm_vornq_s16(__a, __b) @@ -540,30 +481,12 @@ #define vmulltq_int_u32(__a, __b) __arm_vmulltq_int_u32(__a, __b) #define vmullbq_int_u32(__a, __b) __arm_vmullbq_int_u32(__a, __b) #define vmladavq_u32(__a, __b) __arm_vmladavq_u32(__a, __b) -#define vcmpneq_n_u32(__a, __b) __arm_vcmpneq_n_u32(__a, __b) -#define vcmphiq_u32(__a, __b) __arm_vcmphiq_u32(__a, __b) -#define vcmphiq_n_u32(__a, __b) __arm_vcmphiq_n_u32(__a, __b) -#define vcmpeqq_u32(__a, __b) __arm_vcmpeqq_u32(__a, __b) -#define vcmpeqq_n_u32(__a, __b) __arm_vcmpeqq_n_u32(__a, __b) -#define vcmpcsq_u32(__a, __b) __arm_vcmpcsq_u32(__a, __b) -#define vcmpcsq_n_u32(__a, __b) __arm_vcmpcsq_n_u32(__a, __b) #define vcaddq_rot90_u32(__a, __b) __arm_vcaddq_rot90_u32(__a, __b) #define vcaddq_rot270_u32(__a, __b) __arm_vcaddq_rot270_u32(__a, __b) #define vbicq_u32(__a, __b) __arm_vbicq_u32(__a, __b) #define vaddvq_p_u32(__a, __p) __arm_vaddvq_p_u32(__a, __p) #define vaddvaq_u32(__a, __b) __arm_vaddvaq_u32(__a, __b) #define vbrsrq_n_u32(__a, __b) __arm_vbrsrq_n_u32(__a, __b) -#define vcmpneq_n_s32(__a, __b) __arm_vcmpneq_n_s32(__a, __b) -#define vcmpltq_s32(__a, __b) __arm_vcmpltq_s32(__a, __b) -#define vcmpltq_n_s32(__a, __b) __arm_vcmpltq_n_s32(__a, __b) -#define vcmpleq_s32(__a, __b) __arm_vcmpleq_s32(__a, __b) -#define vcmpleq_n_s32(__a, __b) __arm_vcmpleq_n_s32(__a, __b) -#define vcmpgtq_s32(__a, __b) __arm_vcmpgtq_s32(__a, __b) -#define vcmpgtq_n_s32(__a, __b) __arm_vcmpgtq_n_s32(__a, __b) -#define vcmpgeq_s32(__a, __b) __arm_vcmpgeq_s32(__a, __b) -#define vcmpgeq_n_s32(__a, __b) __arm_vcmpgeq_n_s32(__a, __b) -#define vcmpeqq_s32(__a, __b) __arm_vcmpeqq_s32(__a, __b) -#define vcmpeqq_n_s32(__a, __b) __arm_vcmpeqq_n_s32(__a, __b) #define vqshluq_n_s32(__a, __imm) __arm_vqshluq_n_s32(__a, __imm) #define vaddvq_p_s32(__a, __p) __arm_vaddvq_p_s32(__a, __p) #define vornq_s32(__a, __b) __arm_vornq_s32(__a, __b) @@ -584,18 +507,6 @@ #define vmullbq_poly_p8(__a, __b) __arm_vmullbq_poly_p8(__a, __b) #define vmlaldavq_u16(__a, __b) __arm_vmlaldavq_u16(__a, __b) #define vbicq_n_u16(__a, __imm) 
__arm_vbicq_n_u16(__a, __imm) -#define vcmpneq_n_f16(__a, __b) __arm_vcmpneq_n_f16(__a, __b) -#define vcmpneq_f16(__a, __b) __arm_vcmpneq_f16(__a, __b) -#define vcmpltq_n_f16(__a, __b) __arm_vcmpltq_n_f16(__a, __b) -#define vcmpltq_f16(__a, __b) __arm_vcmpltq_f16(__a, __b) -#define vcmpleq_n_f16(__a, __b) __arm_vcmpleq_n_f16(__a, __b) -#define vcmpleq_f16(__a, __b) __arm_vcmpleq_f16(__a, __b) -#define vcmpgtq_n_f16(__a, __b) __arm_vcmpgtq_n_f16(__a, __b) -#define vcmpgtq_f16(__a, __b) __arm_vcmpgtq_f16(__a, __b) -#define vcmpgeq_n_f16(__a, __b) __arm_vcmpgeq_n_f16(__a, __b) -#define vcmpgeq_f16(__a, __b) __arm_vcmpgeq_f16(__a, __b) -#define vcmpeqq_n_f16(__a, __b) __arm_vcmpeqq_n_f16(__a, __b) -#define vcmpeqq_f16(__a, __b) __arm_vcmpeqq_f16(__a, __b) #define vqdmulltq_s16(__a, __b) __arm_vqdmulltq_s16(__a, __b) #define vqdmulltq_n_s16(__a, __b) __arm_vqdmulltq_n_s16(__a, __b) #define vqdmullbq_s16(__a, __b) __arm_vqdmullbq_s16(__a, __b) @@ -617,18 +528,6 @@ #define vmullbq_poly_p16(__a, __b) __arm_vmullbq_poly_p16(__a, __b) #define vmlaldavq_u32(__a, __b) __arm_vmlaldavq_u32(__a, __b) #define vbicq_n_u32(__a, __imm) __arm_vbicq_n_u32(__a, __imm) -#define vcmpneq_n_f32(__a, __b) __arm_vcmpneq_n_f32(__a, __b) -#define vcmpneq_f32(__a, __b) __arm_vcmpneq_f32(__a, __b) -#define vcmpltq_n_f32(__a, __b) __arm_vcmpltq_n_f32(__a, __b) -#define vcmpltq_f32(__a, __b) __arm_vcmpltq_f32(__a, __b) -#define vcmpleq_n_f32(__a, __b) __arm_vcmpleq_n_f32(__a, __b) -#define vcmpleq_f32(__a, __b) __arm_vcmpleq_f32(__a, __b) -#define vcmpgtq_n_f32(__a, __b) __arm_vcmpgtq_n_f32(__a, __b) -#define vcmpgtq_f32(__a, __b) __arm_vcmpgtq_f32(__a, __b) -#define vcmpgeq_n_f32(__a, __b) __arm_vcmpgeq_n_f32(__a, __b) -#define vcmpgeq_f32(__a, __b) __arm_vcmpgeq_f32(__a, __b) -#define vcmpeqq_n_f32(__a, __b) __arm_vcmpeqq_n_f32(__a, __b) -#define vcmpeqq_f32(__a, __b) __arm_vcmpeqq_f32(__a, __b) #define vqdmulltq_s32(__a, __b) __arm_vqdmulltq_s32(__a, __b) #define vqdmulltq_n_s32(__a, __b) __arm_vqdmulltq_n_s32(__a, __b) #define vqdmullbq_s32(__a, __b) __arm_vqdmullbq_s32(__a, __b) @@ -666,8 +565,6 @@ #define vbicq_m_n_s32(__a, __imm, __p) __arm_vbicq_m_n_s32(__a, __imm, __p) #define vbicq_m_n_u16(__a, __imm, __p) __arm_vbicq_m_n_u16(__a, __imm, __p) #define vbicq_m_n_u32(__a, __imm, __p) __arm_vbicq_m_n_u32(__a, __imm, __p) -#define vcmpeqq_m_f16(__a, __b, __p) __arm_vcmpeqq_m_f16(__a, __b, __p) -#define vcmpeqq_m_f32(__a, __b, __p) __arm_vcmpeqq_m_f32(__a, __b, __p) #define vcvtaq_m_s16_f16(__inactive, __a, __p) __arm_vcvtaq_m_s16_f16(__inactive, __a, __p) #define vcvtaq_m_u16_f16(__inactive, __a, __p) __arm_vcvtaq_m_u16_f16(__inactive, __a, __p) #define vcvtaq_m_s32_f32(__inactive, __a, __p) __arm_vcvtaq_m_s32_f32(__inactive, __a, __p) @@ -696,29 +593,9 @@ #define vmladavq_p_u8(__a, __b, __p) __arm_vmladavq_p_u8(__a, __b, __p) #define vmladavaq_u8(__a, __b, __c) __arm_vmladavaq_u8(__a, __b, __c) #define vdupq_m_n_u8(__inactive, __a, __p) __arm_vdupq_m_n_u8(__inactive, __a, __p) -#define vcmpneq_m_u8(__a, __b, __p) __arm_vcmpneq_m_u8(__a, __b, __p) -#define vcmpneq_m_n_u8(__a, __b, __p) __arm_vcmpneq_m_n_u8(__a, __b, __p) -#define vcmphiq_m_u8(__a, __b, __p) __arm_vcmphiq_m_u8(__a, __b, __p) -#define vcmphiq_m_n_u8(__a, __b, __p) __arm_vcmphiq_m_n_u8(__a, __b, __p) -#define vcmpeqq_m_u8(__a, __b, __p) __arm_vcmpeqq_m_u8(__a, __b, __p) -#define vcmpeqq_m_n_u8(__a, __b, __p) __arm_vcmpeqq_m_n_u8(__a, __b, __p) -#define vcmpcsq_m_u8(__a, __b, __p) __arm_vcmpcsq_m_u8(__a, __b, __p) -#define vcmpcsq_m_n_u8(__a, __b, __p) 
__arm_vcmpcsq_m_n_u8(__a, __b, __p) #define vaddvaq_p_u8(__a, __b, __p) __arm_vaddvaq_p_u8(__a, __b, __p) #define vsriq_n_u8(__a, __b, __imm) __arm_vsriq_n_u8(__a, __b, __imm) #define vsliq_n_u8(__a, __b, __imm) __arm_vsliq_n_u8(__a, __b, __imm) -#define vcmpneq_m_s8(__a, __b, __p) __arm_vcmpneq_m_s8(__a, __b, __p) -#define vcmpneq_m_n_s8(__a, __b, __p) __arm_vcmpneq_m_n_s8(__a, __b, __p) -#define vcmpltq_m_s8(__a, __b, __p) __arm_vcmpltq_m_s8(__a, __b, __p) -#define vcmpltq_m_n_s8(__a, __b, __p) __arm_vcmpltq_m_n_s8(__a, __b, __p) -#define vcmpleq_m_s8(__a, __b, __p) __arm_vcmpleq_m_s8(__a, __b, __p) -#define vcmpleq_m_n_s8(__a, __b, __p) __arm_vcmpleq_m_n_s8(__a, __b, __p) -#define vcmpgtq_m_s8(__a, __b, __p) __arm_vcmpgtq_m_s8(__a, __b, __p) -#define vcmpgtq_m_n_s8(__a, __b, __p) __arm_vcmpgtq_m_n_s8(__a, __b, __p) -#define vcmpgeq_m_s8(__a, __b, __p) __arm_vcmpgeq_m_s8(__a, __b, __p) -#define vcmpgeq_m_n_s8(__a, __b, __p) __arm_vcmpgeq_m_n_s8(__a, __b, __p) -#define vcmpeqq_m_s8(__a, __b, __p) __arm_vcmpeqq_m_s8(__a, __b, __p) -#define vcmpeqq_m_n_s8(__a, __b, __p) __arm_vcmpeqq_m_n_s8(__a, __b, __p) #define vrev64q_m_s8(__inactive, __a, __p) __arm_vrev64q_m_s8(__inactive, __a, __p) #define vmvnq_m_s8(__inactive, __a, __p) __arm_vmvnq_m_s8(__inactive, __a, __p) #define vmlsdavxq_p_s8(__a, __b, __p) __arm_vmlsdavxq_p_s8(__a, __b, __p) @@ -756,29 +633,9 @@ #define vmladavq_p_u16(__a, __b, __p) __arm_vmladavq_p_u16(__a, __b, __p) #define vmladavaq_u16(__a, __b, __c) __arm_vmladavaq_u16(__a, __b, __c) #define vdupq_m_n_u16(__inactive, __a, __p) __arm_vdupq_m_n_u16(__inactive, __a, __p) -#define vcmpneq_m_u16(__a, __b, __p) __arm_vcmpneq_m_u16(__a, __b, __p) -#define vcmpneq_m_n_u16(__a, __b, __p) __arm_vcmpneq_m_n_u16(__a, __b, __p) -#define vcmphiq_m_u16(__a, __b, __p) __arm_vcmphiq_m_u16(__a, __b, __p) -#define vcmphiq_m_n_u16(__a, __b, __p) __arm_vcmphiq_m_n_u16(__a, __b, __p) -#define vcmpeqq_m_u16(__a, __b, __p) __arm_vcmpeqq_m_u16(__a, __b, __p) -#define vcmpeqq_m_n_u16(__a, __b, __p) __arm_vcmpeqq_m_n_u16(__a, __b, __p) -#define vcmpcsq_m_u16(__a, __b, __p) __arm_vcmpcsq_m_u16(__a, __b, __p) -#define vcmpcsq_m_n_u16(__a, __b, __p) __arm_vcmpcsq_m_n_u16(__a, __b, __p) #define vaddvaq_p_u16(__a, __b, __p) __arm_vaddvaq_p_u16(__a, __b, __p) #define vsriq_n_u16(__a, __b, __imm) __arm_vsriq_n_u16(__a, __b, __imm) #define vsliq_n_u16(__a, __b, __imm) __arm_vsliq_n_u16(__a, __b, __imm) -#define vcmpneq_m_s16(__a, __b, __p) __arm_vcmpneq_m_s16(__a, __b, __p) -#define vcmpneq_m_n_s16(__a, __b, __p) __arm_vcmpneq_m_n_s16(__a, __b, __p) -#define vcmpltq_m_s16(__a, __b, __p) __arm_vcmpltq_m_s16(__a, __b, __p) -#define vcmpltq_m_n_s16(__a, __b, __p) __arm_vcmpltq_m_n_s16(__a, __b, __p) -#define vcmpleq_m_s16(__a, __b, __p) __arm_vcmpleq_m_s16(__a, __b, __p) -#define vcmpleq_m_n_s16(__a, __b, __p) __arm_vcmpleq_m_n_s16(__a, __b, __p) -#define vcmpgtq_m_s16(__a, __b, __p) __arm_vcmpgtq_m_s16(__a, __b, __p) -#define vcmpgtq_m_n_s16(__a, __b, __p) __arm_vcmpgtq_m_n_s16(__a, __b, __p) -#define vcmpgeq_m_s16(__a, __b, __p) __arm_vcmpgeq_m_s16(__a, __b, __p) -#define vcmpgeq_m_n_s16(__a, __b, __p) __arm_vcmpgeq_m_n_s16(__a, __b, __p) -#define vcmpeqq_m_s16(__a, __b, __p) __arm_vcmpeqq_m_s16(__a, __b, __p) -#define vcmpeqq_m_n_s16(__a, __b, __p) __arm_vcmpeqq_m_n_s16(__a, __b, __p) #define vrev64q_m_s16(__inactive, __a, __p) __arm_vrev64q_m_s16(__inactive, __a, __p) #define vmvnq_m_s16(__inactive, __a, __p) __arm_vmvnq_m_s16(__inactive, __a, __p) #define vmlsdavxq_p_s16(__a, __b, __p) 
__arm_vmlsdavxq_p_s16(__a, __b, __p) @@ -816,29 +673,9 @@ #define vmladavq_p_u32(__a, __b, __p) __arm_vmladavq_p_u32(__a, __b, __p) #define vmladavaq_u32(__a, __b, __c) __arm_vmladavaq_u32(__a, __b, __c) #define vdupq_m_n_u32(__inactive, __a, __p) __arm_vdupq_m_n_u32(__inactive, __a, __p) -#define vcmpneq_m_u32(__a, __b, __p) __arm_vcmpneq_m_u32(__a, __b, __p) -#define vcmpneq_m_n_u32(__a, __b, __p) __arm_vcmpneq_m_n_u32(__a, __b, __p) -#define vcmphiq_m_u32(__a, __b, __p) __arm_vcmphiq_m_u32(__a, __b, __p) -#define vcmphiq_m_n_u32(__a, __b, __p) __arm_vcmphiq_m_n_u32(__a, __b, __p) -#define vcmpeqq_m_u32(__a, __b, __p) __arm_vcmpeqq_m_u32(__a, __b, __p) -#define vcmpeqq_m_n_u32(__a, __b, __p) __arm_vcmpeqq_m_n_u32(__a, __b, __p) -#define vcmpcsq_m_u32(__a, __b, __p) __arm_vcmpcsq_m_u32(__a, __b, __p) -#define vcmpcsq_m_n_u32(__a, __b, __p) __arm_vcmpcsq_m_n_u32(__a, __b, __p) #define vaddvaq_p_u32(__a, __b, __p) __arm_vaddvaq_p_u32(__a, __b, __p) #define vsriq_n_u32(__a, __b, __imm) __arm_vsriq_n_u32(__a, __b, __imm) #define vsliq_n_u32(__a, __b, __imm) __arm_vsliq_n_u32(__a, __b, __imm) -#define vcmpneq_m_s32(__a, __b, __p) __arm_vcmpneq_m_s32(__a, __b, __p) -#define vcmpneq_m_n_s32(__a, __b, __p) __arm_vcmpneq_m_n_s32(__a, __b, __p) -#define vcmpltq_m_s32(__a, __b, __p) __arm_vcmpltq_m_s32(__a, __b, __p) -#define vcmpltq_m_n_s32(__a, __b, __p) __arm_vcmpltq_m_n_s32(__a, __b, __p) -#define vcmpleq_m_s32(__a, __b, __p) __arm_vcmpleq_m_s32(__a, __b, __p) -#define vcmpleq_m_n_s32(__a, __b, __p) __arm_vcmpleq_m_n_s32(__a, __b, __p) -#define vcmpgtq_m_s32(__a, __b, __p) __arm_vcmpgtq_m_s32(__a, __b, __p) -#define vcmpgtq_m_n_s32(__a, __b, __p) __arm_vcmpgtq_m_n_s32(__a, __b, __p) -#define vcmpgeq_m_s32(__a, __b, __p) __arm_vcmpgeq_m_s32(__a, __b, __p) -#define vcmpgeq_m_n_s32(__a, __b, __p) __arm_vcmpgeq_m_n_s32(__a, __b, __p) -#define vcmpeqq_m_s32(__a, __b, __p) __arm_vcmpeqq_m_s32(__a, __b, __p) -#define vcmpeqq_m_n_s32(__a, __b, __p) __arm_vcmpeqq_m_n_s32(__a, __b, __p) #define vrev64q_m_s32(__inactive, __a, __p) __arm_vrev64q_m_s32(__inactive, __a, __p) #define vmvnq_m_s32(__inactive, __a, __p) __arm_vmvnq_m_s32(__inactive, __a, __p) #define vmlsdavxq_p_s32(__a, __b, __p) __arm_vmlsdavxq_p_s32(__a, __b, __p) @@ -913,17 +750,6 @@ #define vpselq_f16(__a, __b, __p) __arm_vpselq_f16(__a, __b, __p) #define vrev32q_m_s8(__inactive, __a, __p) __arm_vrev32q_m_s8(__inactive, __a, __p) #define vrev64q_m_f16(__inactive, __a, __p) __arm_vrev64q_m_f16(__inactive, __a, __p) -#define vcmpeqq_m_n_f16(__a, __b, __p) __arm_vcmpeqq_m_n_f16(__a, __b, __p) -#define vcmpgeq_m_f16(__a, __b, __p) __arm_vcmpgeq_m_f16(__a, __b, __p) -#define vcmpgeq_m_n_f16(__a, __b, __p) __arm_vcmpgeq_m_n_f16(__a, __b, __p) -#define vcmpgtq_m_f16(__a, __b, __p) __arm_vcmpgtq_m_f16(__a, __b, __p) -#define vcmpgtq_m_n_f16(__a, __b, __p) __arm_vcmpgtq_m_n_f16(__a, __b, __p) -#define vcmpleq_m_f16(__a, __b, __p) __arm_vcmpleq_m_f16(__a, __b, __p) -#define vcmpleq_m_n_f16(__a, __b, __p) __arm_vcmpleq_m_n_f16(__a, __b, __p) -#define vcmpltq_m_f16(__a, __b, __p) __arm_vcmpltq_m_f16(__a, __b, __p) -#define vcmpltq_m_n_f16(__a, __b, __p) __arm_vcmpltq_m_n_f16(__a, __b, __p) -#define vcmpneq_m_f16(__a, __b, __p) __arm_vcmpneq_m_f16(__a, __b, __p) -#define vcmpneq_m_n_f16(__a, __b, __p) __arm_vcmpneq_m_n_f16(__a, __b, __p) #define vmvnq_m_n_u16(__inactive, __imm, __p) __arm_vmvnq_m_n_u16(__inactive, __imm, __p) #define vcvtmq_m_u16_f16(__inactive, __a, __p) __arm_vcvtmq_m_u16_f16(__inactive, __a, __p) #define vcvtnq_m_u16_f16(__inactive, 
__a, __p) __arm_vcvtnq_m_u16_f16(__inactive, __a, __p) @@ -961,17 +787,6 @@ #define vpselq_f32(__a, __b, __p) __arm_vpselq_f32(__a, __b, __p) #define vrev32q_m_s16(__inactive, __a, __p) __arm_vrev32q_m_s16(__inactive, __a, __p) #define vrev64q_m_f32(__inactive, __a, __p) __arm_vrev64q_m_f32(__inactive, __a, __p) -#define vcmpeqq_m_n_f32(__a, __b, __p) __arm_vcmpeqq_m_n_f32(__a, __b, __p) -#define vcmpgeq_m_f32(__a, __b, __p) __arm_vcmpgeq_m_f32(__a, __b, __p) -#define vcmpgeq_m_n_f32(__a, __b, __p) __arm_vcmpgeq_m_n_f32(__a, __b, __p) -#define vcmpgtq_m_f32(__a, __b, __p) __arm_vcmpgtq_m_f32(__a, __b, __p) -#define vcmpgtq_m_n_f32(__a, __b, __p) __arm_vcmpgtq_m_n_f32(__a, __b, __p) -#define vcmpleq_m_f32(__a, __b, __p) __arm_vcmpleq_m_f32(__a, __b, __p) -#define vcmpleq_m_n_f32(__a, __b, __p) __arm_vcmpleq_m_n_f32(__a, __b, __p) -#define vcmpltq_m_f32(__a, __b, __p) __arm_vcmpltq_m_f32(__a, __b, __p) -#define vcmpltq_m_n_f32(__a, __b, __p) __arm_vcmpltq_m_n_f32(__a, __b, __p) -#define vcmpneq_m_f32(__a, __b, __p) __arm_vcmpneq_m_f32(__a, __b, __p) -#define vcmpneq_m_n_f32(__a, __b, __p) __arm_vcmpneq_m_n_f32(__a, __b, __p) #define vmvnq_m_n_u32(__inactive, __imm, __p) __arm_vmvnq_m_n_u32(__inactive, __imm, __p) #define vcvtmq_m_u32_f32(__inactive, __a, __p) __arm_vcvtmq_m_u32_f32(__inactive, __a, __p) #define vcvtnq_m_u32_f32(__inactive, __a, __p) __arm_vcvtnq_m_u32_f32(__inactive, __a, __p) @@ -2149,48 +1964,6 @@ __arm_vaddlvq_p_u32 (uint32x4_t __a, mve_pred16_t __p) return __builtin_mve_vaddlvq_p_uv4si (__a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vcmpneq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vcmpneq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vcmpneq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vcmpneq_v16qi ((int8x16_t)__a, (int8x16_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vcmpneq_v8hi ((int16x8_t)__a, (int16x8_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vcmpneq_v4si ((int32x4_t)__a, (int32x4_t)__b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vornq_u8 (uint8x16_t __a, uint8x16_t __b) @@ -2219,55 +1992,6 @@ __arm_vmladavq_u8 (uint8x16_t __a, uint8x16_t __b) return __builtin_mve_vmladavq_uv16qi (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_u8 (uint8x16_t __a, uint8_t __b) -{ - return __builtin_mve_vcmpneq_n_v16qi ((int8x16_t)__a, (int8_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__)) -__arm_vcmphiq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vcmphiq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_n_u8 (uint8x16_t __a, uint8_t __b) -{ - return __builtin_mve_vcmphiq_n_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vcmpeqq_v16qi ((int8x16_t)__a, (int8x16_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_u8 (uint8x16_t __a, uint8_t __b) -{ - return __builtin_mve_vcmpeqq_n_v16qi ((int8x16_t)__a, (int8_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vcmpcsq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_n_u8 (uint8x16_t __a, uint8_t __b) -{ - return __builtin_mve_vcmpcsq_n_v16qi (__a, __b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcaddq_rot90_u8 (uint8x16_t __a, uint8x16_t __b) @@ -2312,83 +2036,6 @@ __arm_vbrsrq_n_u8 (uint8x16_t __a, int32_t __b) return __builtin_mve_vbrsrq_n_uv16qi (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_s8 (int8x16_t __a, int8_t __b) -{ - return __builtin_mve_vcmpneq_n_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vcmpltq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_n_s8 (int8x16_t __a, int8_t __b) -{ - return __builtin_mve_vcmpltq_n_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vcmpleq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_n_s8 (int8x16_t __a, int8_t __b) -{ - return __builtin_mve_vcmpleq_n_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vcmpgtq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_n_s8 (int8x16_t __a, int8_t __b) -{ - return __builtin_mve_vcmpgtq_n_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vcmpgeq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_n_s8 (int8x16_t __a, int8_t __b) -{ - return __builtin_mve_vcmpgeq_n_v16qi (__a, __b); -} - -__extension__ 
extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vcmpeqq_v16qi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_s8 (int8x16_t __a, int8_t __b) -{ - return __builtin_mve_vcmpeqq_n_v16qi (__a, __b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqshluq_n_s8 (int8x16_t __a, const int __imm) @@ -2529,55 +2176,6 @@ __arm_vmladavq_u16 (uint16x8_t __a, uint16x8_t __b) return __builtin_mve_vmladavq_uv8hi (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_u16 (uint16x8_t __a, uint16_t __b) -{ - return __builtin_mve_vcmpneq_n_v8hi ((int16x8_t)__a, (int16_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vcmphiq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_n_u16 (uint16x8_t __a, uint16_t __b) -{ - return __builtin_mve_vcmphiq_n_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vcmpeqq_v8hi ((int16x8_t)__a, (int16x8_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_u16 (uint16x8_t __a, uint16_t __b) -{ - return __builtin_mve_vcmpeqq_n_v8hi ((int16x8_t)__a, (int16_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vcmpcsq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_n_u16 (uint16x8_t __a, uint16_t __b) -{ - return __builtin_mve_vcmpcsq_n_v8hi (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcaddq_rot90_u16 (uint16x8_t __a, uint16x8_t __b) @@ -2622,83 +2220,6 @@ __arm_vbrsrq_n_u16 (uint16x8_t __a, int32_t __b) return __builtin_mve_vbrsrq_n_uv8hi (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_s16 (int16x8_t __a, int16_t __b) -{ - return __builtin_mve_vcmpneq_n_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vcmpltq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_n_s16 (int16x8_t __a, int16_t __b) -{ - return __builtin_mve_vcmpltq_n_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vcmpleq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_n_s16 (int16x8_t __a, int16_t __b) -{ - return __builtin_mve_vcmpleq_n_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vcmpgtq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_n_s16 (int16x8_t __a, int16_t __b) -{ - return __builtin_mve_vcmpgtq_n_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vcmpgeq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_n_s16 (int16x8_t __a, int16_t __b) -{ - return __builtin_mve_vcmpgeq_n_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vcmpeqq_v8hi (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_s16 (int16x8_t __a, int16_t __b) -{ - return __builtin_mve_vcmpeqq_n_v8hi (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqshluq_n_s16 (int16x8_t __a, const int __imm) @@ -2839,55 +2360,6 @@ __arm_vmladavq_u32 (uint32x4_t __a, uint32x4_t __b) return __builtin_mve_vmladavq_uv4si (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_u32 (uint32x4_t __a, uint32_t __b) -{ - return __builtin_mve_vcmpneq_n_v4si ((int32x4_t)__a, (int32_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vcmphiq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_n_u32 (uint32x4_t __a, uint32_t __b) -{ - return __builtin_mve_vcmphiq_n_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vcmpeqq_v4si ((int32x4_t)__a, (int32x4_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_u32 (uint32x4_t __a, uint32_t __b) -{ - return __builtin_mve_vcmpeqq_n_v4si ((int32x4_t)__a, (int32_t)__b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vcmpcsq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_n_u32 (uint32x4_t __a, uint32_t __b) -{ - return __builtin_mve_vcmpcsq_n_v4si (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcaddq_rot90_u32 (uint32x4_t __a, uint32x4_t __b) @@ 
-2932,100 +2404,23 @@ __arm_vbrsrq_n_u32 (uint32x4_t __a, int32_t __b) return __builtin_mve_vbrsrq_n_uv4si (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_s32 (int32x4_t __a, int32_t __b) +__arm_vqshluq_n_s32 (int32x4_t __a, const int __imm) { - return __builtin_mve_vcmpneq_n_v4si (__a, __b); + return __builtin_mve_vqshluq_n_sv4si (__a, __imm); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_s32 (int32x4_t __a, int32x4_t __b) +__arm_vaddvq_p_s32 (int32x4_t __a, mve_pred16_t __p) { - return __builtin_mve_vcmpltq_v4si (__a, __b); + return __builtin_mve_vaddvq_p_sv4si (__a, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_n_s32 (int32x4_t __a, int32_t __b) -{ - return __builtin_mve_vcmpltq_n_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vcmpleq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_n_s32 (int32x4_t __a, int32_t __b) -{ - return __builtin_mve_vcmpleq_n_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vcmpgtq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_n_s32 (int32x4_t __a, int32_t __b) -{ - return __builtin_mve_vcmpgtq_n_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vcmpgeq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_n_s32 (int32x4_t __a, int32_t __b) -{ - return __builtin_mve_vcmpgeq_n_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vcmpeqq_v4si (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_s32 (int32x4_t __a, int32_t __b) -{ - return __builtin_mve_vcmpeqq_n_v4si (__a, __b); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqshluq_n_s32 (int32x4_t __a, const int __imm) -{ - return __builtin_mve_vqshluq_n_sv4si (__a, __imm); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvq_p_s32 (int32x4_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vaddvq_p_sv4si (__a, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vornq_s32 (int32x4_t __a, int32x4_t __b) +__arm_vornq_s32 (int32x4_t __a, int32x4_t __b) { return __builtin_mve_vornq_sv4si 
(__a, __b); } @@ -3581,62 +2976,6 @@ __arm_vdupq_m_n_u8 (uint8x16_t __inactive, uint8_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_uv16qi (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_u8 (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_n_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmphiq_m_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m_n_u8 (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmphiq_m_n_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_u8 (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpcsq_m_uv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m_n_u8 (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpcsq_m_n_uv16qi (__a, __b, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_u8 (uint32_t __a, uint8x16_t __b, mve_pred16_t __p) @@ -3658,90 +2997,6 @@ __arm_vsliq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __imm) return __builtin_mve_vsliq_n_uv16qi (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_s8 (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_n_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_n_s8 (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_n_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_n_s8 (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_n_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_n_s8 (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_n_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_n_s8 (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_n_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_sv16qi (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_s8 (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_sv16qi (__a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q_m_s8 (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) @@ -4001,301 +3256,161 @@ __arm_vdupq_m_n_u16 (uint16x8_t __inactive, uint16_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_uv8hi (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) +__arm_vaddvaq_p_u16 (uint32_t __a, uint16x8_t __b, mve_pred16_t __p) { - return __builtin_mve_vcmpneq_m_uv8hi (__a, __b, __p); + return __builtin_mve_vaddvaq_p_uv8hi (__a, __b, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_u16 (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) +__arm_vsriq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __imm) { - return __builtin_mve_vcmpneq_m_n_uv8hi (__a, __b, __p); + return __builtin_mve_vsriq_n_uv8hi (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) +__arm_vsliq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __imm) { - return __builtin_mve_vcmphiq_m_uv8hi (__a, __b, __p); + return __builtin_mve_vsliq_n_uv8hi (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t 
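(Not part of the patch: a minimal usage sketch of the predicated compare intrinsics whose open-coded definitions are deleted above. It assumes an MVE-enabled target and <arm_mve.h> with the generic vcmpeqq_m/vpselq names available; select_eq_s8 is a hypothetical helper, not something defined by the header.)

#include <arm_mve.h>

/* Lanes where a == b and the corresponding bit of p is set take c;
   all other lanes take d.  */
int8x16_t
select_eq_s8 (int8x16_t a, int8x16_t b, int8x16_t c, int8x16_t d,
              mve_pred16_t p)
{
  mve_pred16_t eq = vcmpeqq_m (a, b, p);  /* predicated compare -> predicate */
  return vpselq (c, d, eq);               /* per-lane select on the predicate */
}
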
+__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m_n_u16 (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) +__arm_vrev64q_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) { - return __builtin_mve_vcmphiq_m_n_uv8hi (__a, __b, __p); + return __builtin_mve_vrev64q_m_sv8hi (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) +__arm_vmvnq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) { - return __builtin_mve_vcmpeqq_m_uv8hi (__a, __b, __p); + return __builtin_mve_vmvnq_m_sv8hi (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_u16 (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) +__arm_vmlsdavxq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) { - return __builtin_mve_vcmpeqq_m_n_uv8hi (__a, __b, __p); + return __builtin_mve_vmlsdavxq_p_sv8hi (__a, __b, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) +__arm_vmlsdavq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) { - return __builtin_mve_vcmpcsq_m_uv8hi (__a, __b, __p); + return __builtin_mve_vmlsdavq_p_sv8hi (__a, __b, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m_n_u16 (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) +__arm_vmladavxq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) { - return __builtin_mve_vcmpcsq_m_n_uv8hi (__a, __b, __p); + return __builtin_mve_vmladavxq_p_sv8hi (__a, __b, __p); } -__extension__ extern __inline uint32_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvaq_p_u16 (uint32_t __a, uint16x8_t __b, mve_pred16_t __p) +__arm_vmladavq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) { - return __builtin_mve_vaddvaq_p_uv8hi (__a, __b, __p); + return __builtin_mve_vmladavq_p_sv8hi (__a, __b, __p); } -__extension__ extern __inline uint16x8_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vsriq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __imm) +__arm_vdupq_m_n_s16 (int16x8_t __inactive, int16_t __a, mve_pred16_t __p) { - return __builtin_mve_vsriq_n_uv8hi (__a, __b, __imm); + return __builtin_mve_vdupq_m_n_sv8hi (__inactive, __a, __p); } -__extension__ extern __inline uint16x8_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vsliq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __imm) +__arm_vaddvaq_p_s16 (int32_t __a, int16x8_t __b, mve_pred16_t __p) { - return __builtin_mve_vsliq_n_uv8hi (__a, __b, __imm); + return __builtin_mve_vaddvaq_p_sv8hi (__a, __b, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vqrdmlsdhxq_s16 (int16x8_t __inactive, int16x8_t __a, 
int16x8_t __b) { - return __builtin_mve_vcmpneq_m_sv8hi (__a, __b, __p); + return __builtin_mve_vqrdmlsdhxq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_s16 (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vqrdmlsdhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { - return __builtin_mve_vcmpneq_m_n_sv8hi (__a, __b, __p); + return __builtin_mve_vqrdmlsdhq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vqrdmlashq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) { - return __builtin_mve_vcmpltq_m_sv8hi (__a, __b, __p); + return __builtin_mve_vqrdmlashq_n_sv8hi (__a, __b, __c); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_n_s16 (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vqdmlashq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) { - return __builtin_mve_vcmpltq_m_n_sv8hi (__a, __b, __p); + return __builtin_mve_vqdmlashq_n_sv8hi (__a, __b, __c); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vqrdmlahq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) { - return __builtin_mve_vcmpleq_m_sv8hi (__a, __b, __p); + return __builtin_mve_vqrdmlahq_n_sv8hi (__a, __b, __c); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_n_s16 (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vqrdmladhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { - return __builtin_mve_vcmpleq_m_n_sv8hi (__a, __b, __p); + return __builtin_mve_vqrdmladhxq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vqrdmladhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { - return __builtin_mve_vcmpgtq_m_sv8hi (__a, __b, __p); + return __builtin_mve_vqrdmladhq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_n_s16 (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vqdmlsdhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { - return __builtin_mve_vcmpgtq_m_n_sv8hi (__a, __b, __p); + return __builtin_mve_vqdmlsdhxq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vqdmlsdhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { - return __builtin_mve_vcmpgeq_m_sv8hi (__a, __b, __p); + return __builtin_mve_vqdmlsdhq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern 
__inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_n_s16 (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vqdmlahq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) { - return __builtin_mve_vcmpgeq_m_n_sv8hi (__a, __b, __p); + return __builtin_mve_vqdmlahq_n_sv8hi (__a, __b, __c); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vqdmladhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { - return __builtin_mve_vcmpeqq_m_sv8hi (__a, __b, __p); + return __builtin_mve_vqdmladhxq_sv8hi (__inactive, __a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_s16 (int16x8_t __a, int16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vrev64q_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vrev64q_m_sv8hi (__inactive, __a, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmvnq_m_s16 (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vmvnq_m_sv8hi (__inactive, __a, __p); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmlsdavxq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmlsdavxq_p_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmlsdavq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmlsdavq_p_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmladavxq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmladavxq_p_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmladavq_p_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmladavq_p_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vdupq_m_n_s16 (int16x8_t __inactive, int16_t __a, mve_pred16_t __p) -{ - return __builtin_mve_vdupq_m_n_sv8hi (__inactive, __a, __p); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvaq_p_s16 (int32_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vaddvaq_p_sv8hi (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqrdmlsdhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqrdmlsdhxq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqrdmlsdhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqrdmlsdhq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline 
int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqrdmlashq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) -{ - return __builtin_mve_vqrdmlashq_n_sv8hi (__a, __b, __c); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqdmlashq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) -{ - return __builtin_mve_vqdmlashq_n_sv8hi (__a, __b, __c); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqrdmlahq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) -{ - return __builtin_mve_vqrdmlahq_n_sv8hi (__a, __b, __c); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqrdmladhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqrdmladhxq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqrdmladhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqrdmladhq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqdmlsdhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqdmlsdhxq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqdmlsdhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqdmlsdhq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqdmlahq_n_s16 (int16x8_t __a, int16x8_t __b, int16_t __c) -{ - return __builtin_mve_vqdmlahq_n_sv8hi (__a, __b, __c); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vqdmladhxq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vqdmladhxq_sv8hi (__inactive, __a, __b); -} - -__extension__ extern __inline int16x8_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqdmladhq_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b) { @@ -4421,62 +3536,6 @@ __arm_vdupq_m_n_u32 (uint32x4_t __inactive, uint32_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_uv4si (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_u32 (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_n_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmphiq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m_n_u32 (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - 
return __builtin_mve_vcmphiq_m_n_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_u32 (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpcsq_m_uv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m_n_u32 (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpcsq_m_n_uv4si (__a, __b, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p_u32 (uint32_t __a, uint32x4_t __b, mve_pred16_t __p) @@ -4498,90 +3557,6 @@ __arm_vsliq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __imm) return __builtin_mve_vsliq_n_uv4si (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_s32 (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_n_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_n_s32 (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_n_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_n_s32 (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_n_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_n_s32 (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_n_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ 
- return __builtin_mve_vcmpgeq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_n_s32 (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_n_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_sv4si (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_s32 (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_sv4si (__a, __b, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q_m_s32 (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) @@ -9775,119 +8750,35 @@ __arm_vcvtq_n_u32_f32 (float32x4_t __a, const int __imm6) return __builtin_mve_vcvtq_n_from_f_uv4si (__a, __imm6); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_f16 (float16x8_t __a, float16_t __b) +__arm_vornq_f16 (float16x8_t __a, float16x8_t __b) { - return __builtin_mve_vcmpneq_n_fv8hf (__a, __b); + return __builtin_mve_vornq_fv8hf (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_f16 (float16x8_t __a, float16x8_t __b) +__arm_vcmulq_rot90_f16 (float16x8_t __a, float16x8_t __b) { - return __builtin_mve_vcmpneq_fv8hf (__a, __b); + return __builtin_mve_vcmulq_rot90v8hf (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_n_f16 (float16x8_t __a, float16_t __b) +__arm_vcmulq_rot270_f16 (float16x8_t __a, float16x8_t __b) { - return __builtin_mve_vcmpltq_n_fv8hf (__a, __b); + return __builtin_mve_vcmulq_rot270v8hf (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_f16 (float16x8_t __a, float16x8_t __b) +__arm_vcmulq_rot180_f16 (float16x8_t __a, float16x8_t __b) { - return __builtin_mve_vcmpltq_fv8hf (__a, __b); + return __builtin_mve_vcmulq_rot180v8hf (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_n_f16 (float16x8_t __a, float16_t __b) -{ - return __builtin_mve_vcmpleq_n_fv8hf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmpleq_fv8hf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_n_f16 (float16x8_t __a, float16_t __b) -{ - return __builtin_mve_vcmpgtq_n_fv8hf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmpgtq_fv8hf (__a, __b); -} - -__extension__ extern __inline 
mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_n_f16 (float16x8_t __a, float16_t __b) -{ - return __builtin_mve_vcmpgeq_n_fv8hf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmpgeq_fv8hf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_f16 (float16x8_t __a, float16_t __b) -{ - return __builtin_mve_vcmpeqq_n_fv8hf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmpeqq_fv8hf (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vornq_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vornq_fv8hf (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq_rot90_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmulq_rot90v8hf (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq_rot270_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmulq_rot270v8hf (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq_rot180_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vcmulq_rot180v8hf (__a, __b); -} - -__extension__ extern __inline float16x8_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmulq_f16 (float16x8_t __a, float16x8_t __b) { @@ -9915,90 +8806,6 @@ __arm_vbicq_f16 (float16x8_t __a, float16x8_t __b) return __builtin_mve_vbicq_fv8hf (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_n_f32 (float32x4_t __a, float32_t __b) -{ - return __builtin_mve_vcmpneq_n_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_f32 (float32x4_t __a, float32x4_t __b) -{ - return __builtin_mve_vcmpneq_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_n_f32 (float32x4_t __a, float32_t __b) -{ - return __builtin_mve_vcmpltq_n_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_f32 (float32x4_t __a, float32x4_t __b) -{ - return __builtin_mve_vcmpltq_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_n_f32 (float32x4_t __a, float32_t __b) -{ - return __builtin_mve_vcmpleq_n_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_f32 (float32x4_t __a, float32x4_t __b) -{ - return __builtin_mve_vcmpleq_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, 
__gnu_inline__, __artificial__)) -__arm_vcmpgtq_n_f32 (float32x4_t __a, float32_t __b) -{ - return __builtin_mve_vcmpgtq_n_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_f32 (float32x4_t __a, float32x4_t __b) -{ - return __builtin_mve_vcmpgtq_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_n_f32 (float32x4_t __a, float32_t __b) -{ - return __builtin_mve_vcmpgeq_n_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_f32 (float32x4_t __a, float32x4_t __b) -{ - return __builtin_mve_vcmpgeq_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_n_f32 (float32x4_t __a, float32_t __b) -{ - return __builtin_mve_vcmpeqq_n_fv4sf (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_f32 (float32x4_t __a, float32x4_t __b) -{ - return __builtin_mve_vcmpeqq_fv4sf (__a, __b); -} - __extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vornq_f32 (float32x4_t __a, float32x4_t __b) @@ -10069,20 +8876,6 @@ __arm_vcvtbq_f16_f32 (float16x8_t __a, float32x4_t __b) return __builtin_mve_vcvtbq_f16_f32v8hf (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_fv4sf (__a, __b, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtaq_m_s16_f16 (int16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) @@ -10280,83 +9073,6 @@ __arm_vrev64q_m_f16 (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_fv8hf (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_n_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vcmpgtq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_n_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_n_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_n_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_fv8hf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_f16 (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_n_fv8hf (__a, __b, __p); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m_u16_f16 (uint16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) @@ -10490,83 +9206,6 @@ __arm_vrev64q_m_f32 (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) return __builtin_mve_vrev64q_m_fv4sf (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpeqq_m_n_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgeq_m_n_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpgtq_m_n_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline 
mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpleq_m_n_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpltq_m_n_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_fv4sf (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m_n_f32 (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vcmpneq_m_n_fv4sf (__a, __b, __p); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m_u32_f32 (uint32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) @@ -12000,60 +10639,18 @@ __arm_vaddlvq_p (uint32x4_t __a, mve_pred16_t __p) return __arm_vaddlvq_p_u32 (__a, __p); } -__extension__ extern __inline int32_t +__extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (int8x16_t __a, int8x16_t __b) +__arm_vornq (uint8x16_t __a, uint8x16_t __b) { - return __arm_vcmpneq_s8 (__a, __b); + return __arm_vornq_u8 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (int16x8_t __a, int16x8_t __b) +__arm_vmulltq_int (uint8x16_t __a, uint8x16_t __b) { - return __arm_vcmpneq_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (int32x4_t __a, int32x4_t __b) -{ - return __arm_vcmpneq_s32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vcmpneq_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vcmpneq_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vcmpneq_u32 (__a, __b); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vornq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vornq_u8 (__a, __b); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmulltq_int (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vmulltq_int_u8 (__a, __b); + return __arm_vmulltq_int_u8 (__a, __b); } __extension__ extern __inline uint16x8_t @@ -12070,55 +10667,6 @@ __arm_vmladavq (uint8x16_t __a, uint8x16_t __b) 
return __arm_vmladavq_u8 (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (uint8x16_t __a, uint8_t __b) -{ - return __arm_vcmpneq_n_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vcmphiq_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq (uint8x16_t __a, uint8_t __b) -{ - return __arm_vcmphiq_n_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vcmpeqq_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (uint8x16_t __a, uint8_t __b) -{ - return __arm_vcmpeqq_n_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vcmpcsq_u8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq (uint8x16_t __a, uint8_t __b) -{ - return __arm_vcmpcsq_n_u8 (__a, __b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcaddq_rot90 (uint8x16_t __a, uint8x16_t __b) @@ -12161,83 +10709,6 @@ __arm_vbrsrq (uint8x16_t __a, int32_t __b) return __arm_vbrsrq_n_u8 (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (int8x16_t __a, int8_t __b) -{ - return __arm_vcmpneq_n_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vcmpltq_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (int8x16_t __a, int8_t __b) -{ - return __arm_vcmpltq_n_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vcmpleq_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (int8x16_t __a, int8_t __b) -{ - return __arm_vcmpleq_n_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vcmpgtq_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (int8x16_t __a, int8_t __b) -{ - return __arm_vcmpgtq_n_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vcmpgeq_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (int8x16_t __a, int8_t __b) -{ - 
return __arm_vcmpgeq_n_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vcmpeqq_s8 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (int8x16_t __a, int8_t __b) -{ - return __arm_vcmpeqq_n_s8 (__a, __b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqshluq (int8x16_t __a, const int __imm) @@ -12378,55 +10849,6 @@ __arm_vmladavq (uint16x8_t __a, uint16x8_t __b) return __arm_vmladavq_u16 (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (uint16x8_t __a, uint16_t __b) -{ - return __arm_vcmpneq_n_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vcmphiq_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq (uint16x8_t __a, uint16_t __b) -{ - return __arm_vcmphiq_n_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vcmpeqq_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (uint16x8_t __a, uint16_t __b) -{ - return __arm_vcmpeqq_n_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vcmpcsq_u16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq (uint16x8_t __a, uint16_t __b) -{ - return __arm_vcmpcsq_n_u16 (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcaddq_rot90 (uint16x8_t __a, uint16x8_t __b) @@ -12469,83 +10891,6 @@ __arm_vbrsrq (uint16x8_t __a, int32_t __b) return __arm_vbrsrq_n_u16 (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (int16x8_t __a, int16_t __b) -{ - return __arm_vcmpneq_n_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vcmpltq_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (int16x8_t __a, int16_t __b) -{ - return __arm_vcmpltq_n_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vcmpleq_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (int16x8_t __a, int16_t __b) -{ - return __arm_vcmpleq_n_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vcmpgtq_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (int16x8_t __a, int16_t __b) -{ - return __arm_vcmpgtq_n_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vcmpgeq_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (int16x8_t __a, int16_t __b) -{ - return __arm_vcmpgeq_n_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vcmpeqq_s16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (int16x8_t __a, int16_t __b) -{ - return __arm_vcmpeqq_n_s16 (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vqshluq (int16x8_t __a, const int __imm) @@ -12644,214 +10989,88 @@ __arm_vbrsrq (int16x8_t __a, int32_t __b) return __arm_vbrsrq_n_s16 (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vbicq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vbicq_s16 (__a, __b); -} - -__extension__ extern __inline int32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvaq (int32_t __a, int16x8_t __b) -{ - return __arm_vaddvaq_s16 (__a, __b); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vornq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vornq_u32 (__a, __b); -} - -__extension__ extern __inline uint64x2_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmulltq_int (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vmulltq_int_u32 (__a, __b); -} - -__extension__ extern __inline uint64x2_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmullbq_int (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vmullbq_int_u32 (__a, __b); -} - -__extension__ extern __inline uint32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmladavq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vmladavq_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (uint32x4_t __a, uint32_t __b) -{ - return __arm_vcmpneq_n_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vcmphiq_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq (uint32x4_t __a, uint32_t __b) -{ - return __arm_vcmphiq_n_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vcmpeqq_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t 
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (uint32x4_t __a, uint32_t __b) -{ - return __arm_vcmpeqq_n_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vcmpcsq_u32 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq (uint32x4_t __a, uint32_t __b) -{ - return __arm_vcmpcsq_n_u32 (__a, __b); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcaddq_rot90 (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vcaddq_rot90_u32 (__a, __b); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcaddq_rot270 (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vcaddq_rot270_u32 (__a, __b); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vbicq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vbicq_u32 (__a, __b); -} - -__extension__ extern __inline uint32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvq_p (uint32x4_t __a, mve_pred16_t __p) -{ - return __arm_vaddvq_p_u32 (__a, __p); -} - -__extension__ extern __inline uint32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvaq (uint32_t __a, uint32x4_t __b) -{ - return __arm_vaddvaq_u32 (__a, __b); -} - -__extension__ extern __inline uint32x4_t +__extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vbrsrq (uint32x4_t __a, int32_t __b) +__arm_vbicq (int16x8_t __a, int16x8_t __b) { - return __arm_vbrsrq_n_u32 (__a, __b); + return __arm_vbicq_s16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (int32x4_t __a, int32_t __b) +__arm_vaddvaq (int32_t __a, int16x8_t __b) { - return __arm_vcmpneq_n_s32 (__a, __b); + return __arm_vaddvaq_s16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (int32x4_t __a, int32x4_t __b) +__arm_vornq (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpltq_s32 (__a, __b); + return __arm_vornq_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint64x2_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (int32x4_t __a, int32_t __b) +__arm_vmulltq_int (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpltq_n_s32 (__a, __b); + return __arm_vmulltq_int_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint64x2_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (int32x4_t __a, int32x4_t __b) +__arm_vmullbq_int (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpleq_s32 (__a, __b); + return __arm_vmullbq_int_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (int32x4_t __a, int32_t __b) +__arm_vmladavq (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpleq_n_s32 (__a, 
__b); + return __arm_vmladavq_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (int32x4_t __a, int32x4_t __b) +__arm_vcaddq_rot90 (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpgtq_s32 (__a, __b); + return __arm_vcaddq_rot90_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (int32x4_t __a, int32_t __b) +__arm_vcaddq_rot270 (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpgtq_n_s32 (__a, __b); + return __arm_vcaddq_rot270_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (int32x4_t __a, int32x4_t __b) +__arm_vbicq (uint32x4_t __a, uint32x4_t __b) { - return __arm_vcmpgeq_s32 (__a, __b); + return __arm_vbicq_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (int32x4_t __a, int32_t __b) +__arm_vaddvq_p (uint32x4_t __a, mve_pred16_t __p) { - return __arm_vcmpgeq_n_s32 (__a, __b); + return __arm_vaddvq_p_u32 (__a, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (int32x4_t __a, int32x4_t __b) +__arm_vaddvaq (uint32_t __a, uint32x4_t __b) { - return __arm_vcmpeqq_s32 (__a, __b); + return __arm_vaddvaq_u32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (int32x4_t __a, int32_t __b) +__arm_vbrsrq (uint32x4_t __a, int32_t __b) { - return __arm_vcmpeqq_n_s32 (__a, __b); + return __arm_vbrsrq_n_u32 (__a, __b); } __extension__ extern __inline uint32x4_t @@ -13386,62 +11605,6 @@ __arm_vdupq_m (uint8x16_t __inactive, uint8_t __a, mve_pred16_t __p) return __arm_vdupq_m_n_u8 (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmphiq_m_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __arm_vcmphiq_m_n_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (uint8x16_t __a, uint8_t 
__b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpcsq_m_u8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m (uint8x16_t __a, uint8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpcsq_m_n_u8 (__a, __b, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (uint32_t __a, uint8x16_t __b, mve_pred16_t __p) @@ -13463,90 +11626,6 @@ __arm_vsliq (uint8x16_t __a, uint8x16_t __b, const int __imm) return __arm_vsliq_n_u8 (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_n_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_n_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_n_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (int8x16_t __a, int8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_n_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_s8 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int8x16_t __a, int8_t __b, 
mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_s8 (__a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q_m (int8x16_t __inactive, int8x16_t __a, mve_pred16_t __p) @@ -13806,165 +11885,25 @@ __arm_vdupq_m (uint16x8_t __inactive, uint16_t __a, mve_pred16_t __p) return __arm_vdupq_m_n_u16 (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmphiq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) -{ - return __arm_vcmphiq_m_n_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpcsq_m_u16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m (uint16x8_t __a, uint16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpcsq_m_n_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint32_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vaddvaq_p (uint32_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vaddvaq_p_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vsriq (uint16x8_t __a, uint16x8_t __b, const int __imm) -{ - return __arm_vsriq_n_u16 (__a, __b, __imm); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vsliq (uint16x8_t __a, uint16x8_t __b, const int __imm) -{ - return __arm_vsliq_n_u16 (__a, __b, __imm); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vcmpltq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_n_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_n_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_n_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_s16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vaddvaq_p (uint32_t __a, uint16x8_t __b, mve_pred16_t __p) { - return __arm_vcmpgeq_m_n_s16 (__a, __b, __p); + return __arm_vaddvaq_p_u16 (__a, __b, __p); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) +__arm_vsriq (uint16x8_t __a, uint16x8_t __b, const int __imm) { - return __arm_vcmpeqq_m_s16 (__a, __b, __p); + return __arm_vsriq_n_u16 (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int16x8_t __a, int16_t __b, mve_pred16_t __p) +__arm_vsliq (uint16x8_t __a, uint16x8_t __b, const int __imm) { - return __arm_vcmpeqq_m_n_s16 (__a, __b, __p); + return __arm_vsliq_n_u16 (__a, __b, __imm); } __extension__ extern __inline int16x8_t @@ -14226,62 +12165,6 @@ __arm_vdupq_m (uint32x4_t __inactive, uint32_t __a, mve_pred16_t __p) return __arm_vdupq_m_n_u32 (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_u32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_u32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmphiq_m_u32 (__a, __b, __p); -} - 
-__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmphiq_m (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __arm_vcmphiq_m_n_u32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_u32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_u32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpcsq_m_u32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpcsq_m (uint32x4_t __a, uint32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpcsq_m_n_u32 (__a, __b, __p); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vaddvaq_p (uint32_t __a, uint32x4_t __b, mve_pred16_t __p) @@ -14303,90 +12186,6 @@ __arm_vsliq (uint32x4_t __a, uint32x4_t __b, const int __imm) return __arm_vsliq_n_u32 (__a, __b, __imm); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_n_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_n_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_n_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_s32 
(__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_n_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_s32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (int32x4_t __a, int32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_s32 (__a, __b, __p); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vrev64q_m (int32x4_t __inactive, int32x4_t __a, mve_pred16_t __p) @@ -18635,280 +16434,112 @@ __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtq (uint16x8_t __a) { - return __arm_vcvtq_f16_u16 (__a); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcvtq (uint32x4_t __a) -{ - return __arm_vcvtq_f32_u32 (__a); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vbrsrq (float16x8_t __a, int32_t __b) -{ - return __arm_vbrsrq_n_f16 (__a, __b); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vbrsrq (float32x4_t __a, int32_t __b) -{ - return __arm_vbrsrq_n_f32 (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcvtq_n (int16x8_t __a, const int __imm6) -{ - return __arm_vcvtq_n_f16_s16 (__a, __imm6); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcvtq_n (int32x4_t __a, const int __imm6) -{ - return __arm_vcvtq_n_f32_s32 (__a, __imm6); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcvtq_n (uint16x8_t __a, const int __imm6) -{ - return __arm_vcvtq_n_f16_u16 (__a, __imm6); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcvtq_n (uint32x4_t __a, const int __imm6) -{ - return __arm_vcvtq_n_f32_u32 (__a, __imm6); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (float16x8_t __a, float16_t __b) -{ - return __arm_vcmpneq_n_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmpneq_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (float16x8_t __a, float16_t __b) -{ - return __arm_vcmpltq_n_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmpltq_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (float16x8_t __a, 
float16_t __b) -{ - return __arm_vcmpleq_n_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmpleq_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (float16x8_t __a, float16_t __b) -{ - return __arm_vcmpgtq_n_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmpgtq_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (float16x8_t __a, float16_t __b) -{ - return __arm_vcmpgeq_n_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmpgeq_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (float16x8_t __a, float16_t __b) -{ - return __arm_vcmpeqq_n_f16 (__a, __b); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmpeqq_f16 (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vornq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vornq_f16 (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq_rot90 (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmulq_rot90_f16 (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq_rot270 (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmulq_rot270_f16 (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq_rot180 (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmulq_rot180_f16 (__a, __b); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmulq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vcmulq_f16 (__a, __b); + return __arm_vcvtq_f16_u16 (__a); } -__extension__ extern __inline float16x8_t +__extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcaddq_rot90 (float16x8_t __a, float16x8_t __b) +__arm_vcvtq (uint32x4_t __a) { - return __arm_vcaddq_rot90_f16 (__a, __b); + return __arm_vcvtq_f32_u32 (__a); } __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcaddq_rot270 (float16x8_t __a, float16x8_t __b) +__arm_vbrsrq (float16x8_t __a, int32_t __b) { - return __arm_vcaddq_rot270_f16 (__a, __b); + return __arm_vbrsrq_n_f16 (__a, __b); } -__extension__ extern __inline float16x8_t +__extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vbicq (float16x8_t __a, float16x8_t __b) +__arm_vbrsrq (float32x4_t __a, int32_t __b) { - return __arm_vbicq_f16 (__a, __b); + 
return __arm_vbrsrq_n_f32 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (float32x4_t __a, float32_t __b) +__arm_vcvtq_n (int16x8_t __a, const int __imm6) { - return __arm_vcmpneq_n_f32 (__a, __b); + return __arm_vcvtq_n_f16_s16 (__a, __imm6); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq (float32x4_t __a, float32x4_t __b) +__arm_vcvtq_n (int32x4_t __a, const int __imm6) { - return __arm_vcmpneq_f32 (__a, __b); + return __arm_vcvtq_n_f32_s32 (__a, __imm6); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (float32x4_t __a, float32_t __b) +__arm_vcvtq_n (uint16x8_t __a, const int __imm6) { - return __arm_vcmpltq_n_f32 (__a, __b); + return __arm_vcvtq_n_f16_u16 (__a, __imm6); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq (float32x4_t __a, float32x4_t __b) +__arm_vcvtq_n (uint32x4_t __a, const int __imm6) { - return __arm_vcmpltq_f32 (__a, __b); + return __arm_vcvtq_n_f32_u32 (__a, __imm6); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (float32x4_t __a, float32_t __b) +__arm_vornq (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpleq_n_f32 (__a, __b); + return __arm_vornq_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq (float32x4_t __a, float32x4_t __b) +__arm_vcmulq_rot90 (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpleq_f32 (__a, __b); + return __arm_vcmulq_rot90_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (float32x4_t __a, float32_t __b) +__arm_vcmulq_rot270 (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpgtq_n_f32 (__a, __b); + return __arm_vcmulq_rot270_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq (float32x4_t __a, float32x4_t __b) +__arm_vcmulq_rot180 (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpgtq_f32 (__a, __b); + return __arm_vcmulq_rot180_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (float32x4_t __a, float32_t __b) +__arm_vcmulq (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpgeq_n_f32 (__a, __b); + return __arm_vcmulq_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq (float32x4_t __a, float32x4_t __b) +__arm_vcaddq_rot90 (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpgeq_f32 (__a, __b); + return __arm_vcaddq_rot90_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline 
float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (float32x4_t __a, float32_t __b) +__arm_vcaddq_rot270 (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpeqq_n_f32 (__a, __b); + return __arm_vcaddq_rot270_f16 (__a, __b); } -__extension__ extern __inline mve_pred16_t +__extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq (float32x4_t __a, float32x4_t __b) +__arm_vbicq (float16x8_t __a, float16x8_t __b) { - return __arm_vcmpeqq_f32 (__a, __b); + return __arm_vbicq_f16 (__a, __b); } __extension__ extern __inline float32x4_t @@ -18967,20 +16598,6 @@ __arm_vbicq (float32x4_t __a, float32x4_t __b) return __arm_vbicq_f32 (__a, __b); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_f32 (__a, __b, __p); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtaq_m (int16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) @@ -19177,83 +16794,6 @@ __arm_vrev64q_m (float16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) return __arm_vrev64q_m_f16 (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_n_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_n_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_n_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, 
__gnu_inline__, __artificial__)) -__arm_vcmpltq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_n_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_f16 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (float16x8_t __a, float16_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_f16 (__a, __b, __p); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m (uint16x8_t __inactive, float16x8_t __a, mve_pred16_t __p) @@ -19387,83 +16927,6 @@ __arm_vrev64q_m (float32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) return __arm_vrev64q_m_f32 (__inactive, __a, __p); } -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpeqq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpeqq_m_n_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgeq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgeq_m_n_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpgtq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpgtq_m_n_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpleq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpleq_m_n_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpltq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpltq_m_n_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_f32 (__a, __b, __p); -} - -__extension__ extern __inline mve_pred16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vcmpneq_m (float32x4_t __a, float32_t __b, mve_pred16_t __p) -{ - return __arm_vcmpneq_m_n_f32 
(__a, __b, __p); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvtmq_m (uint32x4_t __inactive, float32x4_t __a, mve_pred16_t __p) @@ -20672,26 +18135,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcaddq_rot270_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcaddq_rot270_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)));}) -#define __arm_vcmpeqq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpeqq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpeqq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpeqq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpeqq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpeqq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpeqq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpeqq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpeqq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpeqq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpeqq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)));}) - #define __arm_vcaddq_rot90(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -20704,88 +18147,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcaddq_rot90_f16 (__ARM_mve_coerce(__p0, float16x8_t), 
__ARM_mve_coerce(__p1, float16x8_t)), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcaddq_rot90_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)));}) -#define __arm_vcmpeqq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpeqq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpeqq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpeqq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpeqq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpeqq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpeqq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpeqq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpeqq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpeqq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpeqq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double), p2));}) - -#define __arm_vcmpgtq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgtq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgtq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgtq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ 
- int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpgtq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpgtq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpgtq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpgtq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double)));}) - -#define __arm_vcmpleq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpleq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpleq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpleq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpleq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpleq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpleq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpleq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpleq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpleq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpleq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double)));}) - -#define __arm_vcmpltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpltq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpltq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpltq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpltq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int 
(*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpltq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpltq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpltq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpltq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpltq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpltq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double)));}) - -#define __arm_vcmpneq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpneq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpneq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpneq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpneq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpneq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpneq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpneq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpneq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpneq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpneq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)));}) - #define 
__arm_vcmulq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -21125,68 +18486,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmlaq_rot90_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), __ARM_mve_coerce(__p2, float16x8_t)), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmlaq_rot90_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), __ARM_mve_coerce(__p2, float32x4_t)));}) -#define __arm_vcmpgtq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgtq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgtq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgtq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpgtq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpgtq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpgtq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpgtq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - -#define __arm_vcmpleq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpleq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpleq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpleq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpleq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpleq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpleq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), 
__ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpleq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpleq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpleq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpleq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double), p2));}) - -#define __arm_vcmpltq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpltq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpltq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpltq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpltq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpltq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpltq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpltq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpltq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpltq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpltq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double), p2));}) - -#define __arm_vcmpneq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpneq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpneq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpneq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpneq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpneq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpneq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, 
uint32x4_t), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpneq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpneq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpneq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpneq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double), p2));}) - #define __arm_vcvtbq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -21293,40 +18592,12 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vpselq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vpselq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) -#define __arm_vcmpgeq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgeq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgeq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgeq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpgeq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpgeq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)), \ - int 
(*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpgeq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpgeq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double)));}) - #define __arm_vrev16q_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2));}) -#define __arm_vcmpgeq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgeq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgeq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgeq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: __arm_vcmpgeq_m_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: __arm_vcmpgeq_m_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce2(p1, double), p2), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpgeq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpgeq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) - #define __arm_vbicq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -21990,22 +19261,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vrev64q_u16 (__ARM_mve_coerce(__p0, uint16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t]: __arm_vrev64q_u32 (__ARM_mve_coerce(__p0, uint32x4_t)));}) -#define __arm_vcmpneq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int 
(*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpneq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpneq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpneq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpneq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpneq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpneq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vqshluq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vqshluq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1), \ @@ -22099,22 +19354,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vbicq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vbicq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) -#define __arm_vcmpeqq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpeqq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpeqq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpeqq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpeqq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpeqq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpeqq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: 
__arm_vcmpeqq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)));}) - #define __arm_vmulltq_poly(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -22149,62 +19388,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqdmullbq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqdmullbq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vcmpgeq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgeq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgeq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgeq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)));}) - -#define __arm_vcmpgtq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgtq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgtq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgtq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)));}) - -#define __arm_vcmpleq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpleq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpleq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int 
(*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpleq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpleq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpleq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpleq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)));})
-
-#define __arm_vcmpltq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpltq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpltq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpltq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpltq_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpltq_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpltq_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int)));})
-
-#define __arm_vcmpneq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpneq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \
-  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpneq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \
-  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpneq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpneq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpneq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \
- int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpneq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));}) - #define __arm_vshlcq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vshlcq_s8 (__ARM_mve_coerce(__p0, int8x16_t), p1, p2), \ @@ -22214,22 +19397,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vshlcq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \ int (*)[__ARM_mve_type_uint32x4_t]: __arm_vshlcq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));}) -#define __arm_vcmpeqq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpeqq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpeqq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpeqq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpeqq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpeqq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpeqq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpeqq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2));}) - #define __arm_vbicq_m_n(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)])0, \ int (*)[__ARM_mve_type_int16x8_t]: __arm_vbicq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), p1, p2), \ @@ -22331,63 +19498,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vqdmlsdhxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vqdmlsdhxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t)));}) -#define __arm_vcmpgeq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int 
(*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgeq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgeq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgeq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgeq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2));}) - - -#define __arm_vcmpgtq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpgtq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpgtq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpgtq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpgtq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2));}) - -#define __arm_vcmpleq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpleq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpleq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpleq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpleq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpleq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpleq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2));}) - -#define __arm_vcmpltq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpltq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int 
(*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpltq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpltq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpltq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpltq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpltq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2));}) - -#define __arm_vcmpneq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vcmpneq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vcmpneq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vcmpneq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpneq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpneq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpneq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2), \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpneq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2));}) - #define __arm_vdupq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -23667,46 +20777,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vaddvq_p_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1), \ int (*)[__ARM_mve_type_uint32x4_t]: __arm_vaddvq_p_u32 (__ARM_mve_coerce(__p0, uint32x4_t), p1));}) -#define __arm_vcmpcsq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpcsq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), 
__ARM_mve_coerce(__p1, uint8x16_t)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpcsq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpcsq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpcsq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpcsq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpcsq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)));})
-
-#define __arm_vcmpcsq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmpcsq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmpcsq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmpcsq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmpcsq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmpcsq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmpcsq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2));})
-
-#define __arm_vcmphiq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmphiq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmphiq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmphiq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmphiq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmphiq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int)), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmphiq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int)));})
-
-#define __arm_vcmphiq_m(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
-  __typeof(p1) __p1 = (p1); \
-  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int_n]: __arm_vcmphiq_m_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int_n]: __arm_vcmphiq_m_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int_n]: __arm_vcmphiq_m_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce3(p1, int), p2), \
-  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vcmphiq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), p2), \
-  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vcmphiq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), p2), \
-  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vcmphiq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
-
 #define __arm_vmladavaq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
   __typeof(p1) __p1 = (p1); \
   __typeof(p2) __p2 = (p2); \