From patchwork Thu Nov 3 17:30:32 2022
X-Patchwork-Submitter: Carl Love
X-Patchwork-Id: 59846
Subject: [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test.
To: "gdb-patches@sourceware.org" Date: Thu, 03 Nov 2022 10:30:32 -0700 In-Reply-To: <20111d66a467599d893bf85bbdf2e82b76377127.camel@us.ibm.com> References: <20111d66a467599d893bf85bbdf2e82b76377127.camel@us.ibm.com> X-Mailer: Evolution 3.28.5 (3.28.5-18.el8) Mime-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: aYBKOAJsjpsNpXkhK8hxPL4jCYXtWDYX X-Proofpoint-ORIG-GUID: aYBKOAJsjpsNpXkhK8hxPL4jCYXtWDYX X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-03_04,2022-11-03_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 suspectscore=0 mlxscore=0 phishscore=0 lowpriorityscore=0 mlxlogscore=896 bulkscore=0 clxscore=1015 adultscore=0 priorityscore=1501 impostorscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211030114 X-Spam-Status: No, score=-11.9 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_EF, GIT_PATCH_0, RCVD_IN_MSPIKE_H2, SPF_HELO_NONE, SPF_NONE, TXREP autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: gdb-patches@sourceware.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Gdb-patches mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Carl Love via Gdb-patches From: Carl Love Reply-To: Carl Love Cc: Ulrich Weigand Errors-To: gdb-patches-bounces+patchwork=sourceware.org@sourceware.org Sender: "Gdb-patches" GDB maintainers: This patch updates the PowerPC instruction names in the gdb.arch/powerpc-power10.exp test per the name change in: commit bb98553cad4e017f1851153fa5de91f2cee98fb2 Author: Peter Bergner Date: Sat Oct 8 16:19:51 2022 -0500 PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions The patch updates the expected instruction names in the expect file and the instruction names contained in the source file comments. The patch has been tested on Power 10 with no regressions. Carl Love ------------------------------------------------- PowerPC fix for the gdb.arch/powerpc-power10.exp test. The mnemonics for the pmxvf16ger*, pmxvf32ger*,pmxvf64ger*, pmxvi4ger8*, pmxvi8ger4*, pmxvi16ger2* instructions were officially changed to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*, pmdmxvi8ger4*, pmdmxvi16ger* respectively. The old mnemonics are still supported by the assembler as extended mnemonics. The disassembler generates the new mnemonics. The name changes occurred in commit: commit bb98553cad4e017f1851153fa5de91f2cee98fb2 Author: Peter Bergner Date: Sat Oct 8 16:19:51 2022 -0500 PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions gas/ * config/tc-ppc.c (md_assemble): Only check for prefix opcodes. * testsuite/gas/ppc/rfc02658.s: New test. * testsuite/gas/ppc/rfc02658.d: Likewise. * testsuite/gas/ppc/ppc.exp: Run it. opcodes/ * ppc-opc.c (XMSK8, P_GERX4_MASK, P_GERX2_MASK, XX3GERX_MASK): New. (powerpc_opcodes): Add dmxvi8gerx4pp, dmxvi8gerx4, dmxvf16gerx2pp, dmxvf16gerx2, dmxvbf16gerx2pp, dmxvf16gerx2np, dmxvbf16gerx2, dmxvi8gerx4spp, dmxvbf16gerx2np, dmxvf16gerx2pn, dmxvbf16gerx2pn, dmxvf16gerx2nn, dmxvbf16gerx2nn, pmdmxvi8gerx4pp, pmdmxvi8gerx4, pmdmxvf16gerx2pp, pmdmxvf16gerx2, pmdmxvbf16gerx2pp, pmdmxvf16gerx2np, pmdmxvbf16gerx2, pmdmxvi8gerx4spp, pmdmxvbf16gerx2np, pmdmxvf16gerx2pn, pmdmxvbf16gerx2pn, pmdmxvf16gerx2nn, pmdmxvbf16gerx2nn. 
The above commit results in about 224 failures on Power 10 since the
disassembled names no longer match the expected names in the test.
This patch updates the expected names in gdb.arch/powerpc-power10.exp
to match the values produced by the disassembler.  The comment giving
the instruction name for each binary value in the file
gdb.arch/powerpc-power10.s is updated with the new name.  There are no
functional changes in file gdb.arch/powerpc-power10.s.

--- gdb/testsuite/gdb.arch/powerpc-power10.exp | 448 ++++++++++----------- gdb/testsuite/gdb.arch/powerpc-power10.s | 384 +++++++++--------- 2 files changed, 416 insertions(+), 416 deletions(-) diff --git a/gdb/testsuite/gdb.arch/powerpc-power10.exp b/gdb/testsuite/gdb.arch/powerpc-power10.exp index bc52a72d9de..b9383d8bd2a 100644 --- a/gdb/testsuite/gdb.arch/powerpc-power10.exp +++ b/gdb/testsuite/gdb.arch/powerpc-power10.exp @@ -186,198 +186,198 @@ func_check "plxvp vs20,16(0)" func_check "plxvp vs20,24(0)" func_check "plxvp vs20,32(0)" func_check "plxvp vs20,8(0)" -func_check "pmxvbf16ger2 a4,vs0,vs1,0,0,0" -func_check "pmxvbf16ger2 a4,vs0,vs1,0,0,1" -func_check "pmxvbf16ger2 a4,vs0,vs1,0,13,0" -func_check "pmxvbf16ger2 a4,vs0,vs1,0,13,1" -func_check "pmxvbf16ger2 a4,vs0,vs1,11,0,0" -func_check "pmxvbf16ger2 a4,vs0,vs1,11,0,1" -func_check "pmxvbf16ger2 a4,vs0,vs1,11,13,0" -func_check "pmxvbf16ger2 a4,vs0,vs1,11,13,1" -func_check "pmxvbf16ger2nn a4,vs0,vs1,0,0,0" -func_check "pmxvbf16ger2nn a4,vs0,vs1,0,0,1" -func_check "pmxvbf16ger2nn a4,vs0,vs1,0,13,0" -func_check "pmxvbf16ger2nn a4,vs0,vs1,0,13,1" -func_check "pmxvbf16ger2nn a4,vs0,vs1,11,0,0" -func_check "pmxvbf16ger2nn a4,vs0,vs1,11,0,1" -func_check "pmxvbf16ger2nn a4,vs0,vs1,11,13,0" -func_check "pmxvbf16ger2nn a4,vs0,vs1,11,13,1" -func_check "pmxvbf16ger2np a4,vs0,vs1,0,0,0" -func_check "pmxvbf16ger2np a4,vs0,vs1,0,0,1" -func_check "pmxvbf16ger2np a4,vs0,vs1,0,13,0" -func_check "pmxvbf16ger2np a4,vs0,vs1,0,13,1" -func_check "pmxvbf16ger2np a4,vs0,vs1,11,0,0" -func_check "pmxvbf16ger2np a4,vs0,vs1,11,0,1" -func_check "pmxvbf16ger2np a4,vs0,vs1,11,13,0" -func_check "pmxvbf16ger2np a4,vs0,vs1,11,13,1" -func_check "pmxvbf16ger2pn a4,vs0,vs1,0,0,0" -func_check "pmxvbf16ger2pn a4,vs0,vs1,0,0,1" -func_check "pmxvbf16ger2pn a4,vs0,vs1,0,13,0" -func_check "pmxvbf16ger2pn a4,vs0,vs1,0,13,1" -func_check "pmxvbf16ger2pn a4,vs0,vs1,11,0,0" -func_check "pmxvbf16ger2pn a4,vs0,vs1,11,0,1" -func_check "pmxvbf16ger2pn a4,vs0,vs1,11,13,0" -func_check "pmxvbf16ger2pn a4,vs0,vs1,11,13,1" -func_check "pmxvbf16ger2pp a4,vs0,vs1,0,0,0" -func_check "pmxvbf16ger2pp a4,vs0,vs1,0,0,1" -func_check "pmxvbf16ger2pp a4,vs0,vs1,0,13,0" -func_check "pmxvbf16ger2pp a4,vs0,vs1,0,13,1" -func_check "pmxvbf16ger2pp a4,vs0,vs1,11,0,0" -func_check "pmxvbf16ger2pp a4,vs0,vs1,11,0,1" -func_check "pmxvbf16ger2pp a4,vs0,vs1,11,13,0" -func_check "pmxvbf16ger2pp a4,vs0,vs1,11,13,1" -func_check "pmxvf16ger2 a4,vs0,vs1,0,0,0" -func_check "pmxvf16ger2 a4,vs0,vs1,0,0,1" -func_check "pmxvf16ger2 a4,vs0,vs1,0,13,0" -func_check "pmxvf16ger2 a4,vs0,vs1,0,13,1" -func_check "pmxvf16ger2 a4,vs0,vs1,11,0,0" -func_check "pmxvf16ger2 a4,vs0,vs1,11,0,1" -func_check "pmxvf16ger2 a4,vs0,vs1,11,13,0" -func_check "pmxvf16ger2 a4,vs0,vs1,11,13,1" -func_check "pmxvf16ger2nn a4,vs0,vs1,0,0,0" -func_check "pmxvf16ger2nn a4,vs0,vs1,0,0,1" -func_check "pmxvf16ger2nn a4,vs0,vs1,0,13,0" -func_check "pmxvf16ger2nn a4,vs0,vs1,0,13,1" -func_check "pmxvf16ger2nn
a4,vs0,vs1,11,0,0" -func_check "pmxvf16ger2nn a4,vs0,vs1,11,0,1" -func_check "pmxvf16ger2nn a4,vs0,vs1,11,13,0" -func_check "pmxvf16ger2nn a4,vs0,vs1,11,13,1" -func_check "pmxvf16ger2np a4,vs0,vs1,0,0,0" -func_check "pmxvf16ger2np a4,vs0,vs1,0,0,1" -func_check "pmxvf16ger2np a4,vs0,vs1,0,13,0" -func_check "pmxvf16ger2np a4,vs0,vs1,0,13,1" -func_check "pmxvf16ger2np a4,vs0,vs1,11,0,0" -func_check "pmxvf16ger2np a4,vs0,vs1,11,0,1" -func_check "pmxvf16ger2np a4,vs0,vs1,11,13,0" -func_check "pmxvf16ger2np a4,vs0,vs1,11,13,1" -func_check "pmxvf16ger2pn a4,vs0,vs1,0,0,0" -func_check "pmxvf16ger2pn a4,vs0,vs1,0,0,1" -func_check "pmxvf16ger2pn a4,vs0,vs1,0,13,0" -func_check "pmxvf16ger2pn a4,vs0,vs1,0,13,1" -func_check "pmxvf16ger2pn a4,vs0,vs1,11,0,0" -func_check "pmxvf16ger2pn a4,vs0,vs1,11,0,1" -func_check "pmxvf16ger2pn a4,vs0,vs1,11,13,0" -func_check "pmxvf16ger2pn a4,vs0,vs1,11,13,1" -func_check "pmxvf16ger2pp a4,vs0,vs1,0,0,0" -func_check "pmxvf16ger2pp a4,vs0,vs1,0,0,1" -func_check "pmxvf16ger2pp a4,vs0,vs1,0,13,0" -func_check "pmxvf16ger2pp a4,vs0,vs1,0,13,1" -func_check "pmxvf16ger2pp a4,vs0,vs1,11,0,0" -func_check "pmxvf16ger2pp a4,vs0,vs1,11,0,1" -func_check "pmxvf16ger2pp a4,vs0,vs1,11,13,0" -func_check "pmxvf16ger2pp a4,vs0,vs1,11,13,1" -func_check "pmxvf32ger a4,vs0,vs1,0,0" -func_check "pmxvf32ger a4,vs0,vs1,0,13" -func_check "pmxvf32ger a4,vs0,vs1,11,0" -func_check "pmxvf32ger a4,vs0,vs1,11,13" -func_check "pmxvf32gernn a4,vs0,vs1,0,0" -func_check "pmxvf32gernn a4,vs0,vs1,0,13" -func_check "pmxvf32gernn a4,vs0,vs1,11,0" -func_check "pmxvf32gernn a4,vs0,vs1,11,13" -func_check "pmxvf32gernp a4,vs0,vs1,0,0" -func_check "pmxvf32gernp a4,vs0,vs1,0,13" -func_check "pmxvf32gernp a4,vs0,vs1,11,0" -func_check "pmxvf32gernp a4,vs0,vs1,11,13" -func_check "pmxvf32gerpn a4,vs0,vs1,0,0" -func_check "pmxvf32gerpn a4,vs0,vs1,0,13" -func_check "pmxvf32gerpn a4,vs0,vs1,11,0" -func_check "pmxvf32gerpn a4,vs0,vs1,11,13" -func_check "pmxvf32gerpp a4,vs0,vs1,0,0" -func_check "pmxvf32gerpp a4,vs0,vs1,0,13" -func_check "pmxvf32gerpp a4,vs0,vs1,11,0" -func_check "pmxvf32gerpp a4,vs0,vs1,11,13" -func_check "pmxvf64ger a4,vs22,vs0,0,0" -func_check "pmxvf64ger a4,vs22,vs0,0,1" -func_check "pmxvf64ger a4,vs22,vs0,11,0" -func_check "pmxvf64ger a4,vs22,vs0,11,1" -func_check "pmxvf64gernn a4,vs22,vs0,0,0" -func_check "pmxvf64gernn a4,vs22,vs0,0,1" -func_check "pmxvf64gernn a4,vs22,vs0,11,0" -func_check "pmxvf64gernn a4,vs22,vs0,11,1" -func_check "pmxvf64gernp a4,vs22,vs0,0,0" -func_check "pmxvf64gernp a4,vs22,vs0,0,1" -func_check "pmxvf64gernp a4,vs22,vs0,11,0" -func_check "pmxvf64gernp a4,vs22,vs0,11,1" -func_check "pmxvf64gerpn a4,vs22,vs0,0,0" -func_check "pmxvf64gerpn a4,vs22,vs0,0,1" -func_check "pmxvf64gerpn a4,vs22,vs0,11,0" -func_check "pmxvf64gerpn a4,vs22,vs0,11,1" -func_check "pmxvf64gerpp a4,vs22,vs0,0,0" -func_check "pmxvf64gerpp a4,vs22,vs0,0,1" -func_check "pmxvf64gerpp a4,vs22,vs0,11,0" -func_check "pmxvf64gerpp a4,vs22,vs0,11,1" -func_check "pmxvi16ger2 a4,vs0,vs1,0,0,0" -func_check "pmxvi16ger2 a4,vs0,vs1,0,0,1" -func_check "pmxvi16ger2 a4,vs0,vs1,0,13,0" -func_check "pmxvi16ger2 a4,vs0,vs1,0,13,1" -func_check "pmxvi16ger2 a4,vs0,vs1,11,0,0" -func_check "pmxvi16ger2 a4,vs0,vs1,11,0,1" -func_check "pmxvi16ger2 a4,vs0,vs1,11,13,0" -func_check "pmxvi16ger2 a4,vs0,vs1,11,13,1" -func_check "pmxvi16ger2pp a4,vs0,vs1,0,0,0" -func_check "pmxvi16ger2pp a4,vs0,vs1,0,0,1" -func_check "pmxvi16ger2pp a4,vs0,vs1,0,13,0" -func_check "pmxvi16ger2pp a4,vs0,vs1,0,13,1" -func_check "pmxvi16ger2pp 
a4,vs0,vs1,11,0,0" -func_check "pmxvi16ger2pp a4,vs0,vs1,11,0,1" -func_check "pmxvi16ger2pp a4,vs0,vs1,11,13,0" -func_check "pmxvi16ger2pp a4,vs0,vs1,11,13,1" -func_check "pmxvi16ger2s a4,vs0,vs1,0,0,0" -func_check "pmxvi16ger2s a4,vs0,vs1,0,0,1" -func_check "pmxvi16ger2s a4,vs0,vs1,0,13,0" -func_check "pmxvi16ger2s a4,vs0,vs1,0,13,1" -func_check "pmxvi16ger2s a4,vs0,vs1,11,0,0" -func_check "pmxvi16ger2s a4,vs0,vs1,11,0,1" -func_check "pmxvi16ger2s a4,vs0,vs1,11,13,0" -func_check "pmxvi16ger2s a4,vs0,vs1,11,13,1" -func_check "pmxvi16ger2spp a4,vs0,vs1,0,0,0" -func_check "pmxvi16ger2spp a4,vs0,vs1,0,0,1" -func_check "pmxvi16ger2spp a4,vs0,vs1,0,13,0" -func_check "pmxvi16ger2spp a4,vs0,vs1,0,13,1" -func_check "pmxvi16ger2spp a4,vs0,vs1,11,0,0" -func_check "pmxvi16ger2spp a4,vs0,vs1,11,0,1" -func_check "pmxvi16ger2spp a4,vs0,vs1,11,13,0" -func_check "pmxvi16ger2spp a4,vs0,vs1,11,13,1" -func_check "pmxvi4ger8 a4,vs0,vs1,0,0,0" -func_check "pmxvi4ger8 a4,vs0,vs1,0,0,45" -func_check "pmxvi4ger8 a4,vs0,vs1,0,1,0" -func_check "pmxvi4ger8 a4,vs0,vs1,0,1,45" -func_check "pmxvi4ger8 a4,vs0,vs1,11,0,0" -func_check "pmxvi4ger8 a4,vs0,vs1,11,0,45" -func_check "pmxvi4ger8 a4,vs0,vs1,11,1,0" -func_check "pmxvi4ger8 a4,vs0,vs1,11,1,45" -func_check "pmxvi4ger8pp a4,vs0,vs1,0,0,0" -func_check "pmxvi4ger8pp a4,vs0,vs1,0,0,45" -func_check "pmxvi4ger8pp a4,vs0,vs1,0,1,0" -func_check "pmxvi4ger8pp a4,vs0,vs1,0,1,45" -func_check "pmxvi4ger8pp a4,vs0,vs1,11,0,0" -func_check "pmxvi4ger8pp a4,vs0,vs1,11,0,45" -func_check "pmxvi4ger8pp a4,vs0,vs1,11,1,0" -func_check "pmxvi4ger8pp a4,vs0,vs1,11,1,45" -func_check "pmxvi8ger4 a4,vs0,vs1,0,0,0" -func_check "pmxvi8ger4 a4,vs0,vs1,0,0,5" -func_check "pmxvi8ger4 a4,vs0,vs1,0,13,0" -func_check "pmxvi8ger4 a4,vs0,vs1,0,13,5" -func_check "pmxvi8ger4 a4,vs0,vs1,11,0,0" -func_check "pmxvi8ger4 a4,vs0,vs1,11,0,5" -func_check "pmxvi8ger4 a4,vs0,vs1,11,13,0" -func_check "pmxvi8ger4 a4,vs0,vs1,11,13,5" -func_check "pmxvi8ger4pp a4,vs0,vs1,0,0,0" -func_check "pmxvi8ger4pp a4,vs0,vs1,0,0,5" -func_check "pmxvi8ger4pp a4,vs0,vs1,0,13,0" -func_check "pmxvi8ger4pp a4,vs0,vs1,0,13,5" -func_check "pmxvi8ger4pp a4,vs0,vs1,11,0,0" -func_check "pmxvi8ger4pp a4,vs0,vs1,11,0,5" -func_check "pmxvi8ger4pp a4,vs0,vs1,11,13,0" -func_check "pmxvi8ger4pp a4,vs0,vs1,11,13,5" -func_check "pmxvi8ger4spp a4,vs0,vs1,0,0,0" -func_check "pmxvi8ger4spp a4,vs0,vs1,0,0,5" -func_check "pmxvi8ger4spp a4,vs0,vs1,0,13,0" -func_check "pmxvi8ger4spp a4,vs0,vs1,0,13,5" -func_check "pmxvi8ger4spp a4,vs0,vs1,11,0,0" -func_check "pmxvi8ger4spp a4,vs0,vs1,11,0,5" -func_check "pmxvi8ger4spp a4,vs0,vs1,11,13,0" -func_check "pmxvi8ger4spp a4,vs0,vs1,11,13,5" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,0,0" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,0,1" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,13,0" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,13,1" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,0,0" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,0,1" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,13,0" +func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,13,1" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,0,0" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,0,1" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,13,0" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,13,1" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,0,0" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,0,1" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,13,0" +func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,13,1" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,0,0" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,0,1" +func_check 
"pmdmxvbf16ger2np a4,vs0,vs1,0,13,0" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,13,1" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,0,0" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,0,1" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,13,0" +func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,13,1" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,0,0" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,0,1" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,13,0" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,13,1" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,0,0" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,0,1" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,13,0" +func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,13,1" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,0,0" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,0,1" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,13,0" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,13,1" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,0,0" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,0,1" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,13,0" +func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,13,1" +func_check "pmdmxvf16ger2 a4,vs0,vs1,0,0,0" +func_check "pmdmxvf16ger2 a4,vs0,vs1,0,0,1" +func_check "pmdmxvf16ger2 a4,vs0,vs1,0,13,0" +func_check "pmdmxvf16ger2 a4,vs0,vs1,0,13,1" +func_check "pmdmxvf16ger2 a4,vs0,vs1,11,0,0" +func_check "pmdmxvf16ger2 a4,vs0,vs1,11,0,1" +func_check "pmdmxvf16ger2 a4,vs0,vs1,11,13,0" +func_check "pmdmxvf16ger2 a4,vs0,vs1,11,13,1" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,0,0" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,0,1" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,13,0" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,13,1" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,0,0" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,0,1" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,13,0" +func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,13,1" +func_check "pmdmxvf16ger2np a4,vs0,vs1,0,0,0" +func_check "pmdmxvf16ger2np a4,vs0,vs1,0,0,1" +func_check "pmdmxvf16ger2np a4,vs0,vs1,0,13,0" +func_check "pmdmxvf16ger2np a4,vs0,vs1,0,13,1" +func_check "pmdmxvf16ger2np a4,vs0,vs1,11,0,0" +func_check "pmdmxvf16ger2np a4,vs0,vs1,11,0,1" +func_check "pmdmxvf16ger2np a4,vs0,vs1,11,13,0" +func_check "pmdmxvf16ger2np a4,vs0,vs1,11,13,1" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,0,0" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,0,1" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,13,0" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,13,1" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,0,0" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,0,1" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,13,0" +func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,13,1" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,0,0" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,0,1" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,13,0" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,13,1" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,0,0" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,0,1" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,13,0" +func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,13,1" +func_check "pmdmxvf32ger a4,vs0,vs1,0,0" +func_check "pmdmxvf32ger a4,vs0,vs1,0,13" +func_check "pmdmxvf32ger a4,vs0,vs1,11,0" +func_check "pmdmxvf32ger a4,vs0,vs1,11,13" +func_check "pmdmxvf32gernn a4,vs0,vs1,0,0" +func_check "pmdmxvf32gernn a4,vs0,vs1,0,13" +func_check "pmdmxvf32gernn a4,vs0,vs1,11,0" +func_check "pmdmxvf32gernn a4,vs0,vs1,11,13" +func_check "pmdmxvf32gernp a4,vs0,vs1,0,0" +func_check "pmdmxvf32gernp a4,vs0,vs1,0,13" +func_check "pmdmxvf32gernp a4,vs0,vs1,11,0" +func_check "pmdmxvf32gernp a4,vs0,vs1,11,13" +func_check "pmdmxvf32gerpn a4,vs0,vs1,0,0" 
+func_check "pmdmxvf32gerpn a4,vs0,vs1,0,13" +func_check "pmdmxvf32gerpn a4,vs0,vs1,11,0" +func_check "pmdmxvf32gerpn a4,vs0,vs1,11,13" +func_check "pmdmxvf32gerpp a4,vs0,vs1,0,0" +func_check "pmdmxvf32gerpp a4,vs0,vs1,0,13" +func_check "pmdmxvf32gerpp a4,vs0,vs1,11,0" +func_check "pmdmxvf32gerpp a4,vs0,vs1,11,13" +func_check "pmdmxvf64ger a4,vs22,vs0,0,0" +func_check "pmdmxvf64ger a4,vs22,vs0,0,1" +func_check "pmdmxvf64ger a4,vs22,vs0,11,0" +func_check "pmdmxvf64ger a4,vs22,vs0,11,1" +func_check "pmdmxvf64gernn a4,vs22,vs0,0,0" +func_check "pmdmxvf64gernn a4,vs22,vs0,0,1" +func_check "pmdmxvf64gernn a4,vs22,vs0,11,0" +func_check "pmdmxvf64gernn a4,vs22,vs0,11,1" +func_check "pmdmxvf64gernp a4,vs22,vs0,0,0" +func_check "pmdmxvf64gernp a4,vs22,vs0,0,1" +func_check "pmdmxvf64gernp a4,vs22,vs0,11,0" +func_check "pmdmxvf64gernp a4,vs22,vs0,11,1" +func_check "pmdmxvf64gerpn a4,vs22,vs0,0,0" +func_check "pmdmxvf64gerpn a4,vs22,vs0,0,1" +func_check "pmdmxvf64gerpn a4,vs22,vs0,11,0" +func_check "pmdmxvf64gerpn a4,vs22,vs0,11,1" +func_check "pmdmxvf64gerpp a4,vs22,vs0,0,0" +func_check "pmdmxvf64gerpp a4,vs22,vs0,0,1" +func_check "pmdmxvf64gerpp a4,vs22,vs0,11,0" +func_check "pmdmxvf64gerpp a4,vs22,vs0,11,1" +func_check "pmdmxvi16ger2 a4,vs0,vs1,0,0,0" +func_check "pmdmxvi16ger2 a4,vs0,vs1,0,0,1" +func_check "pmdmxvi16ger2 a4,vs0,vs1,0,13,0" +func_check "pmdmxvi16ger2 a4,vs0,vs1,0,13,1" +func_check "pmdmxvi16ger2 a4,vs0,vs1,11,0,0" +func_check "pmdmxvi16ger2 a4,vs0,vs1,11,0,1" +func_check "pmdmxvi16ger2 a4,vs0,vs1,11,13,0" +func_check "pmdmxvi16ger2 a4,vs0,vs1,11,13,1" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,0,0" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,0,1" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,13,0" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,13,1" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,0,0" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,0,1" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,13,0" +func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,13,1" +func_check "pmdmxvi16ger2s a4,vs0,vs1,0,0,0" +func_check "pmdmxvi16ger2s a4,vs0,vs1,0,0,1" +func_check "pmdmxvi16ger2s a4,vs0,vs1,0,13,0" +func_check "pmdmxvi16ger2s a4,vs0,vs1,0,13,1" +func_check "pmdmxvi16ger2s a4,vs0,vs1,11,0,0" +func_check "pmdmxvi16ger2s a4,vs0,vs1,11,0,1" +func_check "pmdmxvi16ger2s a4,vs0,vs1,11,13,0" +func_check "pmdmxvi16ger2s a4,vs0,vs1,11,13,1" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,0,0" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,0,1" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,13,0" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,13,1" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,0,0" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,0,1" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,13,0" +func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,13,1" +func_check "pmdmxvi4ger8 a4,vs0,vs1,0,0,0" +func_check "pmdmxvi4ger8 a4,vs0,vs1,0,0,45" +func_check "pmdmxvi4ger8 a4,vs0,vs1,0,1,0" +func_check "pmdmxvi4ger8 a4,vs0,vs1,0,1,45" +func_check "pmdmxvi4ger8 a4,vs0,vs1,11,0,0" +func_check "pmdmxvi4ger8 a4,vs0,vs1,11,0,45" +func_check "pmdmxvi4ger8 a4,vs0,vs1,11,1,0" +func_check "pmdmxvi4ger8 a4,vs0,vs1,11,1,45" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,0,0" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,0,45" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,1,0" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,1,45" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,0,0" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,0,45" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,1,0" +func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,1,45" +func_check "pmdmxvi8ger4 a4,vs0,vs1,0,0,0" +func_check "pmdmxvi8ger4 
a4,vs0,vs1,0,0,5" +func_check "pmdmxvi8ger4 a4,vs0,vs1,0,13,0" +func_check "pmdmxvi8ger4 a4,vs0,vs1,0,13,5" +func_check "pmdmxvi8ger4 a4,vs0,vs1,11,0,0" +func_check "pmdmxvi8ger4 a4,vs0,vs1,11,0,5" +func_check "pmdmxvi8ger4 a4,vs0,vs1,11,13,0" +func_check "pmdmxvi8ger4 a4,vs0,vs1,11,13,5" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,0,0" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,0,5" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,13,0" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,13,5" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,0,0" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,0,5" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,13,0" +func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,13,5" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,0,0" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,0,5" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,13,0" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,13,5" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,0,0" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,0,5" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,13,0" +func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,13,5" #/* pstb extended mnemonics can suppress (r1) or the trailing ,0 or ,1, see ISA. func_check "pstb r0,0(r1)" func_check "pstb r0,16(r1)" @@ -582,37 +582,37 @@ func_check "xscvsqqp v0,v1" func_check "xscvuqqp v0,v1" func_check "xsmaxcqp v0,v1,v2" func_check "xsmincqp v0,v1,v2" -func_check "xvbf16ger2 a4,vs0,vs1" -func_check "xvbf16ger2nn a4,vs0,vs1" -func_check "xvbf16ger2np a4,vs0,vs1" -func_check "xvbf16ger2pn a4,vs0,vs1" -func_check "xvbf16ger2pp a4,vs0,vs1" +func_check "dmxvbf16ger2 a4,vs0,vs1" +func_check "dmxvbf16ger2nn a4,vs0,vs1" +func_check "dmxvbf16ger2np a4,vs0,vs1" +func_check "dmxvbf16ger2pn a4,vs0,vs1" +func_check "dmxvbf16ger2pp a4,vs0,vs1" func_check "xvcvbf16spn vs0,vs1" func_check "xvcvspbf16 vs0,vs1" -func_check "xvf16ger2 a4,vs0,vs1" -func_check "xvf16ger2nn a4,vs0,vs1" -func_check "xvf16ger2np a4,vs0,vs1" -func_check "xvf16ger2pn a4,vs0,vs1" -func_check "xvf16ger2pp a4,vs0,vs1" -func_check "xvf32ger a4,vs0,vs1" -func_check "xvf32gernn a4,vs0,vs1" -func_check "xvf32gernp a4,vs0,vs1" -func_check "xvf32gerpn a4,vs0,vs1" -func_check "xvf32gerpp a4,vs0,vs1" -func_check "xvf64ger a4,vs22,vs0" -func_check "xvf64gernn a4,vs22,vs0" -func_check "xvf64gernp a4,vs22,vs0" -func_check "xvf64gerpn a4,vs22,vs0" -func_check "xvf64gerpp a4,vs22,vs0" -func_check "xvi16ger2 a4,vs0,vs1" -func_check "xvi16ger2pp a4,vs0,vs1" -func_check "xvi16ger2s a4,vs0,vs1" -func_check "xvi16ger2spp a4,vs0,vs1" -func_check "xvi4ger8 a4,vs0,vs1" -func_check "xvi4ger8pp a4,vs0,vs1" -func_check "xvi8ger4 a4,vs0,vs1" -func_check "xvi8ger4pp a4,vs0,vs1" -func_check "xvi8ger4spp a4,vs0,vs1" +func_check "dmxvf16ger2 a4,vs0,vs1" +func_check "dmxvf16ger2nn a4,vs0,vs1" +func_check "dmxvf16ger2np a4,vs0,vs1" +func_check "dmxvf16ger2pn a4,vs0,vs1" +func_check "dmxvf16ger2pp a4,vs0,vs1" +func_check "dmxvf32ger a4,vs0,vs1" +func_check "dmxvf32gernn a4,vs0,vs1" +func_check "dmxvf32gernp a4,vs0,vs1" +func_check "dmxvf32gerpn a4,vs0,vs1" +func_check "dmxvf32gerpp a4,vs0,vs1" +func_check "dmxvf64ger a4,vs22,vs0" +func_check "dmxvf64gernn a4,vs22,vs0" +func_check "dmxvf64gernp a4,vs22,vs0" +func_check "dmxvf64gerpn a4,vs22,vs0" +func_check "dmxvf64gerpp a4,vs22,vs0" +func_check "dmxvi16ger2 a4,vs0,vs1" +func_check "dmxvi16ger2pp a4,vs0,vs1" +func_check "dmxvi16ger2s a4,vs0,vs1" +func_check "dmxvi16ger2spp a4,vs0,vs1" +func_check "dmxvi4ger8 a4,vs0,vs1" +func_check "dmxvi4ger8pp a4,vs0,vs1" +func_check "dmxvi8ger4 a4,vs0,vs1" +func_check "dmxvi8ger4pp a4,vs0,vs1" +func_check "dmxvi8ger4spp 
a4,vs0,vs1" func_check "xvtlsbb cr3,vs0" func_check "xxblendvb vs0,vs1,vs2,vs3" func_check "xxblendvd vs0,vs1,vs2,vs3" @@ -636,11 +636,11 @@ func_check "xxgenpcvwm vs0,v1,0" func_check "xxgenpcvwm vs0,v1,1" func_check "xxgenpcvwm vs0,v1,2" func_check "xxgenpcvwm vs0,v1,3" -func_check "xxmfacc a4" -func_check "xxmtacc a4" +func_check "dmxxmfacc a4" +func_check "dmxxmtacc a4" func_check "xxpermx vs0,vs1,vs2,vs3,0" func_check "xxpermx vs0,vs1,vs2,vs3,3" -func_check "xxsetaccz a4" +func_check "dmsetaccz a4" func_check "xxsplti32dx vs0,0,2779096485" func_check "xxsplti32dx vs0,0,4294967295" func_check "xxsplti32dx vs0,0,127" diff --git a/gdb/testsuite/gdb.arch/powerpc-power10.s b/gdb/testsuite/gdb.arch/powerpc-power10.s index 9ded00a8226..a334633292e 100644 --- a/gdb/testsuite/gdb.arch/powerpc-power10.s +++ b/gdb/testsuite/gdb.arch/powerpc-power10.s @@ -427,389 +427,389 @@ func: .long 0xc8010004 .long 0x04000000 /* plxv vs0,8(r1) */ .long 0xc8010008 - .long 0x07900000 /* pmxvbf16ger2 a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvbf16ger2 a4,vs0,vs1,0,0,0 */ .long 0xee000998 - .long 0x07904000 /* pmxvbf16ger2 a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvbf16ger2 a4,vs0,vs1,0,0,1 */ .long 0xee000998 - .long 0x0790000d /* pmxvbf16ger2 a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvbf16ger2 a4,vs0,vs1,0,13,0 */ .long 0xee000998 - .long 0x0790400d /* pmxvbf16ger2 a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvbf16ger2 a4,vs0,vs1,0,13,1 */ .long 0xee000998 - .long 0x079000b0 /* pmxvbf16ger2 a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvbf16ger2 a4,vs0,vs1,11,0,0 */ .long 0xee000998 - .long 0x079040b0 /* pmxvbf16ger2 a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvbf16ger2 a4,vs0,vs1,11,0,1 */ .long 0xee000998 - .long 0x079000bd /* pmxvbf16ger2 a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvbf16ger2 a4,vs0,vs1,11,13,0 */ .long 0xee000998 - .long 0x079040bd /* pmxvbf16ger2 a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvbf16ger2 a4,vs0,vs1,11,13,1 */ .long 0xee000998 - .long 0x07900000 /* pmxvbf16ger2nn a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvbf16ger2nn a4,vs0,vs1,0,0,0 */ .long 0xee000f90 - .long 0x07904000 /* pmxvbf16ger2nn a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvbf16ger2nn a4,vs0,vs1,0,0,1 */ .long 0xee000f90 - .long 0x0790000d /* pmxvbf16ger2nn a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvbf16ger2nn a4,vs0,vs1,0,13,0 */ .long 0xee000f90 - .long 0x0790400d /* pmxvbf16ger2nn a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvbf16ger2nn a4,vs0,vs1,0,13,1 */ .long 0xee000f90 - .long 0x079000b0 /* pmxvbf16ger2nn a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvbf16ger2nn a4,vs0,vs1,11,0,0 */ .long 0xee000f90 - .long 0x079040b0 /* pmxvbf16ger2nn a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvbf16ger2nn a4,vs0,vs1,11,0,1 */ .long 0xee000f90 - .long 0x079000bd /* pmxvbf16ger2nn a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvbf16ger2nn a4,vs0,vs1,11,13,0 */ .long 0xee000f90 - .long 0x079040bd /* pmxvbf16ger2nn a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvbf16ger2nn a4,vs0,vs1,11,13,1 */ .long 0xee000f90 - .long 0x07900000 /* pmxvbf16ger2np a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvbf16ger2np a4,vs0,vs1,0,0,0 */ .long 0xee000b90 - .long 0x07904000 /* pmxvbf16ger2np a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvbf16ger2np a4,vs0,vs1,0,0,1 */ .long 0xee000b90 - .long 0x0790000d /* pmxvbf16ger2np a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvbf16ger2np a4,vs0,vs1,0,13,0 */ .long 0xee000b90 - .long 0x0790400d /* pmxvbf16ger2np a4,vs0,vs1,0,13,1 */ + .long 
0x0790400d /* pmdmxvbf16ger2np a4,vs0,vs1,0,13,1 */ .long 0xee000b90 - .long 0x079000b0 /* pmxvbf16ger2np a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvbf16ger2np a4,vs0,vs1,11,0,0 */ .long 0xee000b90 - .long 0x079040b0 /* pmxvbf16ger2np a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvbf16ger2np a4,vs0,vs1,11,0,1 */ .long 0xee000b90 - .long 0x079000bd /* pmxvbf16ger2np a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvbf16ger2np a4,vs0,vs1,11,13,0 */ .long 0xee000b90 - .long 0x079040bd /* pmxvbf16ger2np a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvbf16ger2np a4,vs0,vs1,11,13,1 */ .long 0xee000b90 - .long 0x07900000 /* pmxvbf16ger2pn a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvbf16ger2pn a4,vs0,vs1,0,0,0 */ .long 0xee000d90 - .long 0x07904000 /* pmxvbf16ger2pn a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvbf16ger2pn a4,vs0,vs1,0,0,1 */ .long 0xee000d90 - .long 0x0790000d /* pmxvbf16ger2pn a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvbf16ger2pn a4,vs0,vs1,0,13,0 */ .long 0xee000d90 - .long 0x0790400d /* pmxvbf16ger2pn a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvbf16ger2pn a4,vs0,vs1,0,13,1 */ .long 0xee000d90 - .long 0x079000b0 /* pmxvbf16ger2pn a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvbf16ger2pn a4,vs0,vs1,11,0,0 */ .long 0xee000d90 - .long 0x079040b0 /* pmxvbf16ger2pn a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvbf16ger2pn a4,vs0,vs1,11,0,1 */ .long 0xee000d90 - .long 0x079000bd /* pmxvbf16ger2pn a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvbf16ger2pn a4,vs0,vs1,11,13,0 */ .long 0xee000d90 - .long 0x079040bd /* pmxvbf16ger2pn a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvbf16ger2pn a4,vs0,vs1,11,13,1 */ .long 0xee000d90 - .long 0x07900000 /* pmxvbf16ger2pp a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvbf16ger2pp a4,vs0,vs1,0,0,0 */ .long 0xee000990 - .long 0x07904000 /* pmxvbf16ger2pp a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvbf16ger2pp a4,vs0,vs1,0,0,1 */ .long 0xee000990 - .long 0x0790000d /* pmxvbf16ger2pp a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvbf16ger2pp a4,vs0,vs1,0,13,0 */ .long 0xee000990 - .long 0x0790400d /* pmxvbf16ger2pp a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvbf16ger2pp a4,vs0,vs1,0,13,1 */ .long 0xee000990 - .long 0x079000b0 /* pmxvbf16ger2pp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvbf16ger2pp a4,vs0,vs1,11,0,0 */ .long 0xee000990 - .long 0x079040b0 /* pmxvbf16ger2pp a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvbf16ger2pp a4,vs0,vs1,11,0,1 */ .long 0xee000990 - .long 0x079000bd /* pmxvbf16ger2pp a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvbf16ger2pp a4,vs0,vs1,11,13,0 */ .long 0xee000990 - .long 0x079040bd /* pmxvbf16ger2pp a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvbf16ger2pp a4,vs0,vs1,11,13,1 */ .long 0xee000990 - .long 0x07900000 /* pmxvf16ger2 a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvf16ger2 a4,vs0,vs1,0,0,0 */ .long 0xee000898 - .long 0x07904000 /* pmxvf16ger2 a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvf16ger2 a4,vs0,vs1,0,0,1 */ .long 0xee000898 - .long 0x0790000d /* pmxvf16ger2 a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvf16ger2 a4,vs0,vs1,0,13,0 */ .long 0xee000898 - .long 0x0790400d /* pmxvf16ger2 a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvf16ger2 a4,vs0,vs1,0,13,1 */ .long 0xee000898 - .long 0x079000b0 /* pmxvf16ger2 a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvf16ger2 a4,vs0,vs1,11,0,0 */ .long 0xee000898 - .long 0x079040b0 /* pmxvf16ger2 a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvf16ger2 a4,vs0,vs1,11,0,1 */ .long 0xee000898 - .long 
0x079000bd /* pmxvf16ger2 a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvf16ger2 a4,vs0,vs1,11,13,0 */ .long 0xee000898 - .long 0x079040bd /* pmxvf16ger2 a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvf16ger2 a4,vs0,vs1,11,13,1 */ .long 0xee000898 - .long 0x07900000 /* pmxvf16ger2nn a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvf16ger2nn a4,vs0,vs1,0,0,0 */ .long 0xee000e90 - .long 0x07904000 /* pmxvf16ger2nn a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvf16ger2nn a4,vs0,vs1,0,0,1 */ .long 0xee000e90 - .long 0x0790000d /* pmxvf16ger2nn a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvf16ger2nn a4,vs0,vs1,0,13,0 */ .long 0xee000e90 - .long 0x0790400d /* pmxvf16ger2nn a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvf16ger2nn a4,vs0,vs1,0,13,1 */ .long 0xee000e90 - .long 0x079000b0 /* pmxvf16ger2nn a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvf16ger2nn a4,vs0,vs1,11,0,0 */ .long 0xee000e90 - .long 0x079040b0 /* pmxvf16ger2nn a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvf16ger2nn a4,vs0,vs1,11,0,1 */ .long 0xee000e90 - .long 0x079000bd /* pmxvf16ger2nn a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvf16ger2nn a4,vs0,vs1,11,13,0 */ .long 0xee000e90 - .long 0x079040bd /* pmxvf16ger2nn a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvf16ger2nn a4,vs0,vs1,11,13,1 */ .long 0xee000e90 - .long 0x07900000 /* pmxvf16ger2np a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvf16ger2np a4,vs0,vs1,0,0,0 */ .long 0xee000a90 - .long 0x07904000 /* pmxvf16ger2np a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvf16ger2np a4,vs0,vs1,0,0,1 */ .long 0xee000a90 - .long 0x0790000d /* pmxvf16ger2np a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvf16ger2np a4,vs0,vs1,0,13,0 */ .long 0xee000a90 - .long 0x0790400d /* pmxvf16ger2np a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvf16ger2np a4,vs0,vs1,0,13,1 */ .long 0xee000a90 - .long 0x079000b0 /* pmxvf16ger2np a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvf16ger2np a4,vs0,vs1,11,0,0 */ .long 0xee000a90 - .long 0x079040b0 /* pmxvf16ger2np a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvf16ger2np a4,vs0,vs1,11,0,1 */ .long 0xee000a90 - .long 0x079000bd /* pmxvf16ger2np a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvf16ger2np a4,vs0,vs1,11,13,0 */ .long 0xee000a90 - .long 0x079040bd /* pmxvf16ger2np a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvf16ger2np a4,vs0,vs1,11,13,1 */ .long 0xee000a90 - .long 0x07900000 /* pmxvf16ger2pn a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvf16ger2pn a4,vs0,vs1,0,0,0 */ .long 0xee000c90 - .long 0x07904000 /* pmxvf16ger2pn a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvf16ger2pn a4,vs0,vs1,0,0,1 */ .long 0xee000c90 - .long 0x0790000d /* pmxvf16ger2pn a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvf16ger2pn a4,vs0,vs1,0,13,0 */ .long 0xee000c90 - .long 0x0790400d /* pmxvf16ger2pn a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvf16ger2pn a4,vs0,vs1,0,13,1 */ .long 0xee000c90 - .long 0x079000b0 /* pmxvf16ger2pn a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvf16ger2pn a4,vs0,vs1,11,0,0 */ .long 0xee000c90 - .long 0x079040b0 /* pmxvf16ger2pn a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvf16ger2pn a4,vs0,vs1,11,0,1 */ .long 0xee000c90 - .long 0x079000bd /* pmxvf16ger2pn a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvf16ger2pn a4,vs0,vs1,11,13,0 */ .long 0xee000c90 - .long 0x079040bd /* pmxvf16ger2pn a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvf16ger2pn a4,vs0,vs1,11,13,1 */ .long 0xee000c90 - .long 0x07900000 /* pmxvf16ger2pp a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvf16ger2pp 
a4,vs0,vs1,0,0,0 */ .long 0xee000890 - .long 0x07904000 /* pmxvf16ger2pp a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvf16ger2pp a4,vs0,vs1,0,0,1 */ .long 0xee000890 - .long 0x0790000d /* pmxvf16ger2pp a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvf16ger2pp a4,vs0,vs1,0,13,0 */ .long 0xee000890 - .long 0x0790400d /* pmxvf16ger2pp a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvf16ger2pp a4,vs0,vs1,0,13,1 */ .long 0xee000890 - .long 0x079000b0 /* pmxvf16ger2pp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvf16ger2pp a4,vs0,vs1,11,0,0 */ .long 0xee000890 - .long 0x079040b0 /* pmxvf16ger2pp a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvf16ger2pp a4,vs0,vs1,11,0,1 */ .long 0xee000890 - .long 0x079000bd /* pmxvf16ger2pp a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvf16ger2pp a4,vs0,vs1,11,13,0 */ .long 0xee000890 - .long 0x079040bd /* pmxvf16ger2pp a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvf16ger2pp a4,vs0,vs1,11,13,1 */ .long 0xee000890 - .long 0x07900000 /* pmxvf32ger a4,vs0,vs1,0,0 */ + .long 0x07900000 /* pmdmxvf32ger a4,vs0,vs1,0,0 */ .long 0xee0008d8 - .long 0x0790000d /* pmxvf32ger a4,vs0,vs1,0,13 */ + .long 0x0790000d /* pmdmxvf32ger a4,vs0,vs1,0,13 */ .long 0xee0008d8 - .long 0x079000b0 /* pmxvf32ger a4,vs0,vs1,11,0 */ + .long 0x079000b0 /* pmdmxvf32ger a4,vs0,vs1,11,0 */ .long 0xee0008d8 - .long 0x079000bd /* pmxvf32ger a4,vs0,vs1,11,13 */ + .long 0x079000bd /* pmdmxvf32ger a4,vs0,vs1,11,13 */ .long 0xee0008d8 - .long 0x07900000 /* pmxvf32gernn a4,vs0,vs1,0,0 */ + .long 0x07900000 /* pmdmxvf32gernn a4,vs0,vs1,0,0 */ .long 0xee000ed0 - .long 0x0790000d /* pmxvf32gernn a4,vs0,vs1,0,13 */ + .long 0x0790000d /* pmdmxvf32gernn a4,vs0,vs1,0,13 */ .long 0xee000ed0 - .long 0x079000b0 /* pmxvf32gernn a4,vs0,vs1,11,0 */ + .long 0x079000b0 /* pmdmxvf32gernn a4,vs0,vs1,11,0 */ .long 0xee000ed0 - .long 0x079000bd /* pmxvf32gernn a4,vs0,vs1,11,13 */ + .long 0x079000bd /* pmdmxvf32gernn a4,vs0,vs1,11,13 */ .long 0xee000ed0 - .long 0x07900000 /* pmxvf32gernp a4,vs0,vs1,0,0 */ + .long 0x07900000 /* pmdmxvf32gernp a4,vs0,vs1,0,0 */ .long 0xee000ad0 - .long 0x0790000d /* pmxvf32gernp a4,vs0,vs1,0,13 */ + .long 0x0790000d /* pmdmxvf32gernp a4,vs0,vs1,0,13 */ .long 0xee000ad0 - .long 0x079000b0 /* pmxvf32gernp a4,vs0,vs1,11,0 */ + .long 0x079000b0 /* pmdmxvf32gernp a4,vs0,vs1,11,0 */ .long 0xee000ad0 - .long 0x079000bd /* pmxvf32gernp a4,vs0,vs1,11,13 */ + .long 0x079000bd /* pmdmxvf32gernp a4,vs0,vs1,11,13 */ .long 0xee000ad0 - .long 0x07900000 /* pmxvf32gerpn a4,vs0,vs1,0,0 */ + .long 0x07900000 /* pmdmxvf32gerpn a4,vs0,vs1,0,0 */ .long 0xee000cd0 - .long 0x0790000d /* pmxvf32gerpn a4,vs0,vs1,0,13 */ + .long 0x0790000d /* pmdmxvf32gerpn a4,vs0,vs1,0,13 */ .long 0xee000cd0 - .long 0x079000b0 /* pmxvf32gerpn a4,vs0,vs1,11,0 */ + .long 0x079000b0 /* pmdmxvf32gerpn a4,vs0,vs1,11,0 */ .long 0xee000cd0 - .long 0x079000bd /* pmxvf32gerpn a4,vs0,vs1,11,13 */ + .long 0x079000bd /* pmdmxvf32gerpn a4,vs0,vs1,11,13 */ .long 0xee000cd0 - .long 0x07900000 /* pmxvf32gerpp a4,vs0,vs1,0,0 */ + .long 0x07900000 /* pmdmxvf32gerpp a4,vs0,vs1,0,0 */ .long 0xee0008d0 - .long 0x0790000d /* pmxvf32gerpp a4,vs0,vs1,0,13 */ + .long 0x0790000d /* pmdmxvf32gerpp a4,vs0,vs1,0,13 */ .long 0xee0008d0 - .long 0x079000b0 /* pmxvf32gerpp a4,vs0,vs1,11,0 */ + .long 0x079000b0 /* pmdmxvf32gerpp a4,vs0,vs1,11,0 */ .long 0xee0008d0 - .long 0x079000bd /* pmxvf32gerpp a4,vs0,vs1,11,13 */ + .long 0x079000bd /* pmdmxvf32gerpp a4,vs0,vs1,11,13 */ .long 0xee0008d0 - .long 0x07900000 /* pmxvf64ger a4,vs22,vs0,0,0 */ + .long 
0x07900000 /* pmdmxvf64ger a4,vs22,vs0,0,0 */ .long 0xee1601d8 - .long 0x07900004 /* pmxvf64ger a4,vs22,vs0,0,1 */ + .long 0x07900004 /* pmdmxvf64ger a4,vs22,vs0,0,1 */ .long 0xee1601d8 - .long 0x079000b0 /* pmxvf64ger a4,vs22,vs0,11,0 */ + .long 0x079000b0 /* pmdmxvf64ger a4,vs22,vs0,11,0 */ .long 0xee1601d8 - .long 0x079000b4 /* pmxvf64ger a4,vs22,vs0,11,1 */ + .long 0x079000b4 /* pmdmxvf64ger a4,vs22,vs0,11,1 */ .long 0xee1601d8 - .long 0x07900000 /* pmxvf64gernn a4,vs22,vs0,0,0 */ + .long 0x07900000 /* pmdmxvf64gernn a4,vs22,vs0,0,0 */ .long 0xee1607d0 - .long 0x07900004 /* pmxvf64gernn a4,vs22,vs0,0,1 */ + .long 0x07900004 /* pmdmxvf64gernn a4,vs22,vs0,0,1 */ .long 0xee1607d0 - .long 0x079000b0 /* pmxvf64gernn a4,vs22,vs0,11,0 */ + .long 0x079000b0 /* pmdmxvf64gernn a4,vs22,vs0,11,0 */ .long 0xee1607d0 - .long 0x079000b4 /* pmxvf64gernn a4,vs22,vs0,11,1 */ + .long 0x079000b4 /* pmdmxvf64gernn a4,vs22,vs0,11,1 */ .long 0xee1607d0 - .long 0x07900000 /* pmxvf64gernp a4,vs22,vs0,0,0 */ + .long 0x07900000 /* pmdmxvf64gernp a4,vs22,vs0,0,0 */ .long 0xee1603d0 - .long 0x07900004 /* pmxvf64gernp a4,vs22,vs0,0,1 */ + .long 0x07900004 /* pmdmxvf64gernp a4,vs22,vs0,0,1 */ .long 0xee1603d0 - .long 0x079000b0 /* pmxvf64gernp a4,vs22,vs0,11,0 */ + .long 0x079000b0 /* pmdmxvf64gernp a4,vs22,vs0,11,0 */ .long 0xee1603d0 - .long 0x079000b4 /* pmxvf64gernp a4,vs22,vs0,11,1 */ + .long 0x079000b4 /* pmdmxvf64gernp a4,vs22,vs0,11,1 */ .long 0xee1603d0 - .long 0x07900000 /* pmxvf64gerpn a4,vs22,vs0,0,0 */ + .long 0x07900000 /* pmdmxvf64gerpn a4,vs22,vs0,0,0 */ .long 0xee1605d0 - .long 0x07900004 /* pmxvf64gerpn a4,vs22,vs0,0,1 */ + .long 0x07900004 /* pmdmxvf64gerpn a4,vs22,vs0,0,1 */ .long 0xee1605d0 - .long 0x079000b0 /* pmxvf64gerpn a4,vs22,vs0,11,0 */ + .long 0x079000b0 /* pmdmxvf64gerpn a4,vs22,vs0,11,0 */ .long 0xee1605d0 - .long 0x079000b4 /* pmxvf64gerpn a4,vs22,vs0,11,1 */ + .long 0x079000b4 /* pmdmxvf64gerpn a4,vs22,vs0,11,1 */ .long 0xee1605d0 - .long 0x07900000 /* pmxvf64gerpp a4,vs22,vs0,0,0 */ + .long 0x07900000 /* pmdmxvf64gerpp a4,vs22,vs0,0,0 */ .long 0xee1601d0 - .long 0x07900004 /* pmxvf64gerpp a4,vs22,vs0,0,1 */ + .long 0x07900004 /* pmdmxvf64gerpp a4,vs22,vs0,0,1 */ .long 0xee1601d0 - .long 0x079000b0 /* pmxvf64gerpp a4,vs22,vs0,11,0 */ + .long 0x079000b0 /* pmdmxvf64gerpp a4,vs22,vs0,11,0 */ .long 0xee1601d0 - .long 0x079000b4 /* pmxvf64gerpp a4,vs22,vs0,11,1 */ + .long 0x079000b4 /* pmdmxvf64gerpp a4,vs22,vs0,11,1 */ .long 0xee1601d0 - .long 0x07900000 /* pmxvi16ger2 a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi16ger2 a4,vs0,vs1,0,0,0 */ .long 0xee000a58 - .long 0x07904000 /* pmxvi16ger2 a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvi16ger2 a4,vs0,vs1,0,0,1 */ .long 0xee000a58 - .long 0x0790000d /* pmxvi16ger2 a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi16ger2 a4,vs0,vs1,0,13,0 */ .long 0xee000a58 - .long 0x0790400d /* pmxvi16ger2 a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvi16ger2 a4,vs0,vs1,0,13,1 */ .long 0xee000a58 - .long 0x079000b0 /* pmxvi16ger2 a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi16ger2 a4,vs0,vs1,11,0,0 */ .long 0xee000a58 - .long 0x079040b0 /* pmxvi16ger2 a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvi16ger2 a4,vs0,vs1,11,0,1 */ .long 0xee000a58 - .long 0x079000bd /* pmxvi16ger2 a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvi16ger2 a4,vs0,vs1,11,13,0 */ .long 0xee000a58 - .long 0x079040bd /* pmxvi16ger2 a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvi16ger2 a4,vs0,vs1,11,13,1 */ .long 0xee000a58 - .long 0x07900000 /* pmxvi16ger2pp 
a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi16ger2pp a4,vs0,vs1,0,0,0 */ .long 0xee000b58 - .long 0x07904000 /* pmxvi16ger2pp a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvi16ger2pp a4,vs0,vs1,0,0,1 */ .long 0xee000b58 - .long 0x0790000d /* pmxvi16ger2pp a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi16ger2pp a4,vs0,vs1,0,13,0 */ .long 0xee000b58 - .long 0x0790400d /* pmxvi16ger2pp a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvi16ger2pp a4,vs0,vs1,0,13,1 */ .long 0xee000b58 - .long 0x079000b0 /* pmxvi16ger2pp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi16ger2pp a4,vs0,vs1,11,0,0 */ .long 0xee000b58 - .long 0x079040b0 /* pmxvi16ger2pp a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvi16ger2pp a4,vs0,vs1,11,0,1 */ .long 0xee000b58 - .long 0x079000bd /* pmxvi16ger2pp a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvi16ger2pp a4,vs0,vs1,11,13,0 */ .long 0xee000b58 - .long 0x079040bd /* pmxvi16ger2pp a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvi16ger2pp a4,vs0,vs1,11,13,1 */ .long 0xee000b58 - .long 0x07900000 /* pmxvi16ger2s a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi16ger2s a4,vs0,vs1,0,0,0 */ .long 0xee000958 - .long 0x07904000 /* pmxvi16ger2s a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvi16ger2s a4,vs0,vs1,0,0,1 */ .long 0xee000958 - .long 0x0790000d /* pmxvi16ger2s a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi16ger2s a4,vs0,vs1,0,13,0 */ .long 0xee000958 - .long 0x0790400d /* pmxvi16ger2s a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvi16ger2s a4,vs0,vs1,0,13,1 */ .long 0xee000958 - .long 0x079000b0 /* pmxvi16ger2s a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi16ger2s a4,vs0,vs1,11,0,0 */ .long 0xee000958 - .long 0x079040b0 /* pmxvi16ger2s a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvi16ger2s a4,vs0,vs1,11,0,1 */ .long 0xee000958 - .long 0x079000bd /* pmxvi16ger2s a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvi16ger2s a4,vs0,vs1,11,13,0 */ .long 0xee000958 - .long 0x079040bd /* pmxvi16ger2s a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvi16ger2s a4,vs0,vs1,11,13,1 */ .long 0xee000958 - .long 0x07900000 /* pmxvi16ger2spp a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi16ger2spp a4,vs0,vs1,0,0,0 */ .long 0xee000950 - .long 0x07904000 /* pmxvi16ger2spp a4,vs0,vs1,0,0,1 */ + .long 0x07904000 /* pmdmxvi16ger2spp a4,vs0,vs1,0,0,1 */ .long 0xee000950 - .long 0x0790000d /* pmxvi16ger2spp a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi16ger2spp a4,vs0,vs1,0,13,0 */ .long 0xee000950 - .long 0x0790400d /* pmxvi16ger2spp a4,vs0,vs1,0,13,1 */ + .long 0x0790400d /* pmdmxvi16ger2spp a4,vs0,vs1,0,13,1 */ .long 0xee000950 - .long 0x079000b0 /* pmxvi16ger2spp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi16ger2spp a4,vs0,vs1,11,0,0 */ .long 0xee000950 - .long 0x079040b0 /* pmxvi16ger2spp a4,vs0,vs1,11,0,1 */ + .long 0x079040b0 /* pmdmxvi16ger2spp a4,vs0,vs1,11,0,1 */ .long 0xee000950 - .long 0x079000bd /* pmxvi16ger2spp a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvi16ger2spp a4,vs0,vs1,11,13,0 */ .long 0xee000950 - .long 0x079040bd /* pmxvi16ger2spp a4,vs0,vs1,11,13,1 */ + .long 0x079040bd /* pmdmxvi16ger2spp a4,vs0,vs1,11,13,1 */ .long 0xee000950 - .long 0x07900000 /* pmxvi4ger8 a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi4ger8 a4,vs0,vs1,0,0,0 */ .long 0xee000918 - .long 0x07902d00 /* pmxvi4ger8 a4,vs0,vs1,0,0,45 */ + .long 0x07902d00 /* pmdmxvi4ger8 a4,vs0,vs1,0,0,45 */ .long 0xee000918 - .long 0x07900001 /* pmxvi4ger8 a4,vs0,vs1,0,1,0 */ + .long 0x07900001 /* pmdmxvi4ger8 a4,vs0,vs1,0,1,0 */ .long 0xee000918 - .long 0x07902d01 /* 
pmxvi4ger8 a4,vs0,vs1,0,1,45 */ + .long 0x07902d01 /* pmdmxvi4ger8 a4,vs0,vs1,0,1,45 */ .long 0xee000918 - .long 0x079000b0 /* pmxvi4ger8 a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi4ger8 a4,vs0,vs1,11,0,0 */ .long 0xee000918 - .long 0x07902db0 /* pmxvi4ger8 a4,vs0,vs1,11,0,45 */ + .long 0x07902db0 /* pmdmxvi4ger8 a4,vs0,vs1,11,0,45 */ .long 0xee000918 - .long 0x079000b1 /* pmxvi4ger8 a4,vs0,vs1,11,1,0 */ + .long 0x079000b1 /* pmdmxvi4ger8 a4,vs0,vs1,11,1,0 */ .long 0xee000918 - .long 0x07902db1 /* pmxvi4ger8 a4,vs0,vs1,11,1,45 */ + .long 0x07902db1 /* pmdmxvi4ger8 a4,vs0,vs1,11,1,45 */ .long 0xee000918 - .long 0x07900000 /* pmxvi4ger8pp a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi4ger8pp a4,vs0,vs1,0,0,0 */ .long 0xee000910 - .long 0x07902d00 /* pmxvi4ger8pp a4,vs0,vs1,0,0,45 */ + .long 0x07902d00 /* pmdmxvi4ger8pp a4,vs0,vs1,0,0,45 */ .long 0xee000910 - .long 0x07900001 /* pmxvi4ger8pp a4,vs0,vs1,0,1,0 */ + .long 0x07900001 /* pmdmxvi4ger8pp a4,vs0,vs1,0,1,0 */ .long 0xee000910 - .long 0x07902d01 /* pmxvi4ger8pp a4,vs0,vs1,0,1,45 */ + .long 0x07902d01 /* pmdmxvi4ger8pp a4,vs0,vs1,0,1,45 */ .long 0xee000910 - .long 0x079000b0 /* pmxvi4ger8pp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi4ger8pp a4,vs0,vs1,11,0,0 */ .long 0xee000910 - .long 0x07902db0 /* pmxvi4ger8pp a4,vs0,vs1,11,0,45 */ + .long 0x07902db0 /* pmdmxvi4ger8pp a4,vs0,vs1,11,0,45 */ .long 0xee000910 - .long 0x079000b1 /* pmxvi4ger8pp a4,vs0,vs1,11,1,0 */ + .long 0x079000b1 /* pmdmxvi4ger8pp a4,vs0,vs1,11,1,0 */ .long 0xee000910 - .long 0x07902db1 /* pmxvi4ger8pp a4,vs0,vs1,11,1,45 */ + .long 0x07902db1 /* pmdmxvi4ger8pp a4,vs0,vs1,11,1,45 */ .long 0xee000910 - .long 0x07900000 /* pmxvi8ger4 a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi8ger4 a4,vs0,vs1,0,0,0 */ .long 0xee000818 - .long 0x07905000 /* pmxvi8ger4 a4,vs0,vs1,0,0,5 */ + .long 0x07905000 /* pmdmxvi8ger4 a4,vs0,vs1,0,0,5 */ .long 0xee000818 - .long 0x0790000d /* pmxvi8ger4 a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi8ger4 a4,vs0,vs1,0,13,0 */ .long 0xee000818 - .long 0x0790500d /* pmxvi8ger4 a4,vs0,vs1,0,13,5 */ + .long 0x0790500d /* pmdmxvi8ger4 a4,vs0,vs1,0,13,5 */ .long 0xee000818 - .long 0x079000b0 /* pmxvi8ger4 a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi8ger4 a4,vs0,vs1,11,0,0 */ .long 0xee000818 - .long 0x079050b0 /* pmxvi8ger4 a4,vs0,vs1,11,0,5 */ + .long 0x079050b0 /* pmdmxvi8ger4 a4,vs0,vs1,11,0,5 */ .long 0xee000818 - .long 0x079000bd /* pmxvi8ger4 a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvi8ger4 a4,vs0,vs1,11,13,0 */ .long 0xee000818 - .long 0x079050bd /* pmxvi8ger4 a4,vs0,vs1,11,13,5 */ + .long 0x079050bd /* pmdmxvi8ger4 a4,vs0,vs1,11,13,5 */ .long 0xee000818 - .long 0x07900000 /* pmxvi8ger4pp a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi8ger4pp a4,vs0,vs1,0,0,0 */ .long 0xee000810 - .long 0x07905000 /* pmxvi8ger4pp a4,vs0,vs1,0,0,5 */ + .long 0x07905000 /* pmdmxvi8ger4pp a4,vs0,vs1,0,0,5 */ .long 0xee000810 - .long 0x0790000d /* pmxvi8ger4pp a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi8ger4pp a4,vs0,vs1,0,13,0 */ .long 0xee000810 - .long 0x0790500d /* pmxvi8ger4pp a4,vs0,vs1,0,13,5 */ + .long 0x0790500d /* pmdmxvi8ger4pp a4,vs0,vs1,0,13,5 */ .long 0xee000810 - .long 0x079000b0 /* pmxvi8ger4pp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi8ger4pp a4,vs0,vs1,11,0,0 */ .long 0xee000810 - .long 0x079050b0 /* pmxvi8ger4pp a4,vs0,vs1,11,0,5 */ + .long 0x079050b0 /* pmdmxvi8ger4pp a4,vs0,vs1,11,0,5 */ .long 0xee000810 - .long 0x079000bd /* pmxvi8ger4pp a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* 
pmdmxvi8ger4pp a4,vs0,vs1,11,13,0 */ .long 0xee000810 - .long 0x079050bd /* pmxvi8ger4pp a4,vs0,vs1,11,13,5 */ + .long 0x079050bd /* pmdmxvi8ger4pp a4,vs0,vs1,11,13,5 */ .long 0xee000810 - .long 0x07900000 /* pmxvi8ger4spp a4,vs0,vs1,0,0,0 */ + .long 0x07900000 /* pmdmxvi8ger4spp a4,vs0,vs1,0,0,0 */ .long 0xee000b18 - .long 0x07905000 /* pmxvi8ger4spp a4,vs0,vs1,0,0,5 */ + .long 0x07905000 /* pmdmxvi8ger4spp a4,vs0,vs1,0,0,5 */ .long 0xee000b18 - .long 0x0790000d /* pmxvi8ger4spp a4,vs0,vs1,0,13,0 */ + .long 0x0790000d /* pmdmxvi8ger4spp a4,vs0,vs1,0,13,0 */ .long 0xee000b18 - .long 0x0790500d /* pmxvi8ger4spp a4,vs0,vs1,0,13,5 */ + .long 0x0790500d /* pmdmxvi8ger4spp a4,vs0,vs1,0,13,5 */ .long 0xee000b18 - .long 0x079000b0 /* pmxvi8ger4spp a4,vs0,vs1,11,0,0 */ + .long 0x079000b0 /* pmdmxvi8ger4spp a4,vs0,vs1,11,0,0 */ .long 0xee000b18 - .long 0x079050b0 /* pmxvi8ger4spp a4,vs0,vs1,11,0,5 */ + .long 0x079050b0 /* pmdmxvi8ger4spp a4,vs0,vs1,11,0,5 */ .long 0xee000b18 - .long 0x079000bd /* pmxvi8ger4spp a4,vs0,vs1,11,13,0 */ + .long 0x079000bd /* pmdmxvi8ger4spp a4,vs0,vs1,11,13,0 */ .long 0xee000b18 - .long 0x079050bd /* pmxvi8ger4spp a4,vs0,vs1,11,13,5 */ + .long 0x079050bd /* pmdmxvi8ger4spp a4,vs0,vs1,11,13,5 */ .long 0xee000b18 .long 0x06000000 /* pstb r0,0(r1) */ .long 0x98010000

From patchwork Thu Nov 3 17:30:41 2022
X-Patchwork-Submitter: Carl Love
X-Patchwork-Id: 59847
Message-ID: <2e5b20260e2c74de4c4419f08501a1073564c0d5.camel@us.ibm.com>
Subject: [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes.
To: "gdb-patches@sourceware.org"
Date: Thu, 03 Nov 2022 10:30:41 -0700
From: Carl Love
Cc: Ulrich Weigand

GDB maintainers:

This patch updates the instruction names used in comments in multiple
files to the new mnemonic names.  The mnemonics for the various MMA
instructions were changed by commit:

commit bb98553cad4e017f1851153fa5de91f2cee98fb2
Author: Peter Bergner
Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

This patch only changes the comments in the files; there are no
functional changes.  The patch has been tested as part of the patch set
with no regression errors.

                     Carl Love

--------------------------
PowerPC update comments for the MMA instruction name changes.

The mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*,
pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* instructions were officially
changed to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The old mnemonics are
still supported by the assembler as extended mnemonics.  The
disassembler generates the new mnemonics.
The name changes occurred in commit:

commit bb98553cad4e017f1851153fa5de91f2cee98fb2
Author: Peter Bergner
Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

gas/
	* config/tc-ppc.c (md_assemble): Only check for prefix opcodes.
	* testsuite/gas/ppc/rfc02658.s: New test.
	* testsuite/gas/ppc/rfc02658.d: Likewise.
	* testsuite/gas/ppc/ppc.exp: Run it.

opcodes/
	* ppc-opc.c (XMSK8, P_GERX4_MASK, P_GERX2_MASK, XX3GERX_MASK): New.
	(powerpc_opcodes): Add dmxvi8gerx4pp, dmxvi8gerx4, dmxvf16gerx2pp,
	dmxvf16gerx2, dmxvbf16gerx2pp, dmxvf16gerx2np, dmxvbf16gerx2,
	dmxvi8gerx4spp, dmxvbf16gerx2np, dmxvf16gerx2pn, dmxvbf16gerx2pn,
	dmxvf16gerx2nn, dmxvbf16gerx2nn, pmdmxvi8gerx4pp, pmdmxvi8gerx4,
	pmdmxvf16gerx2pp, pmdmxvf16gerx2, pmdmxvbf16gerx2pp, pmdmxvf16gerx2np,
	pmdmxvbf16gerx2, pmdmxvi8gerx4spp, pmdmxvbf16gerx2np, pmdmxvf16gerx2pn,
	pmdmxvbf16gerx2pn, pmdmxvf16gerx2nn, pmdmxvbf16gerx2nn.

This patch updates the comments in the various gdb files to reflect the name
changes.  There are no functional changes made by this patch.
---
 gdb/rs6000-tdep.c                             | 73 +++++++++++--------
 .../gdb.reverse/ppc_record_test_isa_3_1.c     | 15 +++-
 .../gdb.reverse/ppc_record_test_isa_3_1.exp   |  4 +-
 3 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
index 51b41967b41..cbd84514795 100644
--- a/gdb/rs6000-tdep.c
+++ b/gdb/rs6000-tdep.c
@@ -5535,6 +5535,10 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
   int ext = PPC_EXTOP (insn);
   int at = PPC_FIELD (insn, 6, 3);
 
+  /* Note the mnemonics for the pmxvf64ger* instructions were officially
+     changed to pmdmxvf64ger*.  The old mnemonics are still supported as
+     extended mnemonics.  */
+
   switch (ext & 0x1f)
     {
     case 18:		/* Floating Divide */
@@ -5603,7 +5607,8 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
     case 218:	/* VSX Vector 32-bit Floating-Point GER Negative multiply,
 		   Negative accumulate, xvf32gernn */
 
-    case 59:	/* VSX Vector 64-bit Floating-Point GER, pmxvf64ger */
+    case 59:	/* VSX Vector 64-bit Floating-Point GER, pmdmxvf64ger
+		   (pmxvf64ger) */
     case 58:	/* VSX Vector 64-bit Floating-Point GER Positive multiply,
 		   Positive accumulate, xvf64gerpp */
     case 186:	/* VSX Vector 64-bit Floating-Point GER Positive multiply,
@@ -5611,7 +5616,7 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
     case 122:	/* VSX Vector 64-bit Floating-Point GER Negative multiply,
 		   Positive accumulate, xvf64gernp */
     case 250:	/* VSX Vector 64-bit Floating-Point GER Negative multiply,
-		   Negative accumulate, pmxvf64gernn */
+		   Negative accumulate, pmdmxvf64gernn (pmxvf64gernn) */
 
     case 51:	/* VSX Vector bfloat16 GER, xvbf16ger2 */
     case 50:	/* VSX Vector bfloat16 GER Positive multiply,
@@ -6486,98 +6491,106 @@ ppc_process_record_prefix_op59_XX3 (struct gdbarch *gdbarch,
   int at = PPC_FIELD (insn_suffix, 6, 3);
   ppc_gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
 
+  /* Note, the mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*,
+     pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* instructions were officially
+     changed to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
+     pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The old mnemonics are
+     still supported by the assembler as extended mnemonics.  The
+     disassembler generates the new mnemonics.  */
+
   if (type == 3)
     {
       if (ST4 == 9)
	switch (opcode)
	  {
	  case 35:	/* Prefixed Masked VSX Vector 4-bit Signed Integer GER
-			   MMIRR, pmxvi4ger8 */
+			   MMIRR, pmdmxvi4ger8 (pmxvi4ger8) */
	  case 34:	/* Prefixed Masked VSX Vector 4-bit Signed Integer GER
-			   MMIRR, pmxvi4ger8pp */
+			   MMIRR, pmdmxvi4ger8pp (pmxvi4ger8pp) */
	  case 99:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
			   Integer GER with Saturate Positive multiply,
			   Positive accumulate, xvi8ger4spp */
	  case 3:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
-			   Integer GER MMIRR, pmxvi8ger4 */
+			   Integer GER MMIRR, pmdmxvi8ger4 (pmxvi8ger4) */
	  case 2:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
			   Integer GER Positive multiply, Positive accumulate
-			   MMIRR, pmxvi8ger4pp */
+			   MMIRR, pmdmxvi8ger4pp (pmxvi8ger4pp) */
	  case 75:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
-			   GER MMIRR, pmxvi16ger2 */
+			   GER MMIRR, pmdmxvi16ger2 (pmxvi16ger2) */
	  case 107:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
			   GER Positive multiply, Positive accumulate,
-			   pmxvi16ger2pp */
+			   pmdmxvi16ger2pp (pmxvi16ger2pp) */
	  case 43:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
-			   GER with Saturation MMIRR, pmxvi16ger2s */
+			   GER with Saturation MMIRR, pmdmxvi16ger2s
+			   (pmxvi16ger2s) */
	  case 42:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
			   GER with Saturation Positive multiply, Positive
-			   accumulate MMIRR, pmxvi16ger2spp */
+			   accumulate MMIRR, pmdmxvi16ger2spp
+			   (pmxvi16ger2spp) */
	    ppc_record_ACC_fpscr (regcache, tdep, at, false);
	    return 0;

	  case 19:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
-			   GER MMIRR, pmxvf16ger2 */
+			   GER MMIRR, pmdmxvf16ger2 (pmxvf16ger2) */
	  case 18:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
			   GER Positive multiply, Positive accumulate MMIRR,
-			   pmxvf16ger2pp */
+			   pmdmxvf16ger2pp (pmxvf16ger2pp) */
	  case 146:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf16ger2pn */
+			   pmdmxvf16ger2pn (pmxvf16ger2pn) */
	  case 82:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf16ger2np */
+			   pmdmxvf16ger2np (pmxvf16ger2np) */
	  case 210:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf16ger2nn */
+			   pmdmxvf16ger2nn (pmxvf16ger2nn) */
	  case 27:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
-			   GER MMIRR, pmxvf32ger */
+			   GER MMIRR, pmdmxvf32ger (pmxvf32ger) */
	  case 26:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
			   GER Positive multiply, Positive accumulate MMIRR,
-			   pmxvf32gerpp */
+			   pmdmxvf32gerpp (pmxvf32gerpp) */
	  case 154:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf32gerpn */
+			   pmdmxvf32gerpn (pmxvf32gerpn) */
	  case 90:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf32gernp */
+			   pmdmxvf32gernp (pmxvf32gernp) */
	  case 218:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf32gernn */
+			   pmdmxvf32gernn (pmxvf32gernn) */
	  case 59:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
-			   GER MMIRR, pmxvf64ger */
+			   GER MMIRR, pmdmxvf64ger (pmxvf64ger) */
	  case 58:	/* Floating-Point GER Positive multiply, Positive
-			   accumulate MMIRR, pmxvf64gerpp */
+			   accumulate MMIRR, pmdmxvf64gerpp (pmxvf64gerpp) */
	  case 186:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf64gerpn */
+			   pmdmxvf64gerpn (pmxvf64gerpn) */
	  case 122:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf64gernp */
+			   pmdmxvf64gernp (pmxvf64gernp) */
	  case 250:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf64gernn */
+			   pmdmxvf64gernn (pmxvf64gernn) */
	  case 51:	/* Prefixed Masked VSX Vector bfloat16 GER MMIRR,
-			   pmxvbf16ger2 */
+			   pmdmxvbf16ger2 (pmxvbf16ger2) */
	  case 50:	/* Prefixed Masked VSX Vector bfloat16 GER Positive
			   multiply, Positive accumulate MMIRR,
-			   pmxvbf16ger2pp */
+			   pmdmxvbf16ger2pp (pmxvbf16ger2pp) */
	  case 178:	/* Prefixed Masked VSX Vector bfloat16 GER Positive
			   multiply, Negative accumulate MMIRR,
-			   pmxvbf16ger2pn */
+			   pmdmxvbf16ger2pn (pmxvbf16ger2pn) */
	  case 114:	/* Prefixed Masked VSX Vector bfloat16 GER Negative
			   multiply, Positive accumulate MMIRR,
-			   pmxvbf16ger2np */
+			   pmdmxvbf16ger2np (pmxvbf16ger2np) */
	  case 242:	/* Prefixed Masked VSX Vector bfloat16 GER Negative
			   multiply, Negative accumulate MMIRR,
-			   pmxvbf16ger2nn */
+			   pmdmxvbf16ger2nn (pmxvbf16ger2nn) */
	    ppc_record_ACC_fpscr (regcache, tdep, at, true);
	    return 0;
	  }

diff --git a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
index c0d65d944af..6513b61d40a 100644
--- a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
+++ b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
@@ -22,6 +22,13 @@ static unsigned long ra, rb, rs;
 int
 main ()
 {
+
+  /* This test is used to verify the recording of the MMA instructions.  The
+     names of the MMA instructions pmxvbf16ger*, pmxvf32ger*, pmxvf64ger*,
+     pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* were officially changed to
+     pmdmxvbf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
+     pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The new mnemonics are
+     used in this test.  */
   ra = 0xABCDEF012;
   rb = 0;
   rs = 0x012345678;
@@ -42,8 +49,8 @@ main ()
      xxsetaccz    - ACC[3]
      xvi4ger8     - ACC[4]
      xvf16ger2pn  - ACC[5]
-     pmxvi8ger4   - ACC[6]
-     pmxvf32gerpp - ACC[7] and fpscr */
+     pmdmxvi8ger4   - ACC[6]
+     pmdmxvf32gerpp - ACC[7] and fpscr */
   /* Need to initialize the vs registers to a non zero value.  */
   ra = (unsigned long) & vec_xb;
   __asm__ __volatile__ ("lxvd2x 12, %0, %1" :: "r" (ra ), "r" (rb));
@@ -87,9 +94,9 @@ main ()
 			 "wa" (vec_xb) );
   __asm__ __volatile__ ("xvf16ger2pn 5, %x0, %x1" :: "wa" (vec_xa),\
 			 "wa" (vec_xb) );
-  __asm__ __volatile__ ("pmxvi8ger4spp 6, %x0, %x1, 11, 13, 5"
+  __asm__ __volatile__ ("pmdmxvi8ger4spp 6, %x0, %x1, 11, 13, 5"
 			:: "wa" (vec_xa), "wa" (vec_xb) );
-  __asm__ __volatile__ ("pmxvf32gerpp 7, %x0, %x1, 11, 13"
+  __asm__ __volatile__ ("pmdmxvf32gerpp 7, %x0, %x1, 11, 13"
 			:: "wa" (vec_xa), "wa" (vec_xb) );
   ra = 0;			/* stop 4 */
 }

diff --git a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
index 8cecb067667..d5a1279374d 100644
--- a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
+++ b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
@@ -121,8 +121,8 @@ gdb_test_no_output "record" "start recording test2"
 ## xxsetaccz      - ACC[3], vs[12] to vs[15]
 ## xvi4ger8       - ACC[4], vs[16] to vs[19]
 ## xvf16ger2pn    - ACC[5], vs[20] to vs[23]
-## pmxvi8ger4     - ACC[6], vs[21] to vs[27]
-## pmxvf32gerpp   - ACC[7], vs[28] to vs[31] and fpscr
+## pmdmxvi8ger4   - ACC[6], vs[24] to vs[27]
+## pmdmxvf32gerpp - ACC[7], vs[28] to vs[31] and fpscr
 set stop3 [gdb_get_line_number "stop 3"]
 set stop4 [gdb_get_line_number "stop 4"]
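For readers cross-checking the register lists above: ISA 3.1 overlays each
MMA accumulator ACC[i] on the four consecutive VSX registers vs[4*i]
through vs[4*i+3], so ACC[6] spans vs[24] to vs[27] and ACC[7] spans vs[28]
to vs[31].  A trivial stand-alone sketch of that mapping, for illustration
only and not part of this patch:

/* acc_to_vsx.c - print the VSX register range each MMA accumulator
   overlays under ISA 3.1: ACC[i] -> vs[4*i] .. vs[4*i+3].  */
#include <stdio.h>

int
main (void)
{
  for (int acc = 0; acc < 8; acc++)
    printf ("ACC[%d] overlays vs[%d] to vs[%d]\n",
	    acc, 4 * acc, 4 * acc + 3);
  return 0;
}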