From patchwork Wed Dec 20 13:03:18 2017
X-Patchwork-Submitter: Ulrich Weigand
X-Patchwork-Id: 25047
Subject: [pushed] Fix Cell/B.E. regression (Re: [PATCH 1/3] Clear
 non-significant bits of address on memory access)
To: qiyaoltc@gmail.com (Yao Qi)
Date: Wed, 20 Dec 2017 14:03:18 +0100 (CET)
From: "Ulrich Weigand"
Cc: gdb-patches@sourceware.org (GDB Patches)
In-Reply-To: <868tdxlo28.fsf@gmail.com> from "Yao Qi" at Dec 20, 2017 09:57:03 AM
Message-Id: <20171220130318.D649DD80341@oc3748833570.ibm.com>

Yao Qi wrote:

> Nowadays, we strip the non-significant bits of an address and pass the
> stripped address to the target cache and to to_xfer_partial.
> It works for aarch64, but breaks ppc32/spu.  However, in the ppc32/spu
> case, ppu addresses and spu addresses are mixed together,
> differentiated by the top bit, so the number of significant address
> bits is 64, because we can't remove any of them.  IMO, it is
> reasonable to set significant_addr_bits to 64 on ppc.
>
> I considered your suggestion of pushing the address_significant call
> down, below the spu-multiarch target, but that would mean many
> targets' to_xfer_partial would need to call address_significant, so I
> didn't do that.  Secondly, in the way you suggested, we would still
> pass the original address to the target cache, which works for
> ppu/spu but doesn't work for aarch64.

I've now pushed the patch below, which fixes the regression for now.

Longer term, I think the correct fix would probably be to make address
spaces explicit, e.g. by passing an address space identifier to
xfer_partial.  The gdbarch associated with that address space should
then determine whether truncation is required ...

Bye,
Ulrich

gdb/ChangeLog:

	* spu-tdep.c (spu_gdbarch_init): Set set_gdbarch_significant_addr_bit
	to 64 bits.
	* ppc-linux-tdep.c (ppc_linux_init_abi): Likewise, if Cell/B.E.
	is supported.

diff --git a/gdb/ppc-linux-tdep.c b/gdb/ppc-linux-tdep.c
index 0e43a64..5120490 100644
--- a/gdb/ppc-linux-tdep.c
+++ b/gdb/ppc-linux-tdep.c
@@ -1809,6 +1809,10 @@ ppc_linux_init_abi (struct gdbarch_info info,
       /* Cell/B.E. cross-architecture unwinder support.  */
       frame_unwind_prepend_unwinder (gdbarch, &ppu2spu_unwind);
+
+      /* We need to support more than "addr_bit" significant address bits
+	 in order to support SPUADDR_ADDR encoded values.  */
+      set_gdbarch_significant_addr_bit (gdbarch, 64);
     }
 
   set_gdbarch_displaced_step_location (gdbarch,

diff --git a/gdb/spu-tdep.c b/gdb/spu-tdep.c
index fb9a5d8..dda3011 100644
--- a/gdb/spu-tdep.c
+++ b/gdb/spu-tdep.c
@@ -2720,6 +2720,9 @@ spu_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
   set_gdbarch_address_class_name_to_type_flags
     (gdbarch, spu_address_class_name_to_type_flags);
+
+  /* We need to support more than "addr_bit" significant address bits
+     in order to support SPUADDR_ADDR encoded values.  */
+  set_gdbarch_significant_addr_bit (gdbarch, 64);
 
   /* Inferior function calls.  */
   set_gdbarch_call_dummy_location (gdbarch, ON_STACK);