From patchwork Tue Mar 26 14:44:01 2019
X-Patchwork-Submitter: Tom Tromey
X-Patchwork-Id: 31993
From: Tom Tromey
To: gdb-patches@sourceware.org
Cc: Tom Tromey
Subject: [RFA 5/8] Wrap xfer_partial and enable_btrace for Ravenscar
Date: Tue, 26 Mar 2019 08:44:01 -0600
Message-Id: <20190326144404.6670-6-tromey@adacore.com>
In-Reply-To: <20190326144404.6670-1-tromey@adacore.com>
References: <20190326144404.6670-1-tromey@adacore.com>

A gdb crash showed that
the xfer_partial target method was not wrapped for Ravenscar.  This
caused remote.c to call remote::set_general_thread with a Ravenscar
"fake" ptid, which showed up later as an event ptid.

I went through all the target methods and looked to see which ones
could call set_general_thread or set_continue_thread (but not
set_general_process, as I think Ravenscar targets aren't
multi-inferior).  This patch wraps the two that I found.

xfer_partial requires special treatment, because it can be called
recursively via get_base_thread_from_ravenscar_task.  To avoid a
recursive call, this patch changes update_thread_list to record all
tasks in the cpu_map, and changes get_thread_base_cpu to prefer this
map.  This avoids some memory reads.

It was unclear to me whether enable_btrace really makes sense for
Ravenscar; but at the same time it seemed harmless to add this patch.

gdb/ChangeLog
2019-03-26  Tom Tromey

	* ravenscar-thread.c (xfer_partial, enable_btrace, add_thread):
	New methods.
	(ravenscar_thread_target::get_thread_base_cpu): Check cpu_map
	first.
	(ravenscar_thread_target::add_thread): Rename from
	ravenscar_add_thread.
	(ravenscar_thread_target::update_thread_list): Use a lambda.
	(ravenscar_thread_target::xfer_partial): New method.
---
 gdb/ChangeLog          | 11 +++++++
 gdb/ravenscar-thread.c | 66 +++++++++++++++++++++++++++++++++++-------
 2 files changed, 66 insertions(+), 11 deletions(-)

diff --git a/gdb/ravenscar-thread.c b/gdb/ravenscar-thread.c
index a3545847288..85a86f5698e 100644
--- a/gdb/ravenscar-thread.c
+++ b/gdb/ravenscar-thread.c
@@ -102,6 +102,13 @@ struct ravenscar_thread_target final : public target_ops
 
   bool stopped_data_address (CORE_ADDR *) override;
 
+  enum target_xfer_status xfer_partial (enum target_object object,
+					const char *annex,
+					gdb_byte *readbuf,
+					const gdb_byte *writebuf,
+					ULONGEST offset, ULONGEST len,
+					ULONGEST *xfered_len) override;
+
   bool thread_alive (ptid_t ptid) override;
 
   int core_of_thread (ptid_t ptid) override;
@@ -112,6 +119,14 @@ struct ravenscar_thread_target final : public target_ops
 
   ptid_t get_ada_task_ptid (long lwp, long thread) override;
 
+  struct btrace_target_info *enable_btrace (ptid_t ptid,
+					    const struct btrace_config *conf)
+    override
+  {
+    ptid = get_base_thread_from_ravenscar_task (ptid);
+    return beneath ()->enable_btrace (ptid, conf);
+  }
+
   void mourn_inferior () override;
 
   void close () override
@@ -133,6 +148,7 @@ private:
   bool runtime_initialized ();
   int get_thread_base_cpu (ptid_t ptid);
   ptid_t get_base_thread_from_ravenscar_task (ptid_t ptid);
+  void add_thread (struct ada_task_info *task);
 
   /* This maps a TID to the CPU on which it was running.  This is
      needed because sometimes the runtime will report an active task
@@ -169,16 +185,18 @@ ravenscar_thread_target::get_thread_base_cpu (ptid_t ptid)
 
   if (is_ravenscar_task (ptid))
     {
-      struct ada_task_info *task_info = ada_get_task_info_from_ptid (ptid);
+      /* Prefer to not read inferior memory if possible, to avoid
+	 reentrancy problems with xfer_partial.  */
+      auto iter = cpu_map.find (ptid.tid ());
 
-      if (task_info != NULL)
-	base_cpu = task_info->base_cpu;
+      if (iter != cpu_map.end ())
+	base_cpu = iter->second;
       else
 	{
-	  auto iter = cpu_map.find (ptid.tid ());
+	  struct ada_task_info *task_info = ada_get_task_info_from_ptid (ptid);
 
-	  gdb_assert (iter != cpu_map.end ());
-	  base_cpu = iter->second;
+	  gdb_assert (task_info != NULL);
+	  base_cpu = task_info->base_cpu;
 	}
     }
   else
@@ -254,7 +272,7 @@ ravenscar_thread_target::update_inferior_ptid ()
      may not always add it to the thread list.  Add it here.  */
   if (!find_thread_ptid (inferior_ptid))
     {
-      add_thread (inferior_ptid);
+      ::add_thread (inferior_ptid);
       cpu_map[inferior_ptid.tid ()] = base_cpu;
     }
 }
@@ -380,11 +398,14 @@ ravenscar_thread_target::wait (ptid_t ptid,
 /* Add the thread associated to the given TASK to the thread list
    (if the thread has already been added, this is a no-op).  */
 
-static void
-ravenscar_add_thread (struct ada_task_info *task)
+void
+ravenscar_thread_target::add_thread (struct ada_task_info *task)
 {
   if (find_thread_ptid (task->ptid) == NULL)
-    add_thread (task->ptid);
+    {
+      ::add_thread (task->ptid);
+      cpu_map[task->ptid.tid ()] = task->base_cpu;
+    }
 }
 
 void
@@ -395,7 +416,10 @@ ravenscar_thread_target::update_thread_list ()
      (m_base_ptid) and the running thread, that may not have been
      included to system.tasking.debug's list yet.  */
 
-  iterate_over_live_ada_tasks (ravenscar_add_thread);
+  iterate_over_live_ada_tasks ([=] (struct ada_task_info *task)
+			       {
+				 this->add_thread (task);
+			       });
 }
 
 ptid_t
@@ -537,6 +561,26 @@ ravenscar_thread_target::core_of_thread (ptid_t ptid)
   return beneath ()->core_of_thread (inferior_ptid);
 }
 
+/* Implement the target xfer_partial method.  */
+
+enum target_xfer_status
+ravenscar_thread_target::xfer_partial (enum target_object object,
+				       const char *annex,
+				       gdb_byte *readbuf,
+				       const gdb_byte *writebuf,
+				       ULONGEST offset, ULONGEST len,
+				       ULONGEST *xfered_len)
+{
+  scoped_restore save_ptid = make_scoped_restore (&inferior_ptid);
+  /* Calling get_base_thread_from_ravenscar_task can read memory from
+     the inferior.  However, that function is written to prefer our
+     internal map, so it should not result in recursive calls in
+     practice.  */
+  inferior_ptid = get_base_thread_from_ravenscar_task (inferior_ptid);
+  return beneath ()->xfer_partial (object, annex, readbuf, writebuf,
+				   offset, len, xfered_len);
+}
+
 /* Observer on inferior_created: push ravenscar thread stratum if
    needed.  */
 
 static void