From patchwork Sun Oct 29 17:35:34 2023
X-Patchwork-Submitter: Tom Tromey
X-Patchwork-Id: 78692
From: Tom Tromey
To: gdb-patches@sourceware.org
Cc: Tom Tromey
Subject: [PATCH 15/15] Back out some parallel_for_each features
Date: Sun, 29 Oct 2023 11:35:34 -0600
Message-ID: <20231029173839.471514-16-tom@tromey.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231029173839.471514-1-tom@tromey.com>
References: <20231029173839.471514-1-tom@tromey.com>
List-Id: Gdb-patches mailing list

Now that the DWARF reader does not use parallel_for_each, we can
remove some of the features that were added just for it: return
values and task sizing.

The thread_pool typed tasks feature could also be removed, but I
haven't done so here.  This one seemed less intrusive and perhaps
more likely to be needed at some point.
---
 gdb/unittests/parallel-for-selftests.c |  47 -----
 gdbsupport/parallel-for.h              | 234 ++++---------------------
 2 files changed, 30 insertions(+), 251 deletions(-)

diff --git a/gdb/unittests/parallel-for-selftests.c b/gdb/unittests/parallel-for-selftests.c
index 1ad7eaa701c..b67b114e210 100644
--- a/gdb/unittests/parallel-for-selftests.c
+++ b/gdb/unittests/parallel-for-selftests.c
@@ -120,34 +120,6 @@ TEST (int n_threads)
 	   });
   SELF_CHECK (counter == 0);
 
-  auto task_size_max_ = [] (int iter)
-    {
-      return (size_t)SIZE_MAX;
-    };
-  auto task_size_max = gdb::make_function_view (task_size_max_);
-
-  counter = 0;
-  FOR_EACH (1, 0, NUMBER,
-	    [&] (int start, int end)
-	    {
-	      counter += end - start;
-	    }, task_size_max);
-  SELF_CHECK (counter == NUMBER);
-
-  auto task_size_one_ = [] (int iter)
-    {
-      return (size_t)1;
-    };
-  auto task_size_one = gdb::make_function_view (task_size_one_);
-
-  counter = 0;
-  FOR_EACH (1, 0, NUMBER,
-	    [&] (int start, int end)
-	    {
-	      counter += end - start;
-	    }, task_size_one);
-  SELF_CHECK (counter == NUMBER);
-
 #undef NUMBER
 
   /* Check that if there are fewer tasks than threads, then we won't
@@ -169,25 +141,6 @@ TEST (int n_threads)
 		       {
 			 return entry != nullptr;
 		       }));
-
-  /* The same but using the task size parameter.  */
-  intresults.clear ();
-  any_empty_tasks = false;
-  FOR_EACH (1, 0, 1,
-	    [&] (int start, int end)
-	    {
-	      if (start == end)
-		any_empty_tasks = true;
-	      return gdb::make_unique<int> (end - start);
-	    },
-	    task_size_one);
-  SELF_CHECK (!any_empty_tasks);
-  SELF_CHECK (std::all_of (intresults.begin (),
-			   intresults.end (),
-			   [] (const std::unique_ptr<int> &entry)
-			   {
-			     return entry != nullptr;
-			   }));
 }
 
 #endif /* FOR_EACH */
diff --git a/gdbsupport/parallel-for.h b/gdbsupport/parallel-for.h
index b57f7ea97e1..f9b2e49c701 100644
--- a/gdbsupport/parallel-for.h
+++ b/gdbsupport/parallel-for.h
@@ -29,104 +29,6 @@
 namespace gdb
 {
 
-namespace detail
-{
-
-/* This is a helper class that is used to accumulate results for
-   parallel_for.  There is a specialization for 'void', below.  */
-template<typename T>
-struct par_for_accumulator
-{
-public:
-
-  explicit par_for_accumulator (size_t n_threads)
-    : m_futures (n_threads)
-  {
-  }
-
-  /* The result type that is accumulated.  */
-  typedef std::vector<T> result_type;
-
-  /* Post the Ith task to a background thread, and store a future for
-     later.  */
-  void post (size_t i, std::function<T ()> task)
-  {
-    m_futures[i]
-      = gdb::thread_pool::g_thread_pool->post_task (std::move (task));
-  }
-
-  /* Invoke TASK in the current thread, then compute all the results
-     from all background tasks and put them into a result vector,
-     which is returned.  */
-  result_type finish (gdb::function_view<T ()> task)
-  {
-    result_type result (m_futures.size () + 1);
-
-    result.back () = task ();
-
-    for (size_t i = 0; i < m_futures.size (); ++i)
-      result[i] = m_futures[i].get ();
-
-    return result;
-  }
-
-  /* Resize the results to N.  */
-  void resize (size_t n)
-  {
-    m_futures.resize (n);
-  }
-
-private:
-
-  /* A vector of futures coming from the tasks run in the
-     background.  */
-  std::vector<gdb::future<T>> m_futures;
-};
-
-/* See the generic template.  */
-template<>
-struct par_for_accumulator<void>
-{
-public:
-
-  explicit par_for_accumulator (size_t n_threads)
-    : m_futures (n_threads)
-  {
-  }
-
-  /* This specialization does not compute results.  */
-  typedef void result_type;
-
-  void post (size_t i, std::function<void ()> task)
-  {
-    m_futures[i]
-      = gdb::thread_pool::g_thread_pool->post_task (std::move (task));
-  }
-
-  result_type finish (gdb::function_view<void ()> task)
-  {
-    task ();
-
-    for (auto &future : m_futures)
-      {
-	/* Use 'get' and not 'wait', to propagate any exception.  */
-	future.get ();
-      }
-  }
-
-  /* Resize the results to N.  */
-  void resize (size_t n)
-  {
-    m_futures.resize (n);
-  }
-
-private:
-
-  std::vector<gdb::future<void>> m_futures;
-};
-
-}
-
 /* A very simple "parallel for".  This splits the range of iterators
    into subranges, and then passes each subrange to the callback.  The
    work may or may not be done in separate threads.
@@ -137,23 +39,13 @@ struct par_for_accumulator
 
    The parameter N says how batching ought to be done -- there will be
    at least N elements processed per thread.  Setting N to 0 is not
-   allowed.
-
-   If the function returns a non-void type, then a vector of the
-   results is returned.  The size of the resulting vector depends on
-   the number of threads that were used.  */
+   allowed.  */
 
 template<class RandomIt, class RangeFunction>
-typename gdb::detail::par_for_accumulator<
-    typename gdb::invoke_result<RangeFunction, RandomIt, RandomIt>::type
-  >::result_type
+void
 parallel_for_each (unsigned n, RandomIt first, RandomIt last,
-		   RangeFunction callback,
-		   gdb::function_view<size_t (RandomIt)> task_size = nullptr)
+		   RangeFunction callback)
 {
-  using result_type
-    = typename gdb::invoke_result<RangeFunction, RandomIt, RandomIt>::type;
-
   /* If enabled, print debug info about how the work is distributed across
      the threads.  */
   const bool parallel_for_each_debug = false;
@@ -163,87 +55,37 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
   size_t n_elements = last - first;
   size_t elts_per_thread = 0;
   size_t elts_left_over = 0;
-  size_t total_size = 0;
-  size_t size_per_thread = 0;
-  size_t max_element_size = n_elements == 0 ? 1 : SIZE_MAX / n_elements;
 
   if (n_threads > 1)
     {
-      if (task_size != nullptr)
-	{
-	  gdb_assert (n == 1);
-	  for (RandomIt i = first; i != last; ++i)
-	    {
-	      size_t element_size = task_size (i);
-	      gdb_assert (element_size > 0);
-	      if (element_size > max_element_size)
-		/* We could start scaling here, but that doesn't seem to be
-		   worth the effort.  */
-		element_size = max_element_size;
-	      size_t prev_total_size = total_size;
-	      total_size += element_size;
-	      /* Check for overflow.  */
-	      gdb_assert (prev_total_size < total_size);
-	    }
-	  size_per_thread = total_size / n_threads;
-	}
-      else
-	{
-	  /* Require that there should be at least N elements in a
-	     thread.  */
-	  gdb_assert (n > 0);
-	  if (n_elements / n_threads < n)
-	    n_threads = std::max (n_elements / n, (size_t) 1);
-	  elts_per_thread = n_elements / n_threads;
-	  elts_left_over = n_elements % n_threads;
-	  /* n_elements == n_threads * elts_per_thread + elts_left_over.  */
-	}
+      /* Require that there should be at least N elements in a
+	 thread.  */
+      gdb_assert (n > 0);
+      if (n_elements / n_threads < n)
+	n_threads = std::max (n_elements / n, (size_t) 1);
+      elts_per_thread = n_elements / n_threads;
+      elts_left_over = n_elements % n_threads;
+      /* n_elements == n_threads * elts_per_thread + elts_left_over.  */
     }
 
   size_t count = n_threads == 0 ? 0 : n_threads - 1;
-  gdb::detail::par_for_accumulator<result_type> results (count);
+  std::vector<gdb::future<void>> results;
 
   if (parallel_for_each_debug)
     {
      debug_printf (_("Parallel for: n_elements: %zu\n"), n_elements);
-      if (task_size != nullptr)
-	{
-	  debug_printf (_("Parallel for: total_size: %zu\n"), total_size);
-	  debug_printf (_("Parallel for: size_per_thread: %zu\n"), size_per_thread);
-	}
-      else
-	{
-	  debug_printf (_("Parallel for: minimum elements per thread: %u\n"), n);
-	  debug_printf (_("Parallel for: elts_per_thread: %zu\n"), elts_per_thread);
-	}
+      debug_printf (_("Parallel for: minimum elements per thread: %u\n"), n);
+      debug_printf (_("Parallel for: elts_per_thread: %zu\n"), elts_per_thread);
    }
 
-  size_t remaining_size = total_size;
   for (int i = 0; i < count; ++i)
    {
      RandomIt end;
-      size_t chunk_size = 0;
-      if (task_size == nullptr)
-	{
-	  end = first + elts_per_thread;
-	  if (i < elts_left_over)
-	    /* Distribute the leftovers over the worker threads, to avoid having
-	       to handle all of them in a single thread.  */
-	    end++;
-	}
-      else
-	{
-	  RandomIt j;
-	  for (j = first; j < last && chunk_size < size_per_thread; ++j)
-	    {
-	      size_t element_size = task_size (j);
-	      if (element_size > max_element_size)
-		element_size = max_element_size;
-	      chunk_size += element_size;
-	    }
-	  end = j;
-	  remaining_size -= chunk_size;
-	}
+      end = first + elts_per_thread;
+      if (i < elts_left_over)
+	/* Distribute the leftovers over the worker threads, to avoid having
+	   to handle all of them in a single thread.  */
+	end++;
 
      /* This case means we don't have enough elements to really
	 distribute them.  Rather than ever submit a task that does
@@ -258,7 +100,6 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
	     the result list here.  This avoids submitting empty tasks to
	     the thread pool.  */
	  count = i;
-	  results.resize (count);
	  break;
	}
 
@@ -266,12 +107,12 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
	{
	  debug_printf (_("Parallel for: elements on worker thread %i\t: %zu"),
			i, (size_t)(end - first));
-	  if (task_size != nullptr)
-	    debug_printf (_("\t(size: %zu)"), chunk_size);
	  debug_printf (_("\n"));
	}
-      results.post (i, [=] ()
-	{ return callback (first, end); });
+      results.push_back (gdb::thread_pool::g_thread_pool->post_task ([=] ()
+	{
+	  return callback (first, end);
+	}));
 
      first = end;
    }
@@ -279,8 +120,6 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
     if (parallel_for_each_debug)
       {
	debug_printf (_("Parallel for: elements on worker thread %i\t: 0"), i);
-	if (task_size != nullptr)
-	  debug_printf (_("\t(size: 0)"));
	debug_printf (_("\n"));
      }
 
@@ -289,14 +128,12 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
      debug_printf (_("Parallel for: elements on main thread\t\t: %zu"),
		    (size_t)(last - first));
-      if (task_size != nullptr)
-	debug_printf (_("\t(size: %zu)"), remaining_size);
      debug_printf (_("\n"));
    }
 
-  return results.finish ([=] ()
-    {
-      return callback (first, last);
-    });
+  callback (first, last);
+
+  for (auto &fut : results)
+    fut.get ();
 }
 
 /* A sequential drop-in replacement of parallel_for_each.  This can be useful
@@ -304,22 +141,11 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
    multi-threading in a fine-grained way.  */
 
 template<class RandomIt, class RangeFunction>
-typename gdb::detail::par_for_accumulator<
-    typename gdb::invoke_result<RangeFunction, RandomIt, RandomIt>::type
-  >::result_type
+void
 sequential_for_each (unsigned n, RandomIt first, RandomIt last,
-		     RangeFunction callback,
-		     gdb::function_view<size_t (RandomIt)> task_size = nullptr)
+		     RangeFunction callback)
 {
-  using result_type = typename gdb::invoke_result<RangeFunction, RandomIt, RandomIt>::type;
-
-  gdb::detail::par_for_accumulator<result_type> results (0);
-
-  /* Process all the remaining elements in the main thread.  */
-  return results.finish ([=] ()
-    {
-      return callback (first, last);
-    });
+  callback (first, last);
 }
 
 }