From patchwork Tue Jan 10 18:33:35 2023
X-Patchwork-Submitter: Tom Tromey
X-Patchwork-Id: 62900
From: Tom Tromey <tromey@adacore.com>
To: gdb-patches@sourceware.org
Cc: Tom Tromey <tromey@adacore.com>
Subject: [PATCH v2 1/4] Avoid submitting empty tasks in parallel_for_each
Date: Tue, 10 Jan 2023 11:33:35 -0700
Message-Id: <20230110183338.2623088-2-tromey@adacore.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230110183338.2623088-1-tromey@adacore.com>
References: <20230110183338.2623088-1-tromey@adacore.com>

I found that parallel_for_each would submit empty tasks to the thread
pool.  For example, this can happen if the number of tasks is smaller
than the number of available threads.  In the DWARF reader, this
resulted in the cooked index containing empty sub-indices.  This patch
arranges to instead shrink the result vector and process the trailing
entries in the calling thread.
---
(For illustration, a simplified standalone sketch of the empty-chunk
failure mode follows the patch.)

 gdb/unittests/parallel-for-selftests.c | 39 ++++++++++++++++++++++++++
 gdbsupport/parallel-for.h              | 30 ++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/gdb/unittests/parallel-for-selftests.c b/gdb/unittests/parallel-for-selftests.c
index 3162db18df1..15a095ae62b 100644
--- a/gdb/unittests/parallel-for-selftests.c
+++ b/gdb/unittests/parallel-for-selftests.c
@@ -149,6 +149,45 @@ TEST (int n_threads)
   SELF_CHECK (counter == NUMBER);
 
 #undef NUMBER
+
+  /* Check that if there are fewer tasks than threads, then we won't
+     end up with a null result.  */
+  std::vector<std::unique_ptr<int>> intresults;
+  std::atomic<bool> any_empty_tasks (false);
+
+  FOR_EACH (1, 0, 1,
+	    [&] (int start, int end)
+	    {
+	      if (start == end)
+		any_empty_tasks = true;
+	      return std::unique_ptr<int> (new int (end - start));
+	    });
+  SELF_CHECK (!any_empty_tasks);
+  SELF_CHECK (std::all_of (intresults.begin (),
+			   intresults.end (),
+			   [] (const std::unique_ptr<int> &entry)
+			   {
+			     return entry != nullptr;
+			   }));
+
+  /* The same but using the task size parameter.  */
+  intresults.clear ();
+  any_empty_tasks = false;
+  FOR_EACH (1, 0, 1,
+	    [&] (int start, int end)
+	    {
+	      if (start == end)
+		any_empty_tasks = true;
+	      return std::unique_ptr<int> (new int (end - start));
+	    },
+	    task_size_one);
+  SELF_CHECK (!any_empty_tasks);
+  SELF_CHECK (std::all_of (intresults.begin (),
+			   intresults.end (),
+			   [] (const std::unique_ptr<int> &entry)
+			   {
+			     return entry != nullptr;
+			   }));
 }
 
 #endif /* FOR_EACH */
diff --git a/gdbsupport/parallel-for.h b/gdbsupport/parallel-for.h
index b565676a0d0..de9ebb15746 100644
--- a/gdbsupport/parallel-for.h
+++ b/gdbsupport/parallel-for.h
@@ -70,6 +70,12 @@ struct par_for_accumulator
     return result;
   }
 
+  /* Resize the results to N.  */
+  void resize (size_t n)
+  {
+    m_futures.resize (n);
+  }
+
 private:
 
   /* A vector of futures coming from the tasks run in the
@@ -108,6 +114,12 @@ struct par_for_accumulator<void>
     }
   }
 
+  /* Resize the results to N.  */
+  void resize (size_t n)
+  {
+    m_futures.resize (n);
+  }
+
 private:
 
   std::vector<gdb::future<void>> m_futures;
@@ -232,6 +244,24 @@ parallel_for_each (unsigned n, RandomIt first, RandomIt last,
 	  end = j;
 	  remaining_size -= chunk_size;
 	}
+
+      /* This case means we don't have enough elements to really
+	 distribute them.  Rather than ever submit a task that does
+	 nothing, we short-circuit here.  */
+      if (first == end)
+	end = last;
+
+      if (end == last)
+	{
+	  /* We're about to dispatch the last batch of elements, which
+	     we normally process in the main thread.  So just truncate
+	     the result list here.  This avoids submitting empty tasks
+	     to the thread pool.  */
+	  count = i;
+	  results.resize (count);
+	  break;
+	}
+
       if (parallel_for_each_debug)
 	{
 	  debug_printf (_("Parallel for: elements on worker thread %i\t: %zu"),
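
For readers unfamiliar with the chunking logic, here is a simplified
standalone model of the failure mode the patch addresses.  This is
not the actual gdbsupport/parallel-for.h code -- the name
naive_chunks and the exact arithmetic are illustrative only -- it
just shows how dividing COUNT elements across N worker threads
necessarily yields empty [start, end) ranges whenever COUNT < N.

#include <cstdio>
#include <utility>
#include <vector>

/* Split COUNT elements into N_THREADS half-open ranges the way a
   naive divide-with-remainder scheme would.  When count < n_threads,
   some of the ranges are necessarily empty.  */
static std::vector<std::pair<int, int>>
naive_chunks (int count, int n_threads)
{
  std::vector<std::pair<int, int>> ranges;
  int chunk = count / n_threads;	/* 0 when count < n_threads.  */
  int rem = count % n_threads;
  int start = 0;
  for (int i = 0; i < n_threads; ++i)
    {
      /* Spread the remainder over the first REM chunks.  */
      int size = chunk + (i < rem ? 1 : 0);
      ranges.emplace_back (start, start + size);
      start += size;
    }
  return ranges;
}

int
main ()
{
  /* One element, four worker threads: three of the four ranges come
     out empty, so three do-nothing tasks would reach the thread
     pool.  The patch instead stops submitting once the next batch
     would cover the rest of the input, shrinks the result vector,
     and processes the trailing range in the calling thread.  */
  for (const auto &r : naive_chunks (1, 4))
    printf ("[%d, %d)%s\n", r.first, r.second,
	    r.first == r.second ? "  <- would be an empty task" : "");
  return 0;
}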