From patchwork Tue May 9 17:41:58 2023
X-Patchwork-Id: 68994
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH v3 7/9] nptl: Fix indentation
Date: Tue, 9 May 2023 13:41:58 -0400
Message-Id: <20230509174200.8136-8-malteskarupke@fastmail.fm>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509174200.8136-1-malteskarupke@fastmail.fm>
References: <20230509174200.8136-1-malteskarupke@fastmail.fm>
Reply-To: malteskarupke@fastmail.fm

From: Malte Skarupke

In my previous change I turned a nested loop into a simple
loop. I'm doing the resulting indentation changes in a separate
commit to make the diff on the previous commit easier to review.

Signed-off-by: Malte Skarupke
---
 nptl/pthread_cond_wait.c | 110 +++++++++++++++++++--------------------
 1 file changed, 55 insertions(+), 55 deletions(-)

diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 8b7db51148..bf05ac6b22 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -383,65 +383,65 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     }

-      while (1)
-        {
-          /* Now wait until a signal is available in our group or it is closed.
-             Acquire MO so that if we observe (signals == lowseq) after group
-             switching in __condvar_quiesce_and_switch_g1, we synchronize with that
-             store and will see the prior update of __g1_start done while switching
-             groups too.  */
-          unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g);
-          uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
-          unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-
-          if (seq < (g1_start >> 1))
-            {
-              /* If the group is closed already,
-                 then this waiter originally had enough extra signals to
-                 consume, up until the time its group was closed.  */
-              break;
-            }
-
-          /* If there is an available signal, don't block.
-             If __g1_start has advanced at all, then we must be in G1
-             by now, perhaps in the process of switching back to an older
-             G2, but in either case we're allowed to consume the available
-             signal and should not block anymore.  */
-          if ((int)(signals - lowseq) >= 2)
-            {
-              /* Try to grab a signal.  See above for MO.  (if we do another loop
-                 iteration we need to see the correct value of g1_start)  */
-              if (atomic_compare_exchange_weak_acquire (
-                    cond->__data.__g_signals + g,
+  while (1)
+    {
+      /* Now wait until a signal is available in our group or it is closed.
+         Acquire MO so that if we observe (signals == lowseq) after group
+         switching in __condvar_quiesce_and_switch_g1, we synchronize with that
+         store and will see the prior update of __g1_start done while switching
+         groups too.  */
+      unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g);
+      uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
+      unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+
+      if (seq < (g1_start >> 1))
+        {
+          /* If the group is closed already,
+             then this waiter originally had enough extra signals to
+             consume, up until the time its group was closed.  */
+          break;
+        }
+
+      /* If there is an available signal, don't block.
+         If __g1_start has advanced at all, then we must be in G1
+         by now, perhaps in the process of switching back to an older
+         G2, but in either case we're allowed to consume the available
+         signal and should not block anymore.  */
+      if ((int)(signals - lowseq) >= 2)
+        {
+          /* Try to grab a signal.  See above for MO.  (if we do another loop
+             iteration we need to see the correct value of g1_start)  */
+          if (atomic_compare_exchange_weak_acquire (
+                cond->__data.__g_signals + g,
                 &signals, signals - 2))
-                break;
-              else
-                continue;
-            }
-
-          // Now block.
-          struct _pthread_cleanup_buffer buffer;
-          struct _condvar_cleanup_buffer cbuffer;
-          cbuffer.wseq = wseq;
-          cbuffer.cond = cond;
-          cbuffer.mutex = mutex;
-          cbuffer.private = private;
-          __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer);
-
-          err = __futex_abstimed_wait_cancelable64 (
-            cond->__data.__g_signals + g, signals, clockid, abstime, private);
-
-          __pthread_cleanup_pop (&buffer, 0);
-
-          if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW))
-            {
-              /* If we timed out, we effectively cancel waiting.  */
-              __condvar_cancel_waiting (cond, seq, g, private);
-              result = err;
             break;
-            }
+          else
+            continue;
         }
+      // Now block.
+      struct _pthread_cleanup_buffer buffer;
+      struct _condvar_cleanup_buffer cbuffer;
+      cbuffer.wseq = wseq;
+      cbuffer.cond = cond;
+      cbuffer.mutex = mutex;
+      cbuffer.private = private;
+      __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer);
+
+      err = __futex_abstimed_wait_cancelable64 (
+        cond->__data.__g_signals + g, signals, clockid, abstime, private);
+
+      __pthread_cleanup_pop (&buffer, 0);
+
+      if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW))
+        {
+          /* If we timed out, we effectively cancel waiting.  */
+          __condvar_cancel_waiting (cond, seq, g, private);
+          result = err;
+          break;
+        }
+    }
+
   /* Confirm that we have been woken.  We do that before acquiring the
      mutex to allow for execution of pthread_cond_destroy while having
      acquired the mutex.  */
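
For readers following the series, the loop above reduces to a simple
shape: try to consume a signal from the group's futex word with a weak
CAS, otherwise block on that word and retry. The standalone sketch
below illustrates only that shape, assuming Linux futexes and C11
atomics; it is not the glibc implementation. The names
grab_signal_or_block, futex_wait and SIGNAL_STEP are invented for
illustration, and the timeout, cancellation and group-closed handling
(the g1_start/lowseq checks) from the patch are omitted.

/* Minimal sketch of "consume a signal or block"; not glibc code.  */
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SIGNAL_STEP 2   /* Mirrors the "signals - 2" step in the patch.  */

static long
futex_wait (atomic_uint *word, unsigned int expected)
{
  /* Sleep only while *word still equals EXPECTED; returns on wake-up,
     with EAGAIN if the word already changed, or spuriously.  */
  return syscall (SYS_futex, (unsigned int *) word, FUTEX_WAIT_PRIVATE,
                  expected, NULL, NULL, 0);
}

static void
grab_signal_or_block (atomic_uint *g_signals)
{
  while (1)
    {
      /* Acquire MO, as in the patch, so that consuming a signal also
         synchronizes with the signaler's releasing store.  */
      unsigned int signals = atomic_load_explicit (g_signals,
                                                   memory_order_acquire);
      if (signals >= SIGNAL_STEP)
        {
          /* Try to grab a signal.  On failure SIGNALS was reloaded, so
             looping again corresponds to the "continue" in the patch.  */
          if (atomic_compare_exchange_weak_explicit
              (g_signals, &signals, signals - SIGNAL_STEP,
               memory_order_acquire, memory_order_relaxed))
            break;
          continue;
        }
      /* No signal available: block until the futex word changes.  */
      futex_wait (g_signals, signals);
    }
}

The weak compare-exchange may fail spuriously; that is harmless here
because the loop simply retries with the freshly reloaded value.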