From patchwork Sun Dec 6 12:14:09 2015
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 9907
From: Florian Weimer
To: GNU C Library
Cc: Torvald Riegel
Subject: [PATCH v2] malloc: Fix list_lock/arena lock deadlock [BZ #19182]
Date: Sun, 6 Dec 2015 13:14:09 +0100
Message-ID: <56642691.7060806@redhat.com>
In-Reply-To: <56389C10.6060001@redhat.com>
References: <56389C10.6060001@redhat.com>

On 11/03/2015 12:35 PM, Florian Weimer wrote:
> This is just a minimal change.  The fork handler lock acquisition has to
> go away anyway if we make fork async-signal-safe (bug 4737).

The attached version is slightly more elaborate.  It tries to preserve
the original lock order, but it has a backup path for the case where
the preferred lock ordering is not possible.  It covers reused_arena as
well, which has a similar ordering violation.

Torvald, we need this patch for backporting, so a review of the
concurrency aspect would be extremely welcome. :)

Florian

2015-12-06  Florian Weimer

	[BZ #19182]
	* malloc/arena.c (list_lock): Document lock ordering requirements.
	(_int_new_arena): Release and re-acquire arena lock if list_lock
	cannot be acquired immediately.
	(reused_arena): Likewise.

diff --git a/malloc/arena.c b/malloc/arena.c
index 3dab7bb..834907c 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -69,9 +69,11 @@ extern int sanity_check_heap_info_alignment[(sizeof (heap_info)
 static __thread mstate thread_arena attribute_tls_model_ie;
 
 /* Arena free list.  list_lock protects the free_list variable below,
-   and the next_free and attached_threads members of the mstate
-   objects.  No other (malloc) locks must be taken while list_lock is
-   active, otherwise deadlocks may occur.  */
+   the next_free and attached_threads members of struct malloc_state
+   objects, and concurrent writes to the next member of these objects.
+   (Read access to the next member is synchronized via a barrier.)
+   When list_lock is acquired, no arena lock must be acquired, but it
+   is permitted to acquire arena locks after list_lock.  */
 
 static mutex_t list_lock = _LIBC_LOCK_INITIALIZER;
 static size_t narenas = 1;
@@ -790,7 +792,16 @@ _int_new_arena (size_t size)
   mutex_init (&a->mutex);
   (void) mutex_lock (&a->mutex);
 
-  (void) mutex_lock (&list_lock);
+  /* Put the arena in the global list.  If we cannot acquire
+     list_lock, we have to temporarily release the arena lock, to
+     avoid deadlocks.  NB: mutex_trylock returns 0 if the lock was
+     acquired.  */
+  bool reacquire_arena_lock = mutex_trylock (&list_lock);
+  if (reacquire_arena_lock)
+    {
+      mutex_unlock (&a->mutex);
+      (void) mutex_lock (&list_lock);
+    }
 
   detach_arena (replaced_arena);
 
@@ -801,6 +812,9 @@ _int_new_arena (size_t size)
 
   (void) mutex_unlock (&list_lock);
 
+  if (reacquire_arena_lock)
+    (void) mutex_lock (&a->mutex);
+
   return a;
 }
 
@@ -884,11 +898,22 @@ reused_arena (mstate avoid_arena)
 
 out:
   {
+    /* Update the arena thread attachment counters.  If we cannot
+       acquire list_lock, we have to temporarily release the arena
+       lock, to avoid deadlocks.  NB: mutex_trylock returns 0 if the
+       lock was acquired.  */
     mstate replaced_arena = thread_arena;
-    (void) mutex_lock (&list_lock);
+    bool reacquire_arena_lock = mutex_trylock (&list_lock);
+    if (reacquire_arena_lock)
+      {
+        (void) mutex_unlock (&result->mutex);
+        (void) mutex_lock (&list_lock);
+      }
     detach_arena (replaced_arena);
     ++result->attached_threads;
     (void) mutex_unlock (&list_lock);
+    if (reacquire_arena_lock)
+      (void) mutex_lock (&result->mutex);
   }
 
   LIBC_PROBE (memory_arena_reuse, 2, result, avoid_arena);
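
P.S. For readers picking this up from the archive: the idiom above is a
trylock-with-fallback scheme for taking a lock against the documented
order.  Below is a minimal, self-contained sketch of the same pattern
written against plain POSIX pthreads rather than glibc's internal
mutex_* wrappers.  The struct arena and attach_thread names here are
illustrative stand-ins for this sketch, not the actual glibc code.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

struct arena
{
  pthread_mutex_t mutex;
  long attached_threads;	/* Protected by list_lock.  */
};

/* Called with ARENA->mutex held.  The documented ordering is
   "list_lock first, arena locks second", so blocking on list_lock
   here could deadlock against a thread that already holds list_lock
   and wants ARENA->mutex.  Try the cheap path first; if it fails,
   drop the arena lock before blocking, and restore it afterwards.  */
static void
attach_thread (struct arena *arena)
{
  /* pthread_mutex_trylock returns 0 if the lock was acquired.  */
  bool reacquire_arena_lock = pthread_mutex_trylock (&list_lock) != 0;
  if (reacquire_arena_lock)
    {
      pthread_mutex_unlock (&arena->mutex);
      /* No arena lock is held now, so blocking here cannot
	 participate in a lock-ordering cycle.  */
      pthread_mutex_lock (&list_lock);
    }

  /* The counter is protected by list_lock, not the arena lock, so
     it is safe to update even on the fallback path.  */
  ++arena->attached_threads;

  pthread_mutex_unlock (&list_lock);

  /* Restore the caller's invariant that ARENA->mutex is held.  Other
     threads may have run in the window where it was released, so
     callers must not assume arena state is unchanged across this
     function on the fallback path.  */
  if (reacquire_arena_lock)
    pthread_mutex_lock (&arena->mutex);
}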