From patchwork Thu Jul 14 15:28:48 2016
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 13801
Date: Thu, 14 Jul 2016 17:28:48 +0200
From: fweimer@redhat.com (Florian Weimer)
To: libc-alpha@sourceware.org
Subject: [PATCH] malloc: Preserve arena free list for attached threads [BZ #20370]
Message-Id: <20160714152848.50714402408BD@oldenburg.str.redhat.com>

It is necessary to preserve the invariant that if an arena is on the
free list, it has thread attach count zero.  Otherwise, when
arena_thread_freeres sees the zero attach count, it will add the arena
to the free list, and without the invariant, an arena could get pushed
to the list twice, resulting in a cycle.

One possible execution trace looks like this:

  Thread 1 examines the free list and observes it as empty.
  Thread 2 exits and adds its arena to the free list
    (with attached_threads == 0).
  Thread 1 selects this arena in reused_arena (not from the free list).
  Thread 1 increments attached_threads and attaches itself.
    (The arena remains on the free list.)
  Thread 1 exits, decrements attached_threads, and adds the arena
    to the free list again.
The final step creates a cycle in the usual way (by overwriting the
next_free member with the former list head, while there is another
list item pointing to the arena structure).

tst-malloc-thread-exit exhibits this issue, but it was only visible
with a debugger because the incorrect fix in bug 19243 removed the
assert from get_free_list.

2016-07-14  Florian Weimer  <fweimer@redhat.com>

        * malloc/arena.c (get_free_list): Update comment.  Assert that
        arenas on the free list have no attached threads.
        (remove_from_free_list): New function.
        (reused_arena): Call it.

diff --git a/malloc/arena.c b/malloc/arena.c
index 229783f..4e16593 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -702,8 +702,7 @@ _int_new_arena (size_t size)
 }
 
 
-/* Remove an arena from free_list.  The arena may be in use because it
-   was attached concurrently to a thread by reused_arena below.  */
+/* Remove an arena from free_list.  */
 static mstate
 get_free_list (void)
 {
@@ -718,7 +717,8 @@ get_free_list (void)
       free_list = result->next_free;
 
       /* The arena will be attached to this thread.  */
-      ++result->attached_threads;
+      assert (result->attached_threads == 0);
+      result->attached_threads = 1;
 
       detach_arena (replaced_arena);
     }
@@ -735,6 +735,26 @@ get_free_list (void)
   return result;
 }
 
+/* Remove the arena from the free list (if it is present).
+   free_list_lock must have been acquired by the caller.  */
+static void
+remove_from_free_list (mstate arena)
+{
+  mstate *previous = &free_list;
+  for (mstate p = free_list; p != NULL; p = p->next_free)
+    {
+      assert (p->attached_threads == 0);
+      if (p == arena)
+        {
+          /* Remove the requested arena from the list.  */
+          *previous = p->next_free;
+          break;
+        }
+      else
+        previous = &p->next_free;
+    }
+}
+
 /* Lock and return an arena that can be reused for memory allocation.
    Avoid AVOID_ARENA as we have already failed to allocate memory in it
    and it is currently locked.
    */
@@ -782,14 +802,25 @@ reused_arena (mstate avoid_arena)
   (void) mutex_lock (&result->mutex);
 
  out:
-  /* Attach the arena to the current thread.  Note that we may have
-     selected an arena which was on free_list.  */
+  /* Attach the arena to the current thread.  */
   {
     /* Update the arena thread attachment counters.  */
     mstate replaced_arena = thread_arena;
     (void) mutex_lock (&free_list_lock);
     detach_arena (replaced_arena);
+
+    /* We may have picked up an arena on the free list.  We need to
+       preserve the invariant that no arena on the free list has a
+       positive attached_threads counter (otherwise,
+       arena_thread_freeres cannot use the counter to determine if the
+       arena needs to be put on the free list).  We unconditionally
+       remove the selected arena from the free list.  The caller of
+       reused_arena checked the free list and observed it to be empty,
+       so the list is very short.  */
+    remove_from_free_list (result);
+
     ++result->attached_threads;
+
     (void) mutex_unlock (&free_list_lock);
   }