From: Samuel Thibault
To: libc-alpha@sourceware.org, Florian Weimer
Subject: malloc_set_state and heap content
Date: Wed, 6 Jul 2016 21:35:47 +0200
Message-ID: <20160706193547.GD8550@var.home>
In-Reply-To: <20160320164214.GA21096@var.home>

Hello,

In 4cf6c72fd2a482e7499c29162349810029632c3f ('malloc: Rewrite dumped
heap for compatibility in __malloc_set_state'), __malloc_set_state was
reimplemented, using the following loop to detect the first chunk of
the heap:

  /* Find the chunk with the lowest address with the heap.  */
  mchunkptr chunk = NULL;
  {
    size_t *candidate = (size_t *) ms->sbrk_base;
    size_t *end = (size_t *) (ms->sbrk_base + ms->sbrked_mem_bytes);
    while (candidate < end)
      if (*candidate != 0)
        {
          chunk = mem2chunk ((void *) (candidate + 1));
          break;
        }
      else
        ++candidate;

That assumes that the beginning of the heap is zeroed.  In
malloc/malloc.c, however, one can read:

  /*
     Skip over some bytes to arrive at an aligned position.
     We don't need to specially mark these wasted front bytes.
     They will never be accessed anyway because
     prev_inuse of av->top (and any chunk created from its start)
     is always true after initialization.
   */

On Linux that space happens to be zero by luck, but with other kernels
that may not be true (it is not on the Hurd).  Also, only the 'size'
field of the first chunk is initialized, by

  set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);

which leaves prev_size uninitialized, so the scan above may stop on it
instead of on the size field.

So I'd say we need the attached patch, don't we?
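To make the failure mode concrete, here is a standalone sketch (not
glibc code; the 16-word fake heap, the two words of front padding and
the 64-byte chunk size are made-up values) that mimics the quoted scan
over a simulated sbrk area:

  /* Standalone illustration of the scan in __malloc_set_state over a
     fake sbrk area.  Not glibc code; layout and constants are made up.  */
  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>

  #define HEAP_WORDS 16
  #define PREV_INUSE 0x1

  /* Same idea as glibc's mem2chunk: the chunk header starts two words
     before the user pointer.  */
  static size_t *
  mem2chunk_words (size_t *mem)
  {
    return mem - 2;
  }

  /* The quoted scan: treat the first non-zero word as the size field
     of the first chunk.  */
  static size_t *
  find_first_chunk (size_t *base, size_t nwords)
  {
    size_t *candidate = base;
    size_t *end = base + nwords;
    while (candidate < end)
      if (*candidate != 0)
        return mem2chunk_words (candidate + 1);
      else
        ++candidate;
    return NULL;
  }

  int
  main (void)
  {
    size_t heap[HEAP_WORDS];

    /* Two words of alignment padding, then the first chunk:
       heap[2] is prev_size, heap[3] is size.  */
    memset (heap, 0, sizeof heap);
    heap[3] = 64 | PREV_INUSE;

    printf ("zeroed front bytes:  chunk at word %td (expected 2)\n",
            find_first_chunk (heap, HEAP_WORDS) - heap);

    /* Now pretend the kernel left non-zero garbage in the skipped
       front bytes, as can happen on the Hurd.  */
    heap[1] = 0xdeadbeef;
    printf ("garbage front bytes: chunk at word %td (wrong)\n",
            find_first_chunk (heap, HEAP_WORDS) - heap);

    return 0;
  }

With the front bytes and prev_size zeroed, the first non-zero word
really is the size field, so the computed chunk pointer lands on the
chunk header; with garbage in front, the scan stops too early and hands
back a bogus pointer into the padding.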
Samuel

diff --git a/ChangeLog b/ChangeLog
index 690012c..343accd 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2016-07-06  Samuel Thibault
+
+	* malloc/malloc.c (sysmalloc): Zero memory between brk and the heap
+	top, and set prev_size of the first chunk to zero, so malloc_set_state
+	can properly find the first chunk.
+
 2016-07-06  Stefan Liebler
 
 	* sysdeps/s390/linkmap.h (struct link_map_machine):
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 1f5f166..beb97e9 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2600,13 +2600,12 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
             {
               /*
                  Skip over some bytes to arrive at an aligned position.
-                 We don't need to specially mark these wasted front bytes.
-                 They will never be accessed anyway because
-                 prev_inuse of av->top (and any chunk created from its start)
-                 is always true after initialization.
+                 We zero them for malloc_set_state to properly find the
+                 first chunk.
                */
 
               correction = MALLOC_ALIGNMENT - front_misalign;
+              memset (brk, 0, correction);
               aligned_brk += correction;
             }
@@ -2661,13 +2660,13 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
             {
               /*
                  Skip over some bytes to arrive at an aligned position.
-                 We don't need to specially mark these wasted front bytes.
-                 They will never be accessed anyway because
-                 prev_inuse of av->top (and any chunk created from its start)
-                 is always true after initialization.
+                 We zero them for malloc_set_state to properly find
+                 the first chunk.
                */
 
-              aligned_brk += MALLOC_ALIGNMENT - front_misalign;
+              correction = MALLOC_ALIGNMENT - front_misalign;
+              memset (brk, 0, correction);
+              aligned_brk += correction;
             }
         }
@@ -2682,6 +2681,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
       if (snd_brk != (char *) (MORECORE_FAILURE))
         {
           av->top = (mchunkptr) aligned_brk;
+          av->top->prev_size = 0;
           set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
           av->system_mem += correction;
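For reference, the chunk header the last hunk touches looks roughly
like this (a sketch, not glibc's declaration; glibc spells the type
INTERNAL_SIZE_T, which normally resolves to size_t).  set_head only
stores the size word, which is why prev_size has to be cleared
explicitly:

  #include <stddef.h>

  /* Sketch of the first two header fields of a malloc chunk; not the
     real glibc definition.  */
  struct chunk_header_sketch
  {
    size_t prev_size;   /* size of the previous chunk, if it is free */
    size_t size;        /* this chunk's size, plus PREV_INUSE and other flag bits */
  };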