From patchwork Thu Nov  4 13:11:30 2021
X-Patchwork-Submitter: Adhemerval Zanella Netto
X-Patchwork-Id: 47048
To: libc-alpha@sourceware.org, Siddhesh Poyarekar, DJ Delorie
Subject: [PATCH v2] elf: Use the minimal malloc on tunables_strdup
Date: Thu, 4 Nov 2021 10:11:30 -0300
Message-Id: <20211104131130.801849-1-adhemerval.zanella@linaro.org>
From: Adhemerval Zanella Netto
Reply-To: Adhemerval Zanella

The rtld_malloc functions are moved to their own file so they can be
used in csu code.  The functions are also renamed to __minimal_*
(since they are now used not only in loader code).

Using __minimal_malloc in tunables_strdup() avoids potential issues
with sbrk() calls while processing the tunables (I see sporadic
elf/tst-dso-ordering9 failures on powerpc64le, with different tests
failing depending on ASLR).

Using __minimal_malloc instead of a plain mmap also optimizes memory
allocation in both the static and dynamic cases, since it reuses any
unused space, either in the last page of the data segment (avoiding an
mmap() call) or left over from the previous mmap() call.

Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.

Reviewed-by: Siddhesh Poyarekar
---
Changes from v1:
 * Fixed memory allocation failure message.
 * Fixed dl-minimal-malloc.h.
---
 elf/Makefile                        |   7 +-
 elf/dl-minimal-malloc.c             | 112 +++++++++++++++++++++++++
 elf/dl-minimal.c                    | 122 ++--------------------------
 elf/dl-tunables.c                   |   7 +-
 sysdeps/generic/dl-minimal-malloc.h |  28 +++++++
 5 files changed, 157 insertions(+), 119 deletions(-)
 create mode 100644 elf/dl-minimal-malloc.c
 create mode 100644 sysdeps/generic/dl-minimal-malloc.h

diff --git a/elf/Makefile b/elf/Makefile
index 7e4f0c3121..7245309516 100644
--- a/elf/Makefile
+++ b/elf/Makefile
@@ -36,7 +36,7 @@ dl-routines = $(addprefix dl-,load lookup object reloc deps \
                                   exception sort-maps lookup-direct \
                                   call-libc-early-init write \
                                   thread_gscope_wait tls_init_tp \
-                                  debug-symbols)
+                                  debug-symbols minimal-malloc)
 ifeq (yes,$(use-ldconfig))
 dl-routines += dl-cache
 endif
@@ -75,6 +75,11 @@ CFLAGS-dl-runtime.c += -fexceptions -fasynchronous-unwind-tables
 CFLAGS-dl-lookup.c += -fexceptions -fasynchronous-unwind-tables
 CFLAGS-dl-iteratephdr.c += $(uses-callbacks)
 
+# Called during static library initialization, so turn stack-protection
+# off for non-shared builds.
+CFLAGS-dl-minimal-malloc.o = $(no-stack-protector)
+CFLAGS-dl-minimal-malloc.op = $(no-stack-protector)
+
 # On targets without __builtin_memset, rtld.c uses a hand-coded loop
 # in _dl_start.  Make sure this isn't turned into a call to regular memset.
 ifeq (yes,$(have-loop-to-function))
diff --git a/elf/dl-minimal-malloc.c b/elf/dl-minimal-malloc.c
new file mode 100644
index 0000000000..939b5271ca
--- /dev/null
+++ b/elf/dl-minimal-malloc.c
@@ -0,0 +1,112 @@
+/* Minimal malloc implementation for dynamic linker and static
+   initialization.
+   Copyright (C) 1995-2021 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include
+#include
+#include
+#include
+
+static void *alloc_ptr, *alloc_end, *alloc_last_block;
+
+/* Allocate an aligned memory block.  */
+void *
+__minimal_malloc (size_t n)
+{
+  if (alloc_end == 0)
+    {
+      /* Consume any unused space in the last page of our data segment.  */
+      extern int _end attribute_hidden;
+      alloc_ptr = &_end;
+      alloc_end = (void *) 0 + (((alloc_ptr - (void *) 0)
+                                 + GLRO(dl_pagesize) - 1)
+                                & ~(GLRO(dl_pagesize) - 1));
+    }
+
+  /* Make sure the allocation pointer is ideally aligned.  */
+  alloc_ptr = (void *) 0 + (((alloc_ptr - (void *) 0) + MALLOC_ALIGNMENT - 1)
+                            & ~(MALLOC_ALIGNMENT - 1));
+
+  if (alloc_ptr + n >= alloc_end || n >= -(uintptr_t) alloc_ptr)
+    {
+      /* Insufficient space left; allocate another page plus one extra
+         page to reduce number of mmap calls.  */
+      caddr_t page;
+      size_t nup = (n + GLRO(dl_pagesize) - 1) & ~(GLRO(dl_pagesize) - 1);
+      if (__glibc_unlikely (nup == 0 && n != 0))
+        return NULL;
+      nup += GLRO(dl_pagesize);
+      page = __mmap (0, nup, PROT_READ|PROT_WRITE,
+                     MAP_ANON|MAP_PRIVATE, -1, 0);
+      if (page == MAP_FAILED)
+        return NULL;
+      if (page != alloc_end)
+        alloc_ptr = page;
+      alloc_end = page + nup;
+    }
+
+  alloc_last_block = (void *) alloc_ptr;
+  alloc_ptr += n;
+  return alloc_last_block;
+}
+
+/* We use this function occasionally since the real implementation may
+   be optimized when it can assume the memory it returns already is
+   set to NUL.  */
+void *
+__minimal_calloc (size_t nmemb, size_t size)
+{
+  /* New memory from the trivial malloc above is always already cleared.
+     (We make sure that's true in the rare occasion it might not be,
+     by clearing memory in free, below.)  */
+  size_t bytes = nmemb * size;
+
+#define HALF_SIZE_T (((size_t) 1) << (8 * sizeof (size_t) / 2))
+  if (__builtin_expect ((nmemb | size) >= HALF_SIZE_T, 0)
+      && size != 0 && bytes / size != nmemb)
+    return NULL;
+
+  return malloc (bytes);
+}
+
+/* This will rarely be called.  */
+void
+__minimal_free (void *ptr)
+{
+  /* We can free only the last block allocated.  */
+  if (ptr == alloc_last_block)
+    {
+      /* Since this is rare, we clear the freed block here
+         so that calloc can presume malloc returns cleared memory.  */
+      memset (alloc_last_block, '\0', alloc_ptr - alloc_last_block);
+      alloc_ptr = alloc_last_block;
+    }
+}
+
+/* This is only called with the most recent block returned by malloc.  */
+void *
+__minimal_realloc (void *ptr, size_t n)
+{
+  if (ptr == NULL)
+    return malloc (n);
+  assert (ptr == alloc_last_block);
+  size_t old_size = alloc_ptr - alloc_last_block;
+  alloc_ptr = alloc_last_block;
+  void *new = malloc (n);
+  return new != ptr ? memcpy (new, ptr, old_size) : new;
+}
diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 4e7f11aeab..152192d451 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -16,23 +16,14 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */
 
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
+#include
 #include
 #include
 #include
 #include
 #include <_itoa.h>
-#include
+#include
 
-#include
 
 /* The rtld startup code calls __rtld_malloc_init_stubs after the
    first self-relocation to adjust the pointers to the minimal
@@ -44,19 +35,13 @@ __typeof (free) *__rtld_free attribute_relro;
 __typeof (malloc) *__rtld_malloc attribute_relro;
 __typeof (realloc) *__rtld_realloc attribute_relro;
 
-/* Defined below.  */
-static __typeof (calloc) rtld_calloc;
-static __typeof (free) rtld_free;
-static __typeof (malloc) rtld_malloc;
-static __typeof (realloc) rtld_realloc;
-
 void
 __rtld_malloc_init_stubs (void)
 {
-  __rtld_calloc = &rtld_calloc;
-  __rtld_free = &rtld_free;
-  __rtld_malloc = &rtld_malloc;
-  __rtld_realloc = &rtld_realloc;
+  __rtld_calloc = &__minimal_calloc;
+  __rtld_free = &__minimal_free;
+  __rtld_malloc = &__minimal_malloc;
+  __rtld_realloc = &__minimal_realloc;
 }
 
 bool
@@ -64,7 +49,7 @@ __rtld_malloc_is_complete (void)
 {
   /* The caller assumes that there is an active malloc.  */
   assert (__rtld_malloc != NULL);
-  return __rtld_malloc != &rtld_malloc;
+  return __rtld_malloc != &__minimal_malloc;
 }
 
 /* Lookup NAME at VERSION in the scope of MATCH.  */
@@ -115,99 +100,6 @@ __rtld_malloc_init_real (struct link_map *main_map)
   __rtld_realloc = new_realloc;
 }
 
-/* Minimal malloc allocator for used during initial link.  After the
-   initial link, a full malloc implementation is interposed, either
-   the one in libc, or a different one supplied by the user through
-   interposition.  */
-
-static void *alloc_ptr, *alloc_end, *alloc_last_block;
-
-/* Allocate an aligned memory block.  */
-static void *
-rtld_malloc (size_t n)
-{
-  if (alloc_end == 0)
-    {
-      /* Consume any unused space in the last page of our data segment.  */
-      extern int _end attribute_hidden;
-      alloc_ptr = &_end;
-      alloc_end = (void *) 0 + (((alloc_ptr - (void *) 0)
-                                 + GLRO(dl_pagesize) - 1)
-                                & ~(GLRO(dl_pagesize) - 1));
-    }
-
-  /* Make sure the allocation pointer is ideally aligned.  */
-  alloc_ptr = (void *) 0 + (((alloc_ptr - (void *) 0) + MALLOC_ALIGNMENT - 1)
-                            & ~(MALLOC_ALIGNMENT - 1));
-
-  if (alloc_ptr + n >= alloc_end || n >= -(uintptr_t) alloc_ptr)
-    {
-      /* Insufficient space left; allocate another page plus one extra
-         page to reduce number of mmap calls.  */
-      caddr_t page;
-      size_t nup = (n + GLRO(dl_pagesize) - 1) & ~(GLRO(dl_pagesize) - 1);
-      if (__glibc_unlikely (nup == 0 && n != 0))
-        return NULL;
-      nup += GLRO(dl_pagesize);
-      page = __mmap (0, nup, PROT_READ|PROT_WRITE,
-                     MAP_ANON|MAP_PRIVATE, -1, 0);
-      if (page == MAP_FAILED)
-        return NULL;
-      if (page != alloc_end)
-        alloc_ptr = page;
-      alloc_end = page + nup;
-    }
-
-  alloc_last_block = (void *) alloc_ptr;
-  alloc_ptr += n;
-  return alloc_last_block;
-}
-
-/* We use this function occasionally since the real implementation may
-   be optimized when it can assume the memory it returns already is
-   set to NUL.  */
-static void *
-rtld_calloc (size_t nmemb, size_t size)
-{
-  /* New memory from the trivial malloc above is always already cleared.
-     (We make sure that's true in the rare occasion it might not be,
-     by clearing memory in free, below.)  */
-  size_t bytes = nmemb * size;
-
-#define HALF_SIZE_T (((size_t) 1) << (8 * sizeof (size_t) / 2))
-  if (__builtin_expect ((nmemb | size) >= HALF_SIZE_T, 0)
-      && size != 0 && bytes / size != nmemb)
-    return NULL;
-
-  return malloc (bytes);
-}
-
-/* This will rarely be called.  */
-void
-rtld_free (void *ptr)
-{
-  /* We can free only the last block allocated.  */
-  if (ptr == alloc_last_block)
-    {
-      /* Since this is rare, we clear the freed block here
-         so that calloc can presume malloc returns cleared memory.  */
-      memset (alloc_last_block, '\0', alloc_ptr - alloc_last_block);
-      alloc_ptr = alloc_last_block;
-    }
-}
-
-/* This is only called with the most recent block returned by malloc.  */
-void *
-rtld_realloc (void *ptr, size_t n)
-{
-  if (ptr == NULL)
-    return malloc (n);
-  assert (ptr == alloc_last_block);
-  size_t old_size = alloc_ptr - alloc_last_block;
-  alloc_ptr = alloc_last_block;
-  void *new = malloc (n);
-  return new != ptr ? memcpy (new, ptr, old_size) : new;
-}
 
 /* Avoid signal frobnication in setjmp/longjmp.  Keeps things smaller.  */
 
diff --git a/elf/dl-tunables.c b/elf/dl-tunables.c
index 1666736bc1..497e948f1c 100644
--- a/elf/dl-tunables.c
+++ b/elf/dl-tunables.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 
 #define TUNABLES_INTERNAL 1
 #include "dl-tunables.h"
@@ -48,13 +49,13 @@ tunables_strdup (const char *in)
   size_t i = 0;
 
   while (in[i++] != '\0');
-  char *out = __sbrk (i);
+  char *out = __minimal_malloc (i + 1);
 
   /* For most of the tunables code, we ignore user errors.  However,
      this is a system error - and running out of memory at program
      startup should be reported, so we do.  */
-  if (out == (void *)-1)
-    _dl_fatal_printf ("sbrk() failure while processing tunables\n");
+  if (out == NULL)
+    _dl_fatal_printf ("failed to allocate memory to process tunables\n");
 
   while (i-- > 0)
     out[i] = in[i];
diff --git a/sysdeps/generic/dl-minimal-malloc.h b/sysdeps/generic/dl-minimal-malloc.h
new file mode 100644
index 0000000000..7f50e52df5
--- /dev/null
+++ b/sysdeps/generic/dl-minimal-malloc.h
@@ -0,0 +1,28 @@
+/* Minimal malloc implementation for dynamic linker and static
+   initialization.
+   Copyright (C) 2021 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef _DL_MINIMAL_MALLOC_H
+#define _DL_MINIMAL_MALLOC_H
+
+extern void *__minimal_malloc (size_t n) attribute_hidden;
+extern void *__minimal_calloc (size_t nmemb, size_t size) attribute_hidden;
+extern void __minimal_free (void *ptr) attribute_hidden;
+extern void *__minimal_realloc (void *ptr, size_t n) attribute_hidden;
+
+#endif
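
As a rough illustration of the scheme the commit message describes, the
sketch below mimics the bump allocation __minimal_malloc performs and a
tunables_strdup-style copy on top of it: the allocation pointer is rounded
up to the malloc alignment, requests are served from the tail of the
current region, and a new anonymous mmap (rounded up to whole pages, plus
one spare page) happens only when the region is exhausted.  This is a
standalone sketch, not glibc code: the reuse of the data segment's last
page via &_end is omitted, and names such as sketch_malloc, sketch_strdup
and SKETCH_ALIGNMENT are made up for the example.

#define _DEFAULT_SOURCE 1
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SKETCH_ALIGNMENT (2 * sizeof (size_t))  /* stand-in for MALLOC_ALIGNMENT */

static char *alloc_ptr;   /* next free byte */
static char *alloc_end;   /* end of the current region */

static void *
sketch_malloc (size_t n)
{
  size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);

  /* Round the allocation pointer up to the alignment boundary.  */
  uintptr_t aligned = ((uintptr_t) alloc_ptr + SKETCH_ALIGNMENT - 1)
                      & ~(uintptr_t) (SKETCH_ALIGNMENT - 1);
  alloc_ptr = (char *) aligned;

  if (alloc_ptr + n > alloc_end)
    {
      /* Out of space: map enough whole pages for N, plus one extra page
         so later small requests do not immediately mmap again.  */
      size_t nup = ((n + pagesize - 1) & ~(pagesize - 1)) + pagesize;
      void *page = mmap (NULL, nup, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
      if (page == MAP_FAILED)
        return NULL;
      alloc_ptr = page;
      alloc_end = (char *) page + nup;
    }

  void *block = alloc_ptr;
  alloc_ptr += n;
  return block;
}

/* Roughly what tunables_strdup does after the patch: measure the string,
   allocate from the minimal allocator, copy it over.  */
static char *
sketch_strdup (const char *in)
{
  size_t len = strlen (in) + 1;
  char *out = sketch_malloc (len);
  if (out == NULL)
    return NULL;
  return memcpy (out, in, len);
}

int
main (void)
{
  char *copy = sketch_strdup ("glibc.malloc.check=3");
  printf ("%s\n", copy);
  return 0;
}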
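
Similarly, the small demo below (again only a sketch, with a made-up
helper name) shows why the HALF_SIZE_T screen used by __minimal_calloc is
enough: when both operands are below 2^(bits-of-size_t / 2) the product
cannot wrap, so the division-based check only has to run in the rare
large case.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define HALF_SIZE_T (((size_t) 1) << (8 * sizeof (size_t) / 2))

/* Returns nonzero if nmemb * size would overflow size_t.  */
static int
mul_would_overflow (size_t nmemb, size_t size)
{
  size_t bytes = nmemb * size;
  return (nmemb | size) >= HALF_SIZE_T
         && size != 0 && bytes / size != nmemb;
}

int
main (void)
{
  assert (!mul_would_overflow (16, 32));          /* small operands: no overflow */
  assert (mul_would_overflow ((size_t) -1, 2));   /* product wraps around */
  puts ("overflow screen behaves as expected");
  return 0;
}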