From patchwork Mon Aug 30 18:52:11 2021
X-Patchwork-Submitter: Adhemerval Zanella Netto
X-Patchwork-Id: 44815
From: Adhemerval Zanella Netto
To: libc-alpha@sourceware.org
Cc: Norbert Manthey, Guillaume Morin, Siddhesh Poyarekar
Subject: [PATCH v4 3/7] malloc: Move mmap logic to its own function
Date: Mon, 30 Aug 2021 15:52:11 -0300
Message-Id: <20210830185215.449572-4-adhemerval.zanella@linaro.org>
In-Reply-To: <20210830185215.449572-1-adhemerval.zanella@linaro.org>

So that it can be used with different page sizes and flags.
---
 malloc/malloc.c | 164 ++++++++++++++++++++++++++----------------------
 1 file changed, 88 insertions(+), 76 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index f65e448130..dc5ecb84c5 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2414,6 +2414,85 @@ do_check_malloc_state (mstate av)
    be extended or replaced.
  */
 
+static void *
+sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
+{
+  long int size;
+
+  /*
+    Round up size to nearest page.  For mmapped chunks, the overhead is one
+    SIZE_SZ unit larger than for normal chunks, because there is no
+    following chunk whose prev_size field could be used.
+
+    See the front_misalign handling below; for glibc there is no need for
+    further alignments unless we have high alignment.
+   */
+  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
+    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
+  else
+    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
+
+  /* Don't try if size wraps around 0.  */
+  if ((unsigned long) (size) <= (unsigned long) (nb))
+    return MAP_FAILED;
+
+  char *mm = (char *) MMAP (0, size,
+                            mtag_mmap_flags | PROT_READ | PROT_WRITE,
+                            extra_flags);
+  if (mm == MAP_FAILED)
+    return mm;
+
+  madvise_thp (mm, size);
+
+  /*
+    The offset to the start of the mmapped region is stored in the prev_size
+    field of the chunk.  This allows us to adjust returned start address to
+    meet alignment requirements here and in memalign(), and still be able to
+    compute proper address argument for later munmap in free() and realloc().
+   */
+
+  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
+
+  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
+    {
+      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
+         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
+         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
+      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
+      front_misalign = 0;
+    }
+  else
+    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
+
+  mchunkptr p;                    /* the allocated/returned chunk */
+
+  if (front_misalign > 0)
+    {
+      ptrdiff_t correction = MALLOC_ALIGNMENT - front_misalign;
+      p = (mchunkptr) (mm + correction);
+      set_prev_size (p, correction);
+      set_head (p, (size - correction) | IS_MMAPPED);
+    }
+  else
+    {
+      p = (mchunkptr) mm;
+      set_prev_size (p, 0);
+      set_head (p, size | IS_MMAPPED);
+    }
+
+  /* update statistics */
+  int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
+  atomic_max (&mp_.max_n_mmaps, new);
+
+  unsigned long sum;
+  sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
+  atomic_max (&mp_.max_mmapped_mem, sum);
+
+  check_chunk (av, p);
+
+  return chunk2mem (p);
+}
+
 static void *
 sysmalloc (INTERNAL_SIZE_T nb, mstate av)
 {
@@ -2451,81 +2530,10 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
       || ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
           && (mp_.n_mmaps < mp_.n_mmaps_max)))
     {
-      char *mm;           /* return value from mmap call*/
-
-    try_mmap:
-      /*
-         Round up size to nearest page.  For mmapped chunks, the overhead
-         is one SIZE_SZ unit larger than for normal chunks, because there
-         is no following chunk whose prev_size field could be used.
-
-         See the front_misalign handling below, for glibc there is no
-         need for further alignments unless we have have high alignment.
-       */
-      if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
-        size = ALIGN_UP (nb + SIZE_SZ, pagesize);
-      else
-        size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
+      char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
+      if (mm != MAP_FAILED)
+        return mm;
       tried_mmap = true;
-
-      /* Don't try if size wraps around 0 */
-      if ((unsigned long) (size) > (unsigned long) (nb))
-        {
-          mm = (char *) (MMAP (0, size,
-                               mtag_mmap_flags | PROT_READ | PROT_WRITE, 0));
-
-          if (mm != MAP_FAILED)
-            {
-              madvise_thp (mm, size);
-
-              /*
-                 The offset to the start of the mmapped region is stored
-                 in the prev_size field of the chunk. This allows us to adjust
-                 returned start address to meet alignment requirements here
-                 and in memalign(), and still be able to compute proper
-                 address argument for later munmap in free() and realloc().
-               */
-
-              if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
-                {
-                  /* For glibc, chunk2mem increases the address by
-                     CHUNK_HDR_SZ and MALLOC_ALIGN_MASK is
-                     CHUNK_HDR_SZ-1.  Each mmap'ed area is page
-                     aligned and therefore definitely
-                     MALLOC_ALIGN_MASK-aligned.  */
-                  assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
-                  front_misalign = 0;
-                }
-              else
-                front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
-              if (front_misalign > 0)
-                {
-                  correction = MALLOC_ALIGNMENT - front_misalign;
-                  p = (mchunkptr) (mm + correction);
-                  set_prev_size (p, correction);
-                  set_head (p, (size - correction) | IS_MMAPPED);
-                }
-              else
-                {
-                  p = (mchunkptr) mm;
-                  set_prev_size (p, 0);
-                  set_head (p, size | IS_MMAPPED);
-                }
-
-              /* update statistics */
-
-              int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
-              atomic_max (&mp_.max_n_mmaps, new);
-
-              unsigned long sum;
-              sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
-              atomic_max (&mp_.max_mmapped_mem, sum);
-
-              check_chunk (av, p);
-
-              return chunk2mem (p);
-            }
-        }
     }
 
   /* There are no usable arenas and mmap also failed.  */
@@ -2602,8 +2610,12 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
         }
     }
   else if (!tried_mmap)
-    /* We can at least try to use to mmap memory.  */
-    goto try_mmap;
+    {
+      /* We can at least try to mmap memory.  */
+      char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
+      if (mm != MAP_FAILED)
+        return mm;
+    }
     }
   else     /* av == main_arena */
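
The point of the refactor, per the commit message, is that both call sites above now pass (pagesize, 0) while a later caller can pass a different page size and extra mmap flags. The sketch below is illustrative only and is not part of the patch: it mirrors the shape of sysmalloc_mmap (round the request up to a caller-supplied page size, OR caller-supplied flags into the mmap call) to show how a hypothetical huge-page path could reuse the same helper. The mmap_rounded name, the 2 MiB page size, and the MAP_HUGETLB call are assumptions made for the example, not code from glibc.

/* Standalone sketch; compile with: gcc -O2 example.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Round X up to a multiple of A; A must be a power of two.  */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t) (a) - 1))

/* Round NB up to PAGESIZE and map it anonymously, or'ing EXTRA_FLAGS into
   the mmap flags; returns MAP_FAILED on error, like sysmalloc_mmap.  */
static void *
mmap_rounded (size_t nb, size_t pagesize, int extra_flags)
{
  size_t size = ALIGN_UP (nb, pagesize);
  if (size < nb)                     /* size wrapped around zero */
    return MAP_FAILED;
  return mmap (NULL, size, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
}

int
main (void)
{
  size_t nb = 300 * 1024;

  /* Default path: system page size and no extra flags, matching the
     sysmalloc_mmap (nb, pagesize, 0, av) call sites in the patch.  */
  void *p = mmap_rounded (nb, (size_t) sysconf (_SC_PAGESIZE), 0);
  printf ("default pages: %s\n", p == MAP_FAILED ? "failed" : "ok");

#ifdef MAP_HUGETLB
  /* Hypothetical huge-page path: 2 MiB pages plus MAP_HUGETLB.  This is
     expected to fail unless huge pages are reserved on the system.  */
  void *hp = mmap_rounded (nb, 2 * 1024 * 1024, MAP_HUGETLB);
  printf ("huge pages:    %s\n", hp == MAP_FAILED ? "unavailable" : "ok");
#endif
  return 0;
}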