From patchwork Wed Aug 18 14:19:59 2021
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 44692
To: libc-alpha@sourceware.org
Subject: [PATCH v2 3/4] malloc: Move mmap logic to its own function
Date: Wed, 18 Aug 2021 11:19:59 -0300
Message-Id: <20210818142000.128752-4-adhemerval.zanella@linaro.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210818142000.128752-1-adhemerval.zanella@linaro.org>
References: <20210818142000.128752-1-adhemerval.zanella@linaro.org>
From: Adhemerval Zanella
Cc: Norbert Manthey, Guillaume Morin, Siddhesh Poyarekar

So it can be used with different pagesize and flags.
---
 malloc/malloc.c | 155 +++++++++++++++++++++++++-----------------------
 1 file changed, 82 insertions(+), 73 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 1a2c798a35..4bfcea286f 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2414,6 +2414,85 @@ do_check_malloc_state (mstate av)
    be extended or replaced.
  */
 
+static void *
+sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
+{
+  long int size;
+
+  /*
+    Round up size to nearest page.  For mmapped chunks, the overhead is one
+    SIZE_SZ unit larger than for normal chunks, because there is no
+    following chunk whose prev_size field could be used.
+
+    See the front_misalign handling below, for glibc there is no need for
+    further alignments unless we have have high alignment.
+   */
+  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
+    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
+  else
+    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
+
+  /* Don't try if size wraps around 0.  */
+  if ((unsigned long) (size) <= (unsigned long) (nb))
+    return MAP_FAILED;
+
+  char *mm = (char *) MMAP (0, size,
+                            mtag_mmap_flags | PROT_READ | PROT_WRITE,
+                            extra_flags);
+  if (mm == MAP_FAILED)
+    return mm;
+
+  sysmadvise_thp (mm, size);
+
+  /*
+    The offset to the start of the mmapped region is stored in the prev_size
+    field of the chunk.  This allows us to adjust returned start address to
+    meet alignment requirements here and in memalign(), and still be able to
+    compute proper address argument for later munmap in free() and realloc().
+   */
+
+  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
+
+  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
+    {
+      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
+         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
+         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
+      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
+      front_misalign = 0;
+    }
+  else
+    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
+
+  mchunkptr p;                    /* the allocated/returned chunk */
+
+  if (front_misalign > 0)
+    {
+      ptrdiff_t correction = MALLOC_ALIGNMENT - front_misalign;
+      p = (mchunkptr) (mm + correction);
+      set_prev_size (p, correction);
+      set_head (p, (size - correction) | IS_MMAPPED);
+    }
+  else
+    {
+      p = (mchunkptr) mm;
+      set_prev_size (p, 0);
+      set_head (p, size | IS_MMAPPED);
+    }
+
+  /* update statistics */
+  int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
+  atomic_max (&mp_.max_n_mmaps, new);
+
+  unsigned long sum;
+  sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
+  atomic_max (&mp_.max_mmapped_mem, sum);
+
+  check_chunk (av, p);
+
+  return chunk2mem (p);
+}
+
 static void *
 sysmalloc (INTERNAL_SIZE_T nb, mstate av)
 {
@@ -2451,81 +2530,11 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
       || ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
           && (mp_.n_mmaps < mp_.n_mmaps_max)))
     {
-      char *mm;           /* return value from mmap call*/
-
     try_mmap:
-      /*
-         Round up size to nearest page.  For mmapped chunks, the overhead
-         is one SIZE_SZ unit larger than for normal chunks, because there
-         is no following chunk whose prev_size field could be used.
-
-         See the front_misalign handling below, for glibc there is no
-         need for further alignments unless we have have high alignment.
-       */
-      if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
-        size = ALIGN_UP (nb + SIZE_SZ, pagesize);
-      else
-        size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
+      char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
+      if (mm != MAP_FAILED)
+        return mm;
       tried_mmap = true;
-
-      /* Don't try if size wraps around 0 */
-      if ((unsigned long) (size) > (unsigned long) (nb))
-        {
-          mm = (char *) (MMAP (0, size,
-                               mtag_mmap_flags | PROT_READ | PROT_WRITE, 0));
-
-          if (mm != MAP_FAILED)
-            {
-              sysmadvise_thp (mm, size);
-
-              /*
-                 The offset to the start of the mmapped region is stored
-                 in the prev_size field of the chunk. This allows us to adjust
-                 returned start address to meet alignment requirements here
-                 and in memalign(), and still be able to compute proper
-                 address argument for later munmap in free() and realloc().
-               */
-
-              if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
-                {
-                  /* For glibc, chunk2mem increases the address by
-                     CHUNK_HDR_SZ and MALLOC_ALIGN_MASK is
-                     CHUNK_HDR_SZ-1.  Each mmap'ed area is page
-                     aligned and therefore definitely
-                     MALLOC_ALIGN_MASK-aligned.  */
-                  assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
-                  front_misalign = 0;
-                }
-              else
-                front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
-              if (front_misalign > 0)
-                {
-                  correction = MALLOC_ALIGNMENT - front_misalign;
-                  p = (mchunkptr) (mm + correction);
-                  set_prev_size (p, correction);
-                  set_head (p, (size - correction) | IS_MMAPPED);
-                }
-              else
-                {
-                  p = (mchunkptr) mm;
-                  set_prev_size (p, 0);
-                  set_head (p, size | IS_MMAPPED);
-                }
-
-              /* update statistics */
-
-              int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
-              atomic_max (&mp_.max_n_mmaps, new);
-
-              unsigned long sum;
-              sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
-              atomic_max (&mp_.max_mmapped_mem, sum);
-
-              check_chunk (av, p);
-
-              return chunk2mem (p);
-            }
-        }
     }
 
   /* There are no usable arenas and mmap also failed.  */
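
A note on the new parameters (not part of the patch): the commit message says the helper exists "so it can be used with different pagesize and flags", presumably so later callers can request, for example, MAP_HUGETLB mappings with an explicit huge-page size. The standalone sketch below only mirrors that pattern outside glibc; sysmalloc_mmap, INTERNAL_SIZE_T and mstate are internal to malloc.c and cannot be called directly. The map_region helper, the 256 KiB request size and the 2 MiB huge-page size are illustrative assumptions, not glibc API.

/* Standalone sketch (not glibc code): map an anonymous region with a
   caller-chosen page size and extra mmap flags, mirroring the
   pagesize/extra_flags parameters that sysmalloc_mmap exposes.  */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

/* Round SIZE up to a multiple of PAGESIZE (a power of two), in the same
   spirit as the ALIGN_UP rounding before the mmap call.  */
static size_t
align_up (size_t size, size_t pagesize)
{
  return (size + pagesize - 1) & ~(pagesize - 1);
}

static void *
map_region (size_t nb, size_t pagesize, int extra_flags)
{
  size_t size = align_up (nb, pagesize);

  /* Don't try if the rounded size wrapped around 0.  */
  if (size < nb)
    return MAP_FAILED;

  return mmap (NULL, size, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
}

int
main (void)
{
  /* Default page size, no extra flags: the combination this patch keeps
     passing from sysmalloc (pagesize, 0).  */
  void *p = map_region (256 * 1024, (size_t) sysconf (_SC_PAGESIZE), 0);
  if (p == MAP_FAILED)
    perror ("mmap (default pages)");
  else
    printf ("default-page mapping at %p\n", p);

  /* Explicit 2 MiB pages with MAP_HUGETLB; this fails unless hugetlb pages
     are configured on the system.  */
  void *hp = map_region (256 * 1024, 2 * 1024 * 1024, MAP_HUGETLB);
  if (hp == MAP_FAILED)
    perror ("mmap (MAP_HUGETLB)");
  else
    printf ("huge-page mapping at %p\n", hp);

  return 0;
}

On a machine without reserved hugetlb pages the second mapping fails (typically ENOMEM), which is why a caller of such a helper would want to fall back to the default page size and flags when the huge-page attempt does not succeed.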