From patchwork Mon Aug 30 18:52:15 2021
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 44820
To: libc-alpha@sourceware.org
Subject: [PATCH v4 7/7] malloc: Enable huge page support on main arena
Date: Mon, 30 Aug 2021 15:52:15 -0300
Message-Id: <20210830185215.449572-8-adhemerval.zanella@linaro.org>
In-Reply-To: <20210830185215.449572-1-adhemerval.zanella@linaro.org>
References: <20210830185215.449572-1-adhemerval.zanella@linaro.org>
From: Adhemerval Zanella
Cc: Norbert Manthey, Guillaume Morin, Siddhesh Poyarekar

This patch adds huge page support to the main arena allocation, enabled with the tunable glibc.malloc.hugetlb=2.
The patch essentially disables the __glibc_morecore() sbrk() call (similar to what memory tagging does when sbrk() is not supported) and falls back to the default page size if the huge page allocation fails.

Checked on x86_64-linux-gnu.
---
 malloc/arena.c    |  4 ++++
 malloc/malloc.c   | 12 ++++++++++--
 malloc/morecore.c |  2 --
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/malloc/arena.c b/malloc/arena.c
index 81dc2f93d1..0d38cad9b8 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -357,6 +357,10 @@ ptmalloc_init (void)
 # endif
   TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
   TUNABLE_GET (hugetlb, int32_t, TUNABLE_CALLBACK (set_hugetlb));
+  if (mp_.hp_pagesize > 0)
+    /* Force mmap() for main arena instead of sbrk(), so hugepages are
+       explicitly used.  */
+    __always_fail_morecore = true;
 #else
   if (__glibc_likely (_environ != NULL))
     {
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 3421a0b5da..616aaf9e59 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2742,8 +2742,16 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
              segregated mmap region.  */

-          char *mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, pagesize,
-                                                MMAP_AS_MORECORE_SIZE, 0, av);
+          char *mbrk = MAP_FAILED;
+#if HAVE_TUNABLES
+          if (mp_.hp_pagesize > 0)
+            mbrk = sysmalloc_mmap_fallback (&size, nb, old_size,
+                                            mp_.hp_pagesize, mp_.hp_pagesize,
+                                            mp_.hp_flags, av);
+#endif
+          if (mbrk == MAP_FAILED)
+            mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, pagesize,
+                                            MMAP_AS_MORECORE_SIZE, 0, av);
           if (mbrk != MAP_FAILED)
             {
               /* We do not need, and cannot use, another sbrk call to find end */
diff --git a/malloc/morecore.c b/malloc/morecore.c
index 8168ef158c..1ace85a37d 100644
--- a/malloc/morecore.c
+++ b/malloc/morecore.c
@@ -15,9 +15,7 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */

-#if defined(SHARED) || defined(USE_MTAG)
 static bool __always_fail_morecore = false;
-#endif

 /* Allocate INCREMENT more bytes of data space, and return the start of
    data space, or NULL on errors.