From patchwork Tue Dec 14 18:58:06 2021
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 48914
To: libc-alpha@sourceware.org, Siddhesh Poyarekar
Cc: Norbert Manthey, Guillaume Morin
From: Adhemerval Zanella
Subject: [PATCH v5 7/7] malloc: Enable huge page support on main arena
Date: Tue, 14 Dec 2021 15:58:06 -0300
Message-Id: <20211214185806.4109231-8-adhemerval.zanella@linaro.org>
In-Reply-To: <20211214185806.4109231-1-adhemerval.zanella@linaro.org>
References: <20211214185806.4109231-1-adhemerval.zanella@linaro.org>

This patch adds huge page support to main arena allocation, enabled with
the tunable glibc.malloc.hugetlb=2.  It essentially disables the
__glibc_morecore() sbrk() call (similar to what memory tagging does when
sbrk() cannot be used) and falls back to the default page size if the
huge page allocation fails.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie
---
 malloc/arena.c    |  4 ++++
 malloc/malloc.c   | 12 ++++++++++--
 malloc/morecore.c |  4 ----
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/malloc/arena.c b/malloc/arena.c
index c6d811ff3b..bd09a4fd9e 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -364,6 +364,10 @@ ptmalloc_init (void)
 # endif
   TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
   TUNABLE_GET (hugetlb, int32_t, TUNABLE_CALLBACK (set_hugetlb));
+  if (mp_.hp_pagesize > 0)
+    /* Force mmap for main arena instead of sbrk, so hugepages are explicitly
+       used.  */
+    __always_fail_morecore = true;
 #else
   if (__glibc_likely (_environ != NULL))
     {
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 9118306923..5c2bb153f5 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2740,8 +2740,16 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
          segregated mmap region.
        */

-      char *mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, pagesize,
-                                            MMAP_AS_MORECORE_SIZE, 0, av);
+      char *mbrk = MAP_FAILED;
+#if HAVE_TUNABLES
+      if (mp_.hp_pagesize > 0)
+        mbrk = sysmalloc_mmap_fallback (&size, nb, old_size,
+                                        mp_.hp_pagesize, mp_.hp_pagesize,
+                                        mp_.hp_flags, av);
+#endif
+      if (mbrk == MAP_FAILED)
+        mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, pagesize,
+                                        MMAP_AS_MORECORE_SIZE, 0, av);
       if (mbrk != MAP_FAILED)
         {
           /* We do not need, and cannot use, another sbrk call to find end */
diff --git a/malloc/morecore.c b/malloc/morecore.c
index 8168ef158c..004cd3ead4 100644
--- a/malloc/morecore.c
+++ b/malloc/morecore.c
@@ -15,9 +15,7 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */

-#if defined(SHARED) || defined(USE_MTAG)
 static bool __always_fail_morecore = false;
-#endif

 /* Allocate INCREMENT more bytes of data space,
    and return the start of data space, or NULL on errors.
@@ -25,10 +23,8 @@ static bool __always_fail_morecore = false;
 void *
 __glibc_morecore (ptrdiff_t increment)
 {
-#if defined(SHARED) || defined(USE_MTAG)
   if (__always_fail_morecore)
     return NULL;
-#endif

   void *result = (void *) __sbrk (increment);
   if (result == (void *) -1)
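
A note on exercising the new path (a minimal sketch, not part of the patch):
the illustrative program below assumes a kernel with hugetlb pages configured,
and the block count and size are arbitrary.  It makes many small allocations
so that the main arena itself has to grow (a single large request would be
served by mmap directly and bypass the arena).  Run against a patched glibc
with the tunable from this series, e.g.

  GLIBC_TUNABLES=glibc.malloc.hugetlb=2 ./test-main-arena

the arena should then be extended via the huge page mmap fallback instead of
sbrk(), which can be confirmed by the KernelPageSize of the arena mapping in
/proc/<pid>/smaps.

  /* Illustrative only: fill the main arena with small blocks and keep
     them resident so the backing mapping can be inspected.  */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int
  main (void)
  {
    enum { nblocks = 32 * 1024, block_size = 1024 };  /* ~32 MiB total.  */
    static void *blocks[nblocks];

    for (int i = 0; i < nblocks; i++)
      {
        blocks[i] = malloc (block_size);
        if (blocks[i] == NULL)
          {
            perror ("malloc");
            return EXIT_FAILURE;
          }
        /* Touch the memory so the backing pages are faulted in.  */
        memset (blocks[i], 0xaa, block_size);
      }

    printf ("pid=%d: inspect /proc/%d/smaps, then press enter to exit\n",
            (int) getpid (), (int) getpid ());
    getchar ();

    for (int i = 0; i < nblocks; i++)
      free (blocks[i]);
    return 0;
  }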