[v2,2/2] malloc: Improve MAP_HUGETLB with glibc.malloc.hugetlb=2

Message ID 20231123172915.893408-3-adhemerval.zanella@linaro.org
State Committed
Commit bc6d79f4ae99206e7ec7d6a8c5abf26cdefc8bff
Series Improve MAP_HUGETLB with glibc.malloc.hugetlb=2

Checks

Context Check Description
redhat-pt-bot/TryBot-apply_patch success Patch applied to master at the time it was sent
redhat-pt-bot/TryBot-32bit success Build for i686
linaro-tcwg-bot/tcwg_glibc_build--master-arm success Testing passed
linaro-tcwg-bot/tcwg_glibc_check--master-arm success Testing passed
linaro-tcwg-bot/tcwg_glibc_build--master-aarch64 success Testing passed
linaro-tcwg-bot/tcwg_glibc_check--master-aarch64 success Testing passed

Commit Message

Adhemerval Zanella Netto Nov. 23, 2023, 5:29 p.m. UTC
Even with explicit large page support enabled, an allocation might use mmap
without the hugepage bit set if the requested size is smaller than
mmap_threshold.  In that case, when mmap is issued, MAP_HUGETLB is set only
if the allocation size is larger than the large page size in use.

To force such allocations to use large pages, also tune mmap_threshold
(if it is not explicitly set by a tunable).  This forces the allocation to
follow the sbrk path, which will fall back to mmap (which will try large
pages before falling back to the default mmap).

Checked on x86_64-linux-gnu.
---
 malloc/arena.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
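
As an illustration (not part of the patch), the following minimal test
program sketches the behaviour described in the commit message, assuming a
2 MiB system huge page size, a pool of reserved huge pages, and glibc's
default 128 KiB mmap_threshold.  A 256 KiB request is above the old
threshold but below the huge page size, so before this change it was served
by a plain mmap without MAP_HUGETLB; with glibc.malloc.hugetlb=2 and this
patch, the raised mmap_threshold keeps the request on the main-arena path,
which extends the arena through the MAP_HUGETLB mmap fallback.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  /* 256 KiB: larger than the old 128 KiB default mmap_threshold but
     smaller than a 2 MiB huge page.  */
  size_t sz = 256 * 1024;
  char *p = malloc (sz);
  if (p == NULL)
    {
      perror ("malloc");
      return 1;
    }
  /* Touch the memory so the kernel actually backs it.  */
  memset (p, 0xa5, sz);
  printf ("allocated %zu bytes at %p; inspect /proc/%d/smaps\n",
          sz, (void *) p, (int) getpid ());
  /* Keep the process alive so the mappings can be inspected.  */
  getchar ();
  free (p);
  return 0;
}

One possible way to check the effect is to run the program as
GLIBC_TUNABLES=glibc.malloc.hugetlb=2 ./a.out after reserving huge pages
(for example via /proc/sys/vm/nr_hugepages) and look at the KernelPageSize
of the heap mapping in /proc/<pid>/smaps.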
  

Comments

DJ Delorie Nov. 28, 2023, 9:43 p.m. UTC | #1
Adhemerval Zanella <adhemerval.zanella@linaro.org> writes:
> diff --git a/malloc/arena.c b/malloc/arena.c
> index a1a75e5a2b..c73f68890d 100644
> --- a/malloc/arena.c
> +++ b/malloc/arena.c
> @@ -312,10 +312,17 @@ ptmalloc_init (void)
>  # endif
>    TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
>    TUNABLE_GET (hugetlb, size_t, TUNABLE_CALLBACK (set_hugetlb));
> +
>    if (mp_.hp_pagesize > 0)
> -    /* Force mmap for main arena instead of sbrk, so hugepages are explicitly
> -       used.  */
> -    __always_fail_morecore = true;
> +    {
> +      /* Force mmap for main arena instead of sbrk, so MAP_HUGETLB is always
> +         tried.  Also tune the mmap threshold, so allocation smaller than the
> +	 large page will also try to use large pages by falling back
> +	 to sysmalloc_mmap_fallback on sysmalloc.  */
> +      if (!TUNABLE_IS_INITIALIZED (mmap_threshold))
> +	do_set_mmap_threshold (mp_.hp_pagesize);
> +      __always_fail_morecore = true;
> +    }
>  }
>  
>  /* Managing heaps and arenas (for concurrent threads) */

Ok.

LGTM
Reviewed-by: DJ Delorie <dj@redhat.com>
  

Patch

diff --git a/malloc/arena.c b/malloc/arena.c
index a1a75e5a2b..c73f68890d 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -312,10 +312,17 @@  ptmalloc_init (void)
 # endif
   TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
   TUNABLE_GET (hugetlb, size_t, TUNABLE_CALLBACK (set_hugetlb));
+
   if (mp_.hp_pagesize > 0)
-    /* Force mmap for main arena instead of sbrk, so hugepages are explicitly
-       used.  */
-    __always_fail_morecore = true;
+    {
+      /* Force mmap for main arena instead of sbrk, so MAP_HUGETLB is always
+         tried.  Also tune the mmap threshold, so allocation smaller than the
+	 large page will also try to use large pages by falling back
+	 to sysmalloc_mmap_fallback on sysmalloc.  */
+      if (!TUNABLE_IS_INITIALIZED (mmap_threshold))
+	do_set_mmap_threshold (mp_.hp_pagesize);
+      __always_fail_morecore = true;
+    }
 }
 
 /* Managing heaps and arenas (for concurrent threads) */
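
A note on the guard (following the commit message's "if it is not explicitly
set by a tunable"): because do_set_mmap_threshold is only called when
TUNABLE_IS_INITIALIZED (mmap_threshold) is false, an explicit setting such as
GLIBC_TUNABLES=glibc.malloc.hugetlb=2:glibc.malloc.mmap_threshold=131072
still takes precedence; only configurations that leave the threshold at its
default get it raised to the huge page size.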