[v2,2/2] malloc: Improve MAP_HUGETLB with glibc.malloc.hugetlb=2
Checks
Context                                          | Check   | Description
redhat-pt-bot/TryBot-apply_patch                 | success | Patch applied to master at the time it was sent
redhat-pt-bot/TryBot-32bit                       | success | Build for i686
linaro-tcwg-bot/tcwg_glibc_build--master-arm     | success | Testing passed
linaro-tcwg-bot/tcwg_glibc_check--master-arm     | success | Testing passed
linaro-tcwg-bot/tcwg_glibc_build--master-aarch64 | success | Testing passed
linaro-tcwg-bot/tcwg_glibc_check--master-aarch64 | success | Testing passed
Commit Message
Even with explicit large page support enabled, an allocation might still
use mmap without the hugepage bit set if the requested size is smaller
than the used large page but at or above mmap_threshold: for this direct
mmap case, MAP_HUGETLB is set only when the allocation size is at least
as large as the large page.

To force such allocations to use large pages, also tune mmap_threshold
(if it is not explicitly set by a tunable).  This makes those allocations
follow the sbrk path, which falls back to mmap (and the fallback tries
large pages before falling back to a default mmap).

Checked on x86_64-linux-gnu.
---
malloc/arena.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
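As a rough way to observe the behavior the commit message describes (this
is not part of the patch or the glibc test suite), one can run a small
program with GLIBC_TUNABLES=glibc.malloc.hugetlb=2 on a system with
pre-allocated 2 MiB huge pages (vm.nr_hugepages > 0) and check which page
size backs a mid-sized allocation.  The 1 MiB request size and the 2 MiB
large page size are assumptions for illustration:

#define _GNU_SOURCE
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
  /* 1 MiB: above the default mmap_threshold (128 KiB) but below a 2 MiB
     large page, i.e. the case this patch changes.  */
  size_t sz = 1024 * 1024;
  char *p = malloc (sz);
  if (p == NULL)
    return 1;
  memset (p, 0, sz);

  /* Find the mapping that contains P in /proc/self/smaps and report its
     KernelPageSize: 4 kB for regular pages, 2048 kB for 2 MiB
     MAP_HUGETLB pages.  */
  uintptr_t addr = (uintptr_t) p;
  FILE *f = fopen ("/proc/self/smaps", "r");
  if (f == NULL)
    return 1;
  char line[512];
  int in_target = 0;
  unsigned long kps = 0;
  while (fgets (line, sizeof line, f) != NULL)
    {
      uintptr_t start, end;
      if (sscanf (line, "%" SCNxPTR "-%" SCNxPTR " ", &start, &end) == 2)
        in_target = addr >= start && addr < end;
      else if (in_target
               && sscanf (line, "KernelPageSize: %lu kB", &kps) == 1)
        break;
    }
  fclose (f);

  printf ("1 MiB allocation at %p uses %lu kB pages\n", (void *) p, kps);
  free (p);
  return 0;
}

Without this patch the 1 MiB request takes the direct mmap path and should
report 4 kB pages; with it (and huge pages available), the raised
mmap_threshold routes the request through the sbrk path, whose mmap
fallback uses MAP_HUGETLB, so it should report 2048 kB pages.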
Comments
Adhemerval Zanella <adhemerval.zanella@linaro.org> writes:
> diff --git a/malloc/arena.c b/malloc/arena.c
> index a1a75e5a2b..c73f68890d 100644
> --- a/malloc/arena.c
> +++ b/malloc/arena.c
> @@ -312,10 +312,17 @@ ptmalloc_init (void)
> # endif
> TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
> TUNABLE_GET (hugetlb, size_t, TUNABLE_CALLBACK (set_hugetlb));
> +
> if (mp_.hp_pagesize > 0)
> - /* Force mmap for main arena instead of sbrk, so hugepages are explicitly
> - used. */
> - __always_fail_morecore = true;
> + {
> + /* Force mmap for main arena instead of sbrk, so MAP_HUGETLB is always
> + tried. Also tune the mmap threshold, so allocation smaller than the
> + large page will also try to use large pages by falling back
> + to sysmalloc_mmap_fallback on sysmalloc. */
> + if (!TUNABLE_IS_INITIALIZED (mmap_threshold))
> + do_set_mmap_threshold (mp_.hp_pagesize);
> + __always_fail_morecore = true;
> + }
> }
>
> /* Managing heaps and arenas (for concurrent threads) */
Ok.
LGTM
Reviewed-by: DJ Delorie <dj@redhat.com>
Patch
@@ -312,10 +312,17 @@ ptmalloc_init (void)
# endif
TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
TUNABLE_GET (hugetlb, size_t, TUNABLE_CALLBACK (set_hugetlb));
+
if (mp_.hp_pagesize > 0)
- /* Force mmap for main arena instead of sbrk, so hugepages are explicitly
- used. */
- __always_fail_morecore = true;
+ {
+ /* Force mmap for main arena instead of sbrk, so MAP_HUGETLB is always
+ tried. Also tune the mmap threshold, so allocation smaller than the
+ large page will also try to use large pages by falling back
+ to sysmalloc_mmap_fallback on sysmalloc. */
+ if (!TUNABLE_IS_INITIALIZED (mmap_threshold))
+ do_set_mmap_threshold (mp_.hp_pagesize);
+ __always_fail_morecore = true;
+ }
}
/* Managing heaps and arenas (for concurrent threads) */
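To summarize the control flow the hunk above relies on, here is a
simplified model of how a request ends up with MAP_HUGETLB when
glibc.malloc.hugetlb=2 is in effect.  It is a sketch, not the actual
glibc code: map_anon is a hypothetical helper, and the real sysmalloc
fallback additionally handles arena-top growth, alignment, and size
rounding.

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical helper for this sketch (not a glibc function): an
   anonymous private mapping with optional extra flags.  */
static void *
map_anon (size_t len, int extra_flags)
{
  return mmap (NULL, len, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
}

/* Roughly how a request of NB bytes is serviced with
   glibc.malloc.hugetlb=2, where HP_PAGESIZE is the configured large page
   size.  Simplified model, not the actual sysmalloc code.  */
static void *
service_request_sketch (size_t nb, size_t mmap_threshold, size_t hp_pagesize)
{
  if (nb >= mmap_threshold)
    /* Direct mmap path: MAP_HUGETLB is used only when the request covers
       at least one large page.  Before the change above, requests between
       the default threshold (128 KiB) and hp_pagesize landed here and got
       regular pages.  */
    return map_anon (nb, nb >= hp_pagesize ? MAP_HUGETLB : 0);

  /* sbrk path: morecore always fails with hugetlb=2, and the fallback
     tries a MAP_HUGETLB mapping before a plain one.  Raising
     mmap_threshold to hp_pagesize steers mid-sized requests here.  */
  void *p = map_anon (hp_pagesize, MAP_HUGETLB);
  if (p != MAP_FAILED)
    return p;
  return map_anon (nb, 0);
}

int
main (void)
{
  size_t one_mib = 1 << 20;
  size_t two_mib = 2 << 20;

  /* Before the change: default threshold, so a 1 MiB request takes the
     direct mmap path with regular pages.  */
  void *before = service_request_sketch (one_mib, 128 * 1024, two_mib);

  /* After the change: threshold raised to the large page size, so the
     same request goes through the fallback and tries MAP_HUGETLB.  */
  void *after = service_request_sketch (one_mib, two_mib, two_mib);

  return before == MAP_FAILED || after == MAP_FAILED;
}

Note that the TUNABLE_IS_INITIALIZED (mmap_threshold) guard means an
explicitly set glibc.malloc.mmap_threshold tunable still takes precedence;
only the default threshold is raised to the large page size.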