Reduce number of mmap calls from __libc_memalign in ld.so
Commit Message
On Sat, Apr 2, 2016 at 10:43 AM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Sat, Apr 2, 2016 at 10:33 AM, Mike Frysinger <vapier@gentoo.org> wrote:
>> On 02 Apr 2016 08:34, H.J. Lu wrote:
>>> __libc_memalign in ld.so allocates one page at a time and tries to
>>> optimize consecutive __libc_memalign calls by hoping that the next
>>> mmap is after the current memory allocation.
>>>
>>> However, the kernel hands out mmap addresses in top-down order, so
>>> this optimization in practice never happens, with the result that we
>>> have more mmap calls and waste a bunch of space for each __libc_memalign.
>>>
>>> This change makes __libc_memalign mmap one extra page. In the worst
>>> case, the kernel never puts a backing page behind it, but in the best
>>> case it allows __libc_memalign to operate much better. For
>>> elf/tst-align --direct, it reduces the number of mmap calls from 12 to 9.
>>>
>>> --- a/elf/dl-minimal.c
>>> +++ b/elf/dl-minimal.c
>>> @@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
>>>          return NULL;
>>>        nup = GLRO(dl_pagesize);
>>>      }
>>> +  nup += GLRO(dl_pagesize);
>>
>> should this be in the else case ?
>>
>> also the comment above this code needs updating
>> -mike
>
> You are right. Here is the updated patch.
>
We can just always increment the number of pages by one.
Comments
"H.J. Lu" <hjl.tools@gmail.com> writes:
> + if (__glibc_unlikely (nup == 0 && n))
Please also fix the implicit boolean coercion.
Andreas.
On Sat, Apr 2, 2016 at 11:55 PM, Andreas Schwab <schwab@linux-m68k.org> wrote:
> "H.J. Lu" <hjl.tools@gmail.com> writes:
>
>> + if (__glibc_unlikely (nup == 0 && n))
>
> Please also fix the implicit boolean coercion.
>
> Andreas.
>
Like this? OK for master?
On Sun, Apr 3, 2016 at 6:42 AM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Sat, Apr 2, 2016 at 11:55 PM, Andreas Schwab <schwab@linux-m68k.org> wrote:
>> "H.J. Lu" <hjl.tools@gmail.com> writes:
>>
>>> + if (__glibc_unlikely (nup == 0 && n))
>>
>> Please also fix the implicit boolean coercion.
>>
>> Andreas.
>>
>
> Like this? OK for master?
>
I am checking it now.
From 4aad224c5dc8c8e8496868cc1bb00d587aa4f1ed Mon Sep 17 00:00:00 2001
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Sat, 2 Apr 2016 08:25:31 -0700
Subject: [PATCH] Reduce number of mmap calls from __libc_memalign in ld.so
__libc_memalign in ld.so allocates one page at a time and tries to
optimize consecutive __libc_memalign calls by hoping that the next
mmap is after the current memory allocation.
However, the kernel hands out mmap addresses in top-down order, so
this optimization in practice never happens, with the result that we
have more mmap calls and waste a bunch of space for each __libc_memalign.
This change makes __libc_memalign mmap one extra page. In the worst
case, the kernel never puts a backing page behind it, but in the best
case it allows __libc_memalign to operate much better. For
elf/tst-align --direct, it reduces the number of mmap calls from 12 to 9.
* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
---
elf/dl-minimal.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -66,15 +66,13 @@ __libc_memalign (size_t align, size_t n)
   if (alloc_ptr + n >= alloc_end || n >= -(uintptr_t) alloc_ptr)
     {
-      /* Insufficient space left; allocate another page.  */
+      /* Insufficient space left; allocate another page plus one extra
+         page to reduce number of mmap calls.  */
       caddr_t page;
       size_t nup = (n + GLRO(dl_pagesize) - 1) & ~(GLRO(dl_pagesize) - 1);
-      if (__glibc_unlikely (nup == 0))
-        {
-          if (n)
-            return NULL;
-          nup = GLRO(dl_pagesize);
-        }
+      if (__glibc_unlikely (nup == 0 && n != 0))
+        return NULL;
+      nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
                      MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)
--
2.5.5