From patchwork Sat Apr 2 17:43:51 2016
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 11604
In-Reply-To: <20160402173308.GU6588@vapier.lan>
References: <20160402153421.GA28788@intel.com> <20160402173308.GU6588@vapier.lan>
Date: Sat, 2 Apr 2016 10:43:51 -0700
Subject: Re: [PATCH] Reduce number of mmap calls from __libc_memalign in ld.so
From: "H.J. Lu"
To: GNU C Library

On Sat, Apr 2, 2016 at 10:33 AM, Mike Frysinger wrote:
> On 02 Apr 2016 08:34, H.J. Lu wrote:
>> __libc_memalign in ld.so allocates one page at a time and tries to
>> optimize consecutive __libc_memalign calls by hoping that the next
>> mmap is after the current memory allocation.
>>
>> However, the kernel hands out mmap addresses in top-down order, so
>> this optimization in practice never happens, with the result that we
>> have more mmap calls and waste a bunch of space for each
>> __libc_memalign.
>>
>> This change makes __libc_memalign mmap one extra page.  Worst case,
>> the kernel never puts a backing page behind it, but best case it
>> allows __libc_memalign to operate much better.  For elf/tst-align
>> --direct, it reduces the number of mmap calls from 12 to 9.
>>
>> --- a/elf/dl-minimal.c
>> +++ b/elf/dl-minimal.c
>> @@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
>>  	    return NULL;
>>  	  nup = GLRO(dl_pagesize);
>>  	}
>> +      nup += GLRO(dl_pagesize);
>
> should this be in the else case ?
>
> also the comment above this code needs updating
> -mike

You are right.  Here is the updated patch.

From d56ca4f3269e47cba3e8d22ba8e48cd20d470757 Mon Sep 17 00:00:00 2001
From: "H.J. Lu"
Date: Sat, 2 Apr 2016 08:25:31 -0700
Subject: [PATCH] Reduce number of mmap calls from __libc_memalign in ld.so

__libc_memalign in ld.so allocates one page at a time and tries to
optimize consecutive __libc_memalign calls by hoping that the next
mmap is after the current memory allocation.

However, the kernel hands out mmap addresses in top-down order, so this
optimization in practice never happens, with the result that we have
more mmap calls and waste a bunch of space for each __libc_memalign.

This change makes __libc_memalign mmap one extra page.  Worst case, the
kernel never puts a backing page behind it, but best case it allows
__libc_memalign to operate much better.  For elf/tst-align --direct, it
reduces the number of mmap calls from 12 to 9.
	* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
---
 elf/dl-minimal.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 762e65b..8bffdc7 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -66,7 +66,8 @@ __libc_memalign (size_t align, size_t n)
   if (alloc_ptr + n >= alloc_end || n >= -(uintptr_t) alloc_ptr)
     {
-      /* Insufficient space left; allocate another page.  */
+      /* Insufficient space left; allocate another page plus one extra
+	 page to reduce number of mmap calls.  */
       caddr_t page;
       size_t nup = (n + GLRO(dl_pagesize) - 1) & ~(GLRO(dl_pagesize) - 1);
       if (__glibc_unlikely (nup == 0))
@@ -75,6 +76,8 @@ __libc_memalign (size_t align, size_t n)
 	  return NULL;
 	  nup = GLRO(dl_pagesize);
 	}
+      else
+	nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
 		     MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)
-- 
2.5.5