From patchwork Sat Dec 4 04:58:48 2021
X-Patchwork-Submitter: Rongwei Wang <rongwei.wang@linux.alibaba.com>
X-Patchwork-Id: 48489
X-Patchwork-Delegate: hjl.tools@gmail.com
To: libc-alpha@sourceware.org
Cc: xuyu@linux.alibaba.com, gavin.dg@linux.alibaba.com
Subject: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align
Date: Sat, 4 Dec 2021 12:58:48 +0800
Message-Id: <20211204045848.71105-2-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20211204045848.71105-1-rongwei.wang@linux.alibaba.com>
References: <20211204045848.71105-1-rongwei.wang@linux.alibaba.com>
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>

Now, ld.so always maps the LOAD segments aligned only to the base page
size (e.g. 4k on x86, or 4k, 16k and 64k on arm64).  This patch improves
that scheme: ld.so aligns the mapping address of the first LOAD segment
to p_align when p_align is greater than the current base page size.
This change makes it simple and practical to back code segments with
huge pages.
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 elf/dl-load.c         |  1 +
 elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++--
 include/link.h        |  3 +++
 3 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/elf/dl-load.c b/elf/dl-load.c
index e39980fb19..136cfe2fa8 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
           c->dataend = ph->p_vaddr + ph->p_filesz;
           c->allocend = ph->p_vaddr + ph->p_memsz;
           c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
+          l->l_load_align = ph->p_align;
 
           /* Determine whether there is a gap between the last segment
              and this one.  */
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index ac9f09ab4c..ae03236045 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -18,6 +18,47 @@
 
 #include <dl-load.h>
 
+static __always_inline void *
+_dl_map_segments_align (const struct loadcmd *c,
+                        ElfW(Addr) mappref, int fd, size_t alignment,
+                        const size_t maplength)
+{
+  unsigned long map_start, map_start_align, map_end;
+  unsigned long maplen = (maplength >= alignment) ?
+                         (maplength + alignment) : (2 * alignment);
+
+  /* Allocate enough space to ensure that an address aligned to
+     p_align is included.  */
+  map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
+                                   PROT_NONE,
+                                   MAP_ANONYMOUS | MAP_PRIVATE,
+                                   -1, 0);
+  if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) {
+      /* If reserving an aligned region failed, fall back to the
+         plain file mapping.  */
+      map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+                                       c->prot,
+                                       MAP_COPY|MAP_FILE,
+                                       fd, c->mapoff);
+
+      return (void *) map_start;
+  }
+
+  map_start_align = ALIGN_UP(map_start, alignment);
+  map_end = map_start_align + maplength;
+
+  /* Remember which part of the address space this object uses.  */
+  map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength,
+                                         c->prot,
+                                         MAP_COPY|MAP_FILE|MAP_FIXED,
+                                         fd, c->mapoff);
+  if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED))
+    return MAP_FAILED;
+  if (map_start_align > map_start)
+    __munmap((void *)map_start, map_start_align - map_start);
+  __munmap((void *)map_end, map_start + maplen - map_end);
+
+  return (void *) map_start_align;
+}
+
 /* This implementation assumes (as does the corresponding implementation
    of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
    are always laid out with all segments contiguous (or with gaps
@@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd,
                      c->mapstart & GLRO(dl_use_load_bias))
            - MAP_BASE_ADDR (l));
 
-      /* Remember which part of the address space this object uses.  */
-      l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+      /* During mapping, align the mapping address of the LOAD segments
+         according to the object's own p_align.  This helps the OS map
+         its code segment to huge pages.  */
+      if (l->l_load_align > GLRO(dl_pagesize)) {
+          l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c,
+                                            mappref, fd,
+                                            l->l_load_align, maplength);
+      } else {
+          /* Remember which part of the address space this object uses.  */
+          l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
                                             c->prot,
                                             MAP_COPY|MAP_FILE,
                                             fd, c->mapoff);
+      }
 
       if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
         return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
diff --git a/include/link.h b/include/link.h
index aea268439c..fc6ce29fab 100644
--- a/include/link.h
+++ b/include/link.h
@@ -298,6 +298,9 @@ struct link_map
 
     /* Thread-local storage related info.  */
 
+    /* Alignment requirement of the LOAD block.  */
+    size_t l_load_align;
+
     /* Start of the initialization image.  */
     void *l_tls_initimage;
     /* Size of the initialization image.  */