From patchwork Thu Dec 9 18:04:32 2021
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 48718
Date: Thu, 9 Dec 2021 10:04:32 -0800
From: "H.J. Lu"
To: Rongwei Wang
Cc: Florian Weimer, xuyu@linux.alibaba.com, GNU C Library,
 gavin.dg@linux.alibaba.com
Subject: [PATCH v3] elf: Properly align PT_LOAD segments [BZ #28676]
References: <20211209055719.56245-1-rongwei.wang@linux.alibaba.com>
 <20211209055719.56245-2-rongwei.wang@linux.alibaba.com>
List-Id: Libc-alpha mailing list

On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J.
Lu wrote:
> On Wed, Dec 8, 2021 at 9:57 PM Rongwei Wang wrote:
> >
> > Now, ld.so always maps the LOAD segments aligned to the base
> > page size (e.g. 4k on x86, or 4k, 16k and 64k on arm64).  This
> > is a bug, and has been reported:
> >
> > https://sourceware.org/bugzilla/show_bug.cgi?id=28676
> >
> > This patch fixes it.  With this patch, ld.so aligns the mapping
> > address of the first LOAD segment to p_align when p_align is
> > greater than the current base page size.
> >
> > A testcase:
> > main.c:
> >
> > #include <stdio.h>
> >
> > extern void dso_test(void);
> > int main(void)
> > {
> >   dso_test();
> >   getchar();
> >
> >   return 0;
> > }
> >
> > load.c, used to generate libload.so:
> >
> > #include <stdio.h>
> >
> > int foo __attribute__((aligned(0x200000))) = 1;
> > void dso_test(void)
> > {
> >   printf("dso test\n");
> >   printf("foo: %p\n", &foo);
> > }
> >
> > The steps:
> > $ gcc -O2 -fPIC -c -o load.o load.c
> > $ gcc -shared -Wl,-z,max-page-size=0x200000 -o libload.so load.o
> > $ gcc -no-pie -Wl,-z,max-page-size=0x200000 -O2 -o dso main.c libload.so -Wl,-R,.
> >
> > Before the fix:
> > $ ./dso
> > dso test
> > foo: 0xffff88ae2000
> >
> > After the fix:
> > $ ./dso
> > dso test
> > foo: 0xffff9e000000
> >
> > This fix also makes it simple for code segments to use huge
> > pages.
>
> Please include a testcase, like
>
> https://gitlab.com/x86-glibc/glibc/-/commits/users/hjl/pr28676/master
>
> > Signed-off-by: Xu Yu
> > Signed-off-by: Rongwei Wang
> > ---
> >  elf/dl-load.c         |  1 +
> >  elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
> >  include/link.h        |  3 +++
> >  3 files changed, 62 insertions(+), 5 deletions(-)
> >
> > diff --git a/elf/dl-load.c b/elf/dl-load.c
> > index e39980fb19..136cfe2fa8 100644
> > --- a/elf/dl-load.c
> > +++ b/elf/dl-load.c
> > @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
> >            c->dataend = ph->p_vaddr + ph->p_filesz;
> >            c->allocend = ph->p_vaddr + ph->p_memsz;
> >            c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
> > +          l->l_load_align = ph->p_align;
>
> Can you add an alignment field to
>
> /* This structure describes one PT_LOAD command.
>    Its details have been expanded out and converted.  */
> struct loadcmd
> {
>   ElfW(Addr) mapstart, mapend, dataend, allocend;
>   ElfW(Off) mapoff;
>   int prot;                             /* PROT_* bits.  */
> };
>
> instead?
>

Hi,

I updated your patch.  Please take a look.

H.J.
---
When PT_LOAD segment alignment > the page size, allocate enough space
to ensure that the segment can be properly aligned.  This fixes
[BZ #28676].
---
 elf/dl-load.c         |  1 +
 elf/dl-load.h         |  2 +-
 elf/dl-map-segments.h | 51 +++++++++++++++++++++++++++++++++++++++----
 3 files changed, 49 insertions(+), 5 deletions(-)

diff --git a/elf/dl-load.c b/elf/dl-load.c
index bf8957e73c..9a23590bf4 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
 	  c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
 	  c->dataend = ph->p_vaddr + ph->p_filesz;
 	  c->allocend = ph->p_vaddr + ph->p_memsz;
+	  c->mapalign = ph->p_align;
 	  c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));

 	  /* Determine whether there is a gap between the last segment
diff --git a/elf/dl-load.h b/elf/dl-load.h
index e329d49a81..c121e3456c 100644
--- a/elf/dl-load.h
+++ b/elf/dl-load.h
@@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
    Its details have been expanded out and converted.  */
 struct loadcmd
 {
-  ElfW(Addr) mapstart, mapend, dataend, allocend;
+  ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
   ElfW(Off) mapoff;
   int prot;                             /* PROT_* bits.  */
 };
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index f9fb110ee3..f147ec232f 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -18,6 +18,52 @@

 #include <dl-load.h>

+/* Map a segment and align it properly.  */
+
+static __always_inline ElfW(Addr)
+_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
+		 const size_t maplength, int fd)
+{
+  if (c->mapalign > GLRO(dl_pagesize))
+    {
+      /* If the segment alignment > the page size, allocate enough space
+	 to ensure that the segment can be properly aligned.  */
+      ElfW(Addr) maplen = (maplength >= c->mapalign
+			   ? (maplength + c->mapalign)
+			   : (2 * c->mapalign));
+      ElfW(Addr) map_start
+	= (ElfW(Addr)) __mmap ((void *) mappref, maplen,
+			       PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE,
+			       -1, 0);
+      if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
+	return map_start;
+
+      ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
+      ElfW(Addr) map_end = map_start_aligned + maplength;
+      map_start_aligned
+	= (ElfW(Addr)) __mmap ((void *) map_start_aligned,
+			       maplength, c->prot,
+			       MAP_COPY|MAP_FILE|MAP_FIXED,
+			       fd, c->mapoff);
+      if (__glibc_likely ((void *) map_start_aligned != MAP_FAILED))
+	{
+	  /* Unmap the unused regions.  */
+	  ElfW(Addr) delta = map_start_aligned - map_start;
+	  if (delta)
+	    __munmap ((void *) map_start, delta);
+	  delta = map_start + maplen - map_end;
+	  if (delta)
+	    __munmap ((void *) map_end, delta);
+	}
+
+      return map_start_aligned;
+    }
+
+  return (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+			      c->prot, MAP_COPY|MAP_FILE,
+			      fd, c->mapoff);
+}
+
 /* This implementation assumes (as does the corresponding implementation
    of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
    are always laid out with all segments contiguous (or with gaps
@@ -53,10 +99,7 @@ _dl_map_segments (struct link_map *l, int fd,
 					  - MAP_BASE_ADDR (l));

       /* Remember which part of the address space this object uses.  */
-      l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
-					    c->prot,
-					    MAP_COPY|MAP_FILE,
-					    fd, c->mapoff);
+      l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
       if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
 	return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;