x86-64: Ignore r_addend for R_X86_64_GLOB_DAT/R_X86_64_JUMP_SLOT
Checks
Context               | Check   | Description
dj/TryBot-apply_patch | success | Patch applied to master at the time it was sent
dj/TryBot-32bit       | success | Build for i686
Commit Message
According to x86-64 psABI, r_addend should be ignored for R_X86_64_GLOB_DAT
and R_X86_64_JUMP_SLOT.
---
sysdeps/x86_64/dl-machine.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
Comments
On 2022-05-20, H.J. Lu via Libc-alpha wrote:
>According to x86-64 psABI, r_addend should be ignored for R_X86_64_GLOB_DAT
>and R_X86_64_JUMP_SLOT.
Looks good. In musl, all GLOB_DAT and JUMP_SLOT relocations ignore the
addend. Ignoring r_addend may or may not be profitable: since GLOB_DAT
and JUMP_SLOT are handled the same way as the symbolic relocation
R_X86_64_64, this may not save code.
>---
> sysdeps/x86_64/dl-machine.h | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
>diff --git a/sysdeps/x86_64/dl-machine.h b/sysdeps/x86_64/dl-machine.h
>index c70af7ab1e..7f607f6dff 100644
>--- a/sysdeps/x86_64/dl-machine.h
>+++ b/sysdeps/x86_64/dl-machine.h
>@@ -325,11 +325,13 @@ and creates an unsatisfiable circular dependency.\n",
> # endif
> /* Set to symbol size plus addend. */
> value = sym->st_size;
>+ *reloc_addr = value + reloc->r_addend;
>+ break;
> # endif
>- /* Fall through. */
>+
> case R_X86_64_GLOB_DAT:
> case R_X86_64_JUMP_SLOT:
>- *reloc_addr = value + reloc->r_addend;
>+ *reloc_addr = value;
> break;
>
> case R_X86_64_DTPMOD64:
>--
>2.36.1
>
* H. J. Lu via Libc-alpha:
> According to x86-64 psABI, r_addend should be ignored for R_X86_64_GLOB_DAT
> and R_X86_64_JUMP_SLOT.
> ---
> sysdeps/x86_64/dl-machine.h | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/sysdeps/x86_64/dl-machine.h b/sysdeps/x86_64/dl-machine.h
> index c70af7ab1e..7f607f6dff 100644
> --- a/sysdeps/x86_64/dl-machine.h
> +++ b/sysdeps/x86_64/dl-machine.h
> @@ -325,11 +325,13 @@ and creates an unsatisfiable circular dependency.\n",
> # endif
> /* Set to symbol size plus addend. */
> value = sym->st_size;
> + *reloc_addr = value + reloc->r_addend;
> + break;
> # endif
> - /* Fall through. */
> +
> case R_X86_64_GLOB_DAT:
> case R_X86_64_JUMP_SLOT:
> - *reloc_addr = value + reloc->r_addend;
> + *reloc_addr = value;
> break;
>
> case R_X86_64_DTPMOD64:
Context:
{
# ifndef RTLD_BOOTSTRAP
# ifdef __ILP32__
case R_X86_64_SIZE64:
/* Set to symbol size plus addend. */
*(Elf64_Addr *) (uintptr_t) reloc_addr
= (Elf64_Addr) sym->st_size + reloc->r_addend;
break;
case R_X86_64_SIZE32:
# else
case R_X86_64_SIZE64:
# endif
/* Set to symbol size plus addend. */
value = sym->st_size;
# endif
/* Fall through. */
case R_X86_64_GLOB_DAT:
case R_X86_64_JUMP_SLOT:
*reloc_addr = value + reloc->r_addend;
break;
The new version is:
{
# ifndef RTLD_BOOTSTRAP
# ifdef __ILP32__
case R_X86_64_SIZE64:
/* Set to symbol size plus addend. */
*(Elf64_Addr *) (uintptr_t) reloc_addr
= (Elf64_Addr) sym->st_size + reloc->r_addend;
break;
case R_X86_64_SIZE32:
# else
case R_X86_64_SIZE64:
# endif
/* Set to symbol size plus addend. */
value = sym->st_size;
*reloc_addr = value + reloc->r_addend;
break;
# endif
case R_X86_64_GLOB_DAT:
case R_X86_64_JUMP_SLOT:
*reloc_addr = value;
break;
So that “break” is actually in the right place.
Are there binaries out there with a non-zero addend? I'm a bit worried
that this change introduces hard-to-diagnose regressions.
Thanks,
Florian
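Florian's question can also be checked empirically for any given binary: readelf prints the addend column for every dynamic relocation. A quick check on a freshly built toy shared object (filenames here are illustrative):

```shell
# Build a toy shared object that needs both relocation kinds:
# a GOT-loaded extern variable (GLOB_DAT) and a PLT call (JUMP_SLOT).
cat > tiny.c <<'EOF'
extern int shared_var;
extern int shared_fn (void);
int use (void) { return shared_var + shared_fn (); }
EOF
cc -shared -fPIC tiny.c -o tiny.so

# Every GLOB_DAT/JUMP_SLOT line should end in "+ 0".
readelf -rW tiny.so | grep -E 'GLOB_DAT|JUMP_SLOT'
```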
On Wed, May 25, 2022 at 9:52 AM Florian Weimer <fweimer@redhat.com> wrote:
>
> * H. J. Lu via Libc-alpha:
>
> > According to x86-64 psABI, r_addend should be ignored for R_X86_64_GLOB_DAT
> > and R_X86_64_JUMP_SLOT.
> > ---
> > sysdeps/x86_64/dl-machine.h | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/sysdeps/x86_64/dl-machine.h b/sysdeps/x86_64/dl-machine.h
> > index c70af7ab1e..7f607f6dff 100644
> > --- a/sysdeps/x86_64/dl-machine.h
> > +++ b/sysdeps/x86_64/dl-machine.h
> > @@ -325,11 +325,13 @@ and creates an unsatisfiable circular dependency.\n",
> > # endif
> > /* Set to symbol size plus addend. */
> > value = sym->st_size;
> > + *reloc_addr = value + reloc->r_addend;
> > + break;
> > # endif
> > - /* Fall through. */
> > +
> > case R_X86_64_GLOB_DAT:
> > case R_X86_64_JUMP_SLOT:
> > - *reloc_addr = value + reloc->r_addend;
> > + *reloc_addr = value;
> > break;
> >
> > case R_X86_64_DTPMOD64:
>
> Context:
>
> {
> # ifndef RTLD_BOOTSTRAP
> # ifdef __ILP32__
> case R_X86_64_SIZE64:
> /* Set to symbol size plus addend. */
> *(Elf64_Addr *) (uintptr_t) reloc_addr
> = (Elf64_Addr) sym->st_size + reloc->r_addend;
> break;
>
> case R_X86_64_SIZE32:
> # else
> case R_X86_64_SIZE64:
> # endif
> /* Set to symbol size plus addend. */
> value = sym->st_size;
> # endif
> /* Fall through. */
> case R_X86_64_GLOB_DAT:
> case R_X86_64_JUMP_SLOT:
> *reloc_addr = value + reloc->r_addend;
> break;
>
>
> The new version is:
>
> {
> # ifndef RTLD_BOOTSTRAP
> # ifdef __ILP32__
> case R_X86_64_SIZE64:
> /* Set to symbol size plus addend. */
> *(Elf64_Addr *) (uintptr_t) reloc_addr
> = (Elf64_Addr) sym->st_size + reloc->r_addend;
> break;
>
> case R_X86_64_SIZE32:
> # else
> case R_X86_64_SIZE64:
> # endif
> /* Set to symbol size plus addend. */
> value = sym->st_size;
> *reloc_addr = value + reloc->r_addend;
> break;
> # endif
> case R_X86_64_GLOB_DAT:
> case R_X86_64_JUMP_SLOT:
> *reloc_addr = value;
> break;
>
> So that “break” is actually in the right place.
>
> Are there binaries out there with a non-zero addend? I'm a bit worried
Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
R_X86_64_JUMP_SLOT.
> that this change introduces hard-to-diagnose regressions.
>
> Thanks,
> Florian
>
* H. J. Lu:
> Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
> R_X86_64_JUMP_SLOT.
Okay, so it's very unlikely that we see non-zero addends.
Maybe you can add this to the commit message?
Given that, I think the change should be safe.
Thanks,
Florian
On 25/05/2022 15:19, Florian Weimer via Libc-alpha wrote:
> * H. J. Lu:
>
>> Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
>> R_X86_64_JUMP_SLOT.
>
> Okay, so it's very unlikely that we see non-zero addends.
> Maybe you can add this to the commit message?
>
> Given that, I think the change should be safe.
>
> Thanks,
> Florian
>
Maybe assert that r_addend is always zero?
On Wed, May 25, 2022 at 11:28 AM Adhemerval Zanella via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
>
>
> On 25/05/2022 15:19, Florian Weimer via Libc-alpha wrote:
> > * H. J. Lu:
> >
> >> Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
> >> R_X86_64_JUMP_SLOT.
> >
> > Okay, so it's very unlikely that we see non-zero addends.
> > Maybe you can add this to the commit message?
> >
> > Given that, I think the change should be safe.
> >
> > Thanks,
> > Florian
> >
>
> Maybe assert that r_addend is always zero?
I don't think it is necessary since r_addend has never been non-zero.
On 2022-05-25, H.J. Lu via Libc-alpha wrote:
>On Wed, May 25, 2022 at 11:28 AM Adhemerval Zanella via Libc-alpha
><libc-alpha@sourceware.org> wrote:
>>
>>
>>
>> On 25/05/2022 15:19, Florian Weimer via Libc-alpha wrote:
>> > * H. J. Lu:
>> >
>> >> Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
>> >> R_X86_64_JUMP_SLOT.
>> >
>> > Okay, so it's very unlikely that we see non-zero addends.
>> > Maybe you can add this to the commit message?
>> >
>> > Given that, I think the change should be safe.
>> >
>> > Thanks,
>> > Florian
>> >
>>
>> Maybe assert that r_addend is always zero?
>
>I don't think it is necessary since r_addend has never been non-zero.
I agree that an assert is unnecessary. Linkers just guarantee zero
addends.
It seems that by omitting reloc->r_addend, the GCC-generated assembly
is slightly shorter. I've tried placing the GLOB_DAT/JUMP_SLOT cases
next to R_X86_64_64 and don't get simplified assembly.
Reviewed-by: Fangrui Song <maskray@google.com>
On Wed, May 25, 2022 at 1:24 PM Fangrui Song <maskray@google.com> wrote:
>
> On 2022-05-25, H.J. Lu via Libc-alpha wrote:
> >On Wed, May 25, 2022 at 11:28 AM Adhemerval Zanella via Libc-alpha
> ><libc-alpha@sourceware.org> wrote:
> >>
> >>
> >>
> >> On 25/05/2022 15:19, Florian Weimer via Libc-alpha wrote:
> >> > * H. J. Lu:
> >> >
> >> >> Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
> >> >> R_X86_64_JUMP_SLOT.
> >> >
> >> > Okay, so it's very unlikely that we see non-zero addends.
> >> > Maybe you can add this to the commit message?
> >> >
> >> > Given that, I think the change should be safe.
> >> >
> >> > Thanks,
> >> > Florian
> >> >
> >>
> >> Maybe assert that r_addend is always zero?
> >
> >I don't think it is necessary since r_addend has never been non-zero.
>
> I agree that an assert is unnecessary. Linkers just guarantee zero
> addends.
>
> It seems that by omitting reloc->r_addend, the GCC generated assembler
> is slightly shorter. I've tried placing GLOB_DAT/JUMP_SLOT near 64
> and don't get simplified assembly.
>
> Reviewed-by: Fangrui Song <maskray@google.com>
Here is the patch I am checking in.
Thanks.
On Thu, May 26, 2022 at 2:00 PM H.J. Lu via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On Wed, May 25, 2022 at 1:24 PM Fangrui Song <maskray@google.com> wrote:
> >
> > On 2022-05-25, H.J. Lu via Libc-alpha wrote:
> > >On Wed, May 25, 2022 at 11:28 AM Adhemerval Zanella via Libc-alpha
> > ><libc-alpha@sourceware.org> wrote:
> > >>
> > >>
> > >>
> > >> On 25/05/2022 15:19, Florian Weimer via Libc-alpha wrote:
> > >> > * H. J. Lu:
> > >> >
> > >> >> Linker always sets r_addend to 0 for R_X86_64_GLOB_DAT and
> > >> >> R_X86_64_JUMP_SLOT.
> > >> >
> > >> > Okay, so it's very unlikely that we see non-zero addends.
> > >> > Maybe you can add this to the commit message?
> > >> >
> > >> > Given that, I think the change should be safe.
> > >> >
> > >> > Thanks,
> > >> > Florian
> > >> >
> > >>
> > >> Maybe assert that r_addend is always zero?
> > >
> > >I don't think it is necessary since r_addend has never been non-zero.
> >
> > I agree that an assert is unnecessary. Linkers just guarantee zero
> > addends.
> >
> > It seems that by omitting reloc->r_addend, the GCC generated assembler
> > is slightly shorter. I've tried placing GLOB_DAT/JUMP_SLOT near 64
> > and don't get simplified assembly.
> >
> > Reviewed-by: Fangrui Song <maskray@google.com>
>
> Here is the patch I am checking in.
>
> Thanks.
>
> --
> H.J.
I would like to backport this patch to release branches.
Any comments or objections?
--Sunil