[1/2] aarch64: Add DT_RELR support

Message ID ae9d0e1f8adcc0191011c57d80d3a2b076f2ae2b.1716972656.git.szabolcs.nagy@arm.com
State Committed
Series AArch64 DT_RELR support

Checks

Context Check Description
linaro-tcwg-bot/tcwg_binutils_build--master-aarch64 success Testing passed
linaro-tcwg-bot/tcwg_binutils_build--master-arm success Testing passed
linaro-tcwg-bot/tcwg_binutils_check--master-aarch64 success Testing passed
linaro-tcwg-bot/tcwg_binutils_check--master-arm success Testing passed

Commit Message

Szabolcs Nagy May 29, 2024, 9:03 a.m. UTC
The logic that decides whether an input relocation for a symbol
becomes a particular kind of output relocation is one of the
hard-to-maintain parts of the bfd ld backend, since it is partially
repeated across

 elfNN_aarch64_check_relocs (where dynamic relocations are counted per
 symbol and input section),

 elfNN_aarch64_late_size_sections (where relocation sections are sized
 and GOT offsets assigned),

 elfNN_aarch64_relocate_section (where most relocations are applied and
 output to a relocation section),

 elfNN_aarch64_finish_dynamic_symbol (where some of the GOT
 relocations are applied and output).

The DT_RELR support adds another layer to this complexity: after the
output relocation sections are sized, so all dynamic relocations have
been accounted for (in elfNN_aarch64_late_size_sections), we scan the
symbols and input relocations again to decide which ones become
relative relocations that can be packed. The sizes of the relocation
sections are updated accordingly. This logic must be consistent with
the code that applies the relocs later, so that each relative
relocation is emitted exactly once: either in .rela.* or packed in
.relr.dyn.
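
For example, the condition that routes a relative reloc to the packed
stream instead of .rela.* appears on both the sizing side (record_relr)
and the apply side (elfNN_aarch64_relocate_section). A minimal sketch
of that shared predicate (relr_packable_p is a made-up name for
illustration, not a helper in the patch):

  /* A relative reloc at offset OFF in section SEC is packed iff
     packing is enabled, the section guarantees that the output
     address stays even, and the offset itself is even (the low
     bit of a RELR entry is the bitmap marker).  */
  static bool
  relr_packable_p (const struct bfd_link_info *info,
                   const asection *sec, bfd_vma off)
  {
    return info->enable_dt_relr
           && sec->alignment_power != 0
           && off % 2 == 0;
  }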

Sizing of .relr.dyn is done via elfNN_aarch64_size_relative_relocs,
which may be called repeatedly whenever the layout changes, since an
address change can affect the size of the packed format. The final
content is then emitted by elfNN_aarch64_finish_relative_relocs, which
is called after the layout is final and before relocations are applied
and emitted. These hooks are only called if DT_RELR is enabled.
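
As a worked example of why the sizing may need to iterate (the
addresses are illustrative): an address entry records one reloc and
sets the bitmap base 8 bytes after it; each following bitmap entry
covers the next 63 aligned 8-byte slots. With packed relocs at 0x1000
and 0x11f8, the second sits 62*8 bytes past the bitmap base (0x1008)
and both fit in a single address-plus-bitmap pair (16 bytes). If a
layout change moves the second reloc to 0x1200, its delta becomes
63*8, one slot out of range, so a second address entry is needed and
.relr.dyn grows by 8 bytes; that growth can shift later sections and
change the deltas again, hence the bounded iteration via
relr_layout_iter.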

We only pack relative relocs that are known to be aligned in the
output during .relr.dyn sizing; the potentially unaligned relative
relocs are emitted normally (in .rela.*, not packed), because the
packed format requires aligned addresses.
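
To illustrate the packed encoding emitted by
elfNN_aarch64_finish_relative_relocs below (the addresses are made up
for the example): a 64-bit RELR entry with the low bit clear is an
address with a relocation, and each following entry with the low bit
set is a bitmap whose bits 1..63 mark relocations in the 63 aligned
8-byte slots that follow. Packed relocs at 0x1000, 0x1008, 0x1018 and
0x1020 encode as two words:

  0x0000000000001000   address entry: reloc at 0x1000,
                       bitmap base becomes 0x1008
  0x000000000000001b   bitmap 0b11011: bit 0 is the marker;
                       bits 1, 3 and 4 mark relocs at 0x1008,
                       0x1018 and 0x1020

That is 16 bytes instead of four 24-byte Elf64_Rela entries, and it
also shows why an odd address cannot be represented at all.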
---
 bfd/elfnn-aarch64.c           | 429 +++++++++++++++++++++++++++++++++-
 bfd/elfxx-aarch64.h           |  11 +
 ld/emulparams/aarch64elf.sh   |   2 +
 ld/emulparams/aarch64fbsd.sh  |   2 +
 ld/emulparams/aarch64linux.sh |   2 +
 5 files changed, 445 insertions(+), 1 deletion(-)
  

Comments

Nick Clifton May 31, 2024, 8:14 a.m. UTC | #1
Hi Szabolcs,

> [...]

Approved - please apply.

Cheers
   Nick
  

Patch

diff --git a/bfd/elfnn-aarch64.c b/bfd/elfnn-aarch64.c
index 55871dbe763..81f1a646125 100644
--- a/bfd/elfnn-aarch64.c
+++ b/bfd/elfnn-aarch64.c
@@ -2706,6 +2706,19 @@  struct elf_aarch64_link_hash_table
   /* Used by local STT_GNU_IFUNC symbols.  */
   htab_t loc_hash_table;
   void * loc_hash_memory;
+
+  /* Array of relative relocs to be emitted in DT_RELR format.  */
+  bfd_size_type relr_alloc;
+  bfd_size_type relr_count;
+  struct relr_entry
+  {
+    asection *sec;
+    bfd_vma off;
+  } *relr;
+  /* Sorted output addresses of above relative relocs.  */
+  bfd_vma *relr_sorted;
+  /* Layout recomputation count.  */
+  bfd_size_type relr_layout_iter;
 };
 
 /* Create an entry in an AArch64 ELF linker hash table.  */
@@ -5971,6 +5984,18 @@  elfNN_aarch64_final_link_relocate (reloc_howto_type *howto,
 		       || !(bfd_link_pie (info) || SYMBOLIC_BIND (info, h))
 		       || !h->def_regular))
 	    outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
+	  else if (info->enable_dt_relr
+		   && input_section->alignment_power != 0
+		   && rel->r_offset % 2 == 0)
+	    {
+	      /* Don't emit a relative relocation that is packed; only
+		 apply the addend.  */
+	      if (globals->no_apply_dynamic_relocs)
+		return bfd_reloc_ok;
+	      return _bfd_final_link_relocate (howto, input_bfd, input_section,
+					       contents, rel->r_offset, value,
+					       signed_addend);
+	    }
 	  else
 	    {
 	      int symbol;
@@ -6237,7 +6262,8 @@  elfNN_aarch64_final_link_relocate (reloc_howto_type *howto,
 						     addend, weak_undef_p);
       }
 
-      if (relative_reloc)
+      /* Emit relative relocations, but not if they are packed (DT_RELR).  */
+      if (relative_reloc && !info->enable_dt_relr)
 	{
 	  asection *s;
 	  Elf_Internal_Rela outrel;
@@ -9169,6 +9195,363 @@  elfNN_aarch64_allocate_local_ifunc_dynrelocs (void **slot, void *inf)
   return elfNN_aarch64_allocate_ifunc_dynrelocs (h, inf);
 }
 
+/* Record a relative relocation that will be emitted packed (DT_RELR).
+   Called after relocation sections are sized, so undo the size accounting
+   for this relocation.  */
+
+static bool
+record_relr (struct elf_aarch64_link_hash_table *htab, asection *sec,
+	     bfd_vma off, asection *sreloc)
+{
+  /* Undo the relocation section size accounting.  */
+  BFD_ASSERT (sreloc->size >= RELOC_SIZE (htab));
+  sreloc->size -= RELOC_SIZE (htab);
+  /* The packing format uses the last bit of the address so that
+     must be aligned.  We don't pack relocations that may not be
+     aligned even though the final output address could end up
+     aligned, to avoid complex sizing logic for a rare case.  */
+  BFD_ASSERT (off % 2 == 0 && sec->alignment_power > 0);
+  if (htab->relr_count >= htab->relr_alloc)
+    {
+      if (htab->relr_alloc == 0)
+	htab->relr_alloc = 4096;
+      else
+	htab->relr_alloc *= 2;
+      htab->relr = bfd_realloc (htab->relr,
+				htab->relr_alloc * sizeof (*htab->relr));
+      if (htab->relr == NULL)
+	return false;
+    }
+  htab->relr[htab->relr_count].sec = sec;
+  htab->relr[htab->relr_count].off = off;
+  htab->relr_count++;
+  return true;
+}
+
+/* Follow elfNN_aarch64_allocate_dynrelocs, but only record relative
+   relocations against the GOT and undo their previous size accounting.  */
+
+static bool
+record_relr_dyn_got_relocs (struct elf_link_hash_entry *h, void *inf)
+{
+
+  if (h->root.type == bfd_link_hash_indirect)
+    return true;
+  if (h->root.type == bfd_link_hash_warning)
+    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+  if (h->type == STT_GNU_IFUNC && h->def_regular)
+    return true;
+  if (h->got.refcount <= 0)
+    return true;
+  if (elf_aarch64_hash_entry (h)->got_type != GOT_NORMAL)
+    return true;
+
+  struct bfd_link_info *info = (struct bfd_link_info *) inf;
+  struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
+
+  if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
+       || h->root.type != bfd_link_hash_undefweak)
+      && bfd_link_pic (info)
+      /* Undefined weak symbol in static PIE resolves to 0 without
+	 any dynamic relocations.  */
+      && !UNDEFWEAK_NO_DYNAMIC_RELOC (info, h))
+    {
+      bool relative_reloc = SYMBOL_REFERENCES_LOCAL (info, h)
+			    && !bfd_is_abs_symbol (&h->root);
+      if (relative_reloc)
+	if (!record_relr (htab, htab->root.sgot, h->got.offset,
+			  htab->root.srelgot))
+	  return false;
+    }
+  return true;
+}
+
+/* Record packed relative relocs against the GOT for local symbols.
+   Undo the size accounting of elfNN_aarch64_late_size_sections.  */
+
+static bool
+record_relr_local_got_relocs (bfd *input_bfd, struct bfd_link_info *info)
+{
+  struct elf_aarch64_local_symbol *locals;
+  Elf_Internal_Shdr *symtab_hdr;
+  struct elf_aarch64_link_hash_table *htab;
+
+  if (!bfd_link_pic (info))
+    return true;
+
+  locals = elf_aarch64_locals (input_bfd);
+  if (locals == NULL)
+    return true;
+
+  symtab_hdr = &elf_symtab_hdr (input_bfd);
+  htab = elf_aarch64_hash_table (info);
+  for (unsigned int i = 0; i < symtab_hdr->sh_info; i++)
+    {
+      bfd_vma off = locals[i].got_offset;
+      if (locals[i].got_refcount <= 0)
+	continue;
+      if ((locals[i].got_type & GOT_NORMAL) == 0)
+	continue;
+
+      /* FIXME: If the local symbol is in SHN_ABS then emitting
+	 a relative relocation is not correct, but it seems to
+	 be wrong in elfNN_aarch64_final_link_relocate too.  */
+      if (!record_relr (htab, htab->root.sgot, off, htab->root.srelgot))
+	return false;
+    }
+  return true;
+}
+
+/* Follows the logic of elfNN_aarch64_relocate_section to decide which
+   relocations will become relative and can be packed.  Ignore
+   relocations against the GOT; those are handled separately
+   per-symbol.  Undo the size accounting of the packed relocations and
+   record them so the relr section can be sized later.  */
+
+static bool
+record_relr_non_got_relocs (bfd *input_bfd, struct bfd_link_info *info,
+			    asection *sec)
+{
+  const Elf_Internal_Rela *relocs;
+  const Elf_Internal_Rela *rel;
+  const Elf_Internal_Rela *rel_end;
+  asection *sreloc;
+  struct elf_aarch64_link_hash_table *htab;
+  Elf_Internal_Shdr *symtab_hdr;
+  struct elf_link_hash_entry **sym_hashes;
+
+  if (sec->reloc_count == 0)
+    return true;
+  if ((sec->flags & (SEC_RELOC | SEC_ALLOC | SEC_DEBUGGING))
+      != (SEC_RELOC | SEC_ALLOC))
+    return true;
+  if (sec->alignment_power == 0)
+    return true;
+  sreloc = elf_section_data (sec)->sreloc;
+  if (sreloc == NULL)
+    return true;
+  htab = elf_aarch64_hash_table (info);
+  symtab_hdr = &elf_symtab_hdr (input_bfd);
+  sym_hashes = elf_sym_hashes (input_bfd);
+  relocs = _bfd_elf_link_info_read_relocs (input_bfd, info, sec, NULL, NULL,
+					   info->keep_memory);
+  BFD_ASSERT (relocs != NULL);
+  rel_end = relocs + sec->reloc_count;
+  for (rel = relocs; rel < rel_end; rel++)
+    {
+      unsigned int r_symndx = ELFNN_R_SYM (rel->r_info);
+      unsigned int r_type = ELFNN_R_TYPE (rel->r_info);
+
+      bfd_reloc_code_real_type bfd_r_type
+	= elfNN_aarch64_bfd_reloc_from_type (input_bfd, r_type);
+      /* Handle relocs that can become R_AARCH64_RELATIVE,
+	 but not ones against the GOT as those are handled
+	 separately per-symbol.  */
+      if (bfd_r_type != BFD_RELOC_AARCH64_NN)
+	continue;
+      /* Can only pack a relocation against an aligned address.  */
+      if (rel->r_offset % 2 != 0)
+	continue;
+
+      struct elf_link_hash_entry *h = NULL;
+      asection *def_sec = NULL;
+      bool resolved_to_zero = false;
+      if (r_symndx < symtab_hdr->sh_info)
+	{
+	  /* A local symbol.  */
+	  Elf_Internal_Sym *isym;
+	  isym = bfd_sym_from_r_symndx (&htab->root.sym_cache,
+					input_bfd, r_symndx);
+	  BFD_ASSERT (isym != NULL);
+	  if (ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
+	    continue;
+	  def_sec = bfd_section_from_elf_index (input_bfd, isym->st_shndx);
+	}
+      else
+	{
+	  h = sym_hashes[r_symndx - symtab_hdr->sh_info];
+	  while (h->root.type == bfd_link_hash_indirect
+		 || h->root.type == bfd_link_hash_warning)
+	    h = (struct elf_link_hash_entry *) h->root.u.i.link;
+
+	  /* Filter out symbols that cannot have a relative reloc.  */
+	  if (h->dyn_relocs == NULL)
+	    continue;
+	  if (bfd_is_abs_symbol (&h->root))
+	    continue;
+	  if (h->type == STT_GNU_IFUNC)
+	    continue;
+
+	  if (h->root.type == bfd_link_hash_defined
+	      || h->root.type == bfd_link_hash_defweak)
+	    def_sec = h->root.u.def.section;
+	  resolved_to_zero = UNDEFWEAK_NO_DYNAMIC_RELOC (info, h);
+	}
+      if (def_sec != NULL && discarded_section (def_sec))
+	continue;
+      /* Same logic as in elfNN_aarch64_final_link_relocate, minus
+	 the conditionals that cannot result in a relative reloc.  */
+      if (bfd_link_pic (info)
+	  && (h == NULL
+	      || (ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
+		  && !resolved_to_zero)
+	      || h->root.type != bfd_link_hash_undefweak))
+	{
+	  if (h != NULL
+	      && h->dynindx != -1
+	      && (!(bfd_link_pie (info) || SYMBOLIC_BIND (info, h))
+		  || !h->def_regular))
+	    continue;
+	  if (!record_relr (htab, sec, rel->r_offset, sreloc))
+	    return false;
+	}
+    }
+  return true;
+}
+
+static int
+cmp_relr_addr (const void *p, const void *q)
+{
+  const bfd_vma *a = p;
+  const bfd_vma *b = q;
+  return *a < *b ? -1 : *a > *b ? 1 : 0;
+}
+
+/* Produce a malloc'd sorted array of reloc addresses in htab->relr_sorted.
+   Returns false on allocation failure.  */
+
+static bool
+sort_relr (struct bfd_link_info *info,
+	   struct elf_aarch64_link_hash_table *htab)
+{
+  if (htab->relr_count == 0)
+    return true;
+
+  bfd_vma *addr = htab->relr_sorted;
+  if (addr == NULL)
+    {
+      addr = bfd_malloc (htab->relr_count * sizeof (*addr));
+      if (addr == NULL)
+	return false;
+      htab->relr_sorted = addr;
+    }
+
+  for (bfd_size_type i = 0; i < htab->relr_count; i++)
+    {
+      bfd_vma off = _bfd_elf_section_offset (info->output_bfd, info,
+					     htab->relr[i].sec,
+					     htab->relr[i].off);
+      addr[i] = htab->relr[i].sec->output_section->vma
+		+ htab->relr[i].sec->output_offset
+		+ off;
+    }
+  qsort (addr, htab->relr_count, sizeof (*addr), cmp_relr_addr);
+  return true;
+}
+
+/* Size .relr.dyn whenever the layout changes: the number of packed
+   relocs is unchanged but the packed representation can change.  */
+
+bool
+elfNN_aarch64_size_relative_relocs (struct bfd_link_info *info,
+				   bool *need_layout)
+{
+  struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
+  asection *srelrdyn = htab->root.srelrdyn;
+  *need_layout = false;
+
+  if (!sort_relr (info, htab))
+    return false;
+  bfd_vma *addr = htab->relr_sorted;
+
+  BFD_ASSERT (srelrdyn != NULL);
+  bfd_size_type oldsize = srelrdyn->size;
+  srelrdyn->size = 0;
+  for (bfd_size_type i = 0; i < htab->relr_count; )
+    {
+      bfd_vma base = addr[i];
+      i++;
+      srelrdyn->size += 8;
+      base += 8;
+      for (;;)
+	{
+	  bfd_size_type start_i = i;
+	  while (i < htab->relr_count
+		 && addr[i] - base < 63 * 8
+		 && (addr[i] - base) % 8 == 0)
+	    i++;
+	  if (i == start_i)
+	    break;
+	  srelrdyn->size += 8;
+	  base += 63 * 8;
+	}
+    }
+  if (srelrdyn->size != oldsize)
+    {
+      *need_layout = true;
+      /* Stop after a few iterations in case the layout does not
+	 converge; we can safely do this when the size would shrink.  */
+      if (htab->relr_layout_iter++ > 5 && srelrdyn->size < oldsize)
+	{
+	  srelrdyn->size = oldsize;
+	  *need_layout = false;
+	}
+    }
+  return true;
+}
+
+/* Emit the .relr.dyn section after it is sized and the layout is fixed.  */
+
+bool
+elfNN_aarch64_finish_relative_relocs (struct bfd_link_info *info)
+{
+  struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
+  asection *srelrdyn = htab->root.srelrdyn;
+  bfd *dynobj = htab->root.dynobj;
+
+  if (srelrdyn == NULL || srelrdyn->size == 0)
+    return true;
+  srelrdyn->contents = bfd_alloc (dynobj, srelrdyn->size);
+  if (srelrdyn->contents == NULL)
+    return false;
+  bfd_vma *addr = htab->relr_sorted;
+  bfd_byte *loc = srelrdyn->contents;
+  for (bfd_size_type i = 0; i < htab->relr_count; )
+    {
+      bfd_vma base = addr[i];
+      i++;
+      bfd_put_64 (dynobj, base, loc);
+      loc += 8;
+      base += 8;
+      for (;;)
+	{
+	  bfd_vma bits = 0;
+	  while (i < htab->relr_count)
+	    {
+	      bfd_vma delta = addr[i] - base;
+	      if (delta >= 63 * 8 || delta % 8 != 0)
+		break;
+	      bits |= (bfd_vma) 1 << (delta / 8);
+	      i++;
+	    }
+	  if (bits == 0)
+	    break;
+	  bfd_put_64 (dynobj, (bits << 1) | 1, loc);
+	  loc += 8;
+	  base += 63 * 8;
+	}
+    }
+  free (addr);
+  htab->relr_sorted = NULL;
+  /* Pad any excess with 1's, a do-nothing encoding.  */
+  while (loc < srelrdyn->contents + srelrdyn->size)
+    {
+      bfd_put_64 (dynobj, 1, loc);
+      loc += 8;
+    }
+  return true;
+}
+
 /* This is the most important function of all . Innocuosly named
    though !  */
 
@@ -9344,6 +9727,27 @@  elfNN_aarch64_late_size_sections (bfd *output_bfd ATTRIBUTE_UNUSED,
 	}
     }
 
+  /* Record the relative relocations that will be packed and undo the
+     size allocation for them in .rela.*. The size of .relr.dyn will be
+     computed later iteratively since it depends on the final layout.  */
+  if (info->enable_dt_relr && !bfd_link_relocatable (info))
+    {
+      elf_link_hash_traverse (&htab->root, record_relr_dyn_got_relocs, info);
+
+      for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
+	{
+	  if (!is_aarch64_elf (ibfd))
+	    continue;
+
+	  for (s = ibfd->sections; s != NULL; s = s->next)
+	    if (!record_relr_non_got_relocs (ibfd, info, s))
+	      return false;
+
+	  if (!record_relr_local_got_relocs (ibfd, info))
+	    return false;
+	}
+    }
+
   /* Init mapping symbols information to use later to distingush between
      code and data while scanning for errata.  */
   if (htab->fix_erratum_835769 || htab->fix_erratum_843419)
@@ -9383,6 +9787,19 @@  elfNN_aarch64_late_size_sections (bfd *output_bfd ATTRIBUTE_UNUSED,
 	  if (s != htab->root.srelplt)
 	    s->reloc_count = 0;
 	}
+      else if (s == htab->root.srelrdyn)
+	{
+	  /* Remove .relr.dyn based on relr_count, not size, since
+	     it is not sized yet.  */
+	  if (htab->relr_count == 0)
+	    s->flags |= SEC_EXCLUDE;
+	  else
+	    /* Force dynamic tags for relocs even if there are no
+	       .rela* relocs, required for setting DT_TEXTREL.  */
+	    relocs = true;
+	  /* Allocate contents later.  */
+	  continue;
+	}
       else
 	{
 	  /* It's not one of our sections, so don't allocate space.  */
@@ -9741,6 +10158,9 @@  elfNN_aarch64_finish_dynamic_symbol (bfd *output_bfd,
 	  if (!(h->def_regular || ELF_COMMON_DEF_P (h)))
 	    return false;
 	  BFD_ASSERT ((h->got.offset & 1) != 0);
+	  /* Don't emit relative relocs if they are packed.  */
+	  if (info->enable_dt_relr)
+	    goto skip_got_reloc;
 	  rela.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
 	  rela.r_addend = (h->root.u.def.value
 			   + h->root.u.def.section->output_section->vma
@@ -9760,6 +10180,7 @@  elfNN_aarch64_finish_dynamic_symbol (bfd *output_bfd,
       loc += htab->root.srelgot->reloc_count++ * RELOC_SIZE (htab);
       bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
     }
+skip_got_reloc:
 
   if (h->needs_copy)
     {
@@ -10395,6 +10816,12 @@  const struct elf_size_info elfNN_aarch64_size_info =
 #define elf_backend_merge_gnu_properties	\
   elfNN_aarch64_merge_gnu_properties
 
+#define elf_backend_size_relative_relocs	\
+  elfNN_aarch64_size_relative_relocs
+
+#define elf_backend_finish_relative_relocs	\
+  elfNN_aarch64_finish_relative_relocs
+
 #define elf_backend_can_refcount       1
 #define elf_backend_can_gc_sections    1
 #define elf_backend_plt_readonly       1
diff --git a/bfd/elfxx-aarch64.h b/bfd/elfxx-aarch64.h
index 6c084f75796..f21e5ee5efc 100644
--- a/bfd/elfxx-aarch64.h
+++ b/bfd/elfxx-aarch64.h
@@ -96,6 +96,17 @@  extern bool elf32_aarch64_size_stubs
 extern bool elf32_aarch64_build_stubs
   (struct bfd_link_info *);
 
+/* AArch64 relative relocation packing support for ELF64.  */
+extern bool elf64_aarch64_size_relative_relocs
+  (struct bfd_link_info *, bool *);
+extern bool elf64_aarch64_finish_relative_relocs
+  (struct bfd_link_info *);
+/* AArch64 relative relocation packing support for ELF32.  */
+extern bool elf32_aarch64_size_relative_relocs
+  (struct bfd_link_info *, bool *);
+extern bool elf32_aarch64_finish_relative_relocs
+  (struct bfd_link_info *);
+
 /* Take the PAGE component of an address or offset.  */
 #define PG(x)	     ((x) & ~ (bfd_vma) 0xfff)
 #define PG_OFFSET(x) ((x) &   (bfd_vma) 0xfff)
diff --git a/ld/emulparams/aarch64elf.sh b/ld/emulparams/aarch64elf.sh
index 8f68e517b45..72616b57691 100644
--- a/ld/emulparams/aarch64elf.sh
+++ b/ld/emulparams/aarch64elf.sh
@@ -1,3 +1,5 @@ 
+source_sh ${srcdir}/emulparams/dt-relr.sh
+
 ARCH=aarch64
 MACHINE=
 NOP=0x1f2003d5
diff --git a/ld/emulparams/aarch64fbsd.sh b/ld/emulparams/aarch64fbsd.sh
index 0bcab7a4baa..db68d436544 100644
--- a/ld/emulparams/aarch64fbsd.sh
+++ b/ld/emulparams/aarch64fbsd.sh
@@ -1,3 +1,5 @@ 
+source_sh ${srcdir}/emulparams/dt-relr.sh
+
 ARCH=aarch64
 MACHINE=
 NOP=0x1f2003d5
diff --git a/ld/emulparams/aarch64linux.sh b/ld/emulparams/aarch64linux.sh
index 7b4a0e86073..6fe82a70659 100644
--- a/ld/emulparams/aarch64linux.sh
+++ b/ld/emulparams/aarch64linux.sh
@@ -1,3 +1,5 @@ 
+source_sh ${srcdir}/emulparams/dt-relr.sh
+
 ARCH=aarch64
 MACHINE=
 NOP=0x1f2003d5
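
A quick way to exercise the feature once this is applied (illustrative
commands; the file names are made up): sourcing dt-relr.sh enables the
-z pack-relative-relocs option for these emulations.

  $ gcc -shared -fPIC -Wl,-z,pack-relative-relocs -o libfoo.so foo.c
  $ readelf -S libfoo.so   # expect a .relr.dyn section of type RELR
  $ readelf -d libfoo.so   # expect DT_RELR/DT_RELRSZ/DT_RELRENT tags

Potentially unaligned relative relocs keep their entries in .rela.dyn,
so both sections can be present in the same output.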