[4/7] Class-fy partial_die_info

Message ID c2b2b52e-707c-b2c3-c7ee-f581bb2e17b7@simark.ca

Commit Message

Simon Marchi Jan. 29, 2018, 1:15 a.m. UTC
  On 2018-01-25 04:38 AM, Yao Qi wrote:
> This patch is to class-fy partial_die_info.  Two things are special here:
> 
>  - disable the assignment operator, but keep the copy ctor, which is
>    used in load_partial_dies,
>  - have a private ctor which is only used by dwarf2_cu::find_partial_die;
>    I don't want other code to use it, so make it private.

Hi Yao,

From what I understand, the only reason to have that private constructor is
to construct a temporary partial_die_info object used to search in the htab,
is that right?  If so, converting that htab_t to an std::unordered_map would
remove the need for all this, since you don't need to construct an object
to search it.  See the diff below that applies on top of this patch.

It's not thoroughly tested and I am not sure of the validity of the
per_cu->cu->partial_dies.empty () call in find_partial_die, but I think it
should work.  Plus, it adds some type-safety, which I am a big fan of.
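
To illustrate (a simplified sketch, not the exact code from the diff
below): with the htab_t, find_partial_die has to materialize a throwaway
partial_die_info just to act as the search key, while with the map the
offset itself is the key:

  /* htab_t: build a dummy object to search with.  */
  partial_die_info lookup (sect_off);	/* Needs the private ctor.  */
  auto *pd
    = (partial_die_info *) htab_find_with_hash (partial_dies, &lookup,
						to_underlying (sect_off));

  /* std::unordered_map: look up by key directly.  */
  auto it = partial_dies.find (sect_off);
  partial_die_info *pd2 = it != partial_dies.end () ? it->second : nullptr;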

But otherwise, the patch is fine with me.

Simon


From dacaf6028e921efeb8c7420db7f663e5b4d7e8f1 Mon Sep 17 00:00:00 2001
From: Simon Marchi <simon.marchi@polymtl.ca>
Date: Sun, 28 Jan 2018 19:47:01 -0500
Subject: [PATCH] use std::unordered_map

---
 gdb/dwarf2read.c | 115 +++++++++++++------------------------------------------
 1 file changed, 26 insertions(+), 89 deletions(-)
  

Comments

Yao Qi Jan. 30, 2018, 10:49 a.m. UTC | #1
Simon Marchi <simark@simark.ca> writes:

> From what I understand, the only reason to have that private constructor is
> to construct a temporary partial_die_info object used to search in the htab,
> is that right?  If so, converting that htab_t to an std::unordered_map would

Right.

>
> -  /* Hash table holding all the loaded partial DIEs
> -     with partial_die->offset.SECT_OFF as hash.  */
> -  htab_t partial_dies = nullptr;
> +  /* Hash map holding all the loaded partial DIEs
> +     with their section offset as the key.  */
> +  std::unordered_map<sect_offset, partial_die_info *> partial_dies;
>

This doesn't compile with my g++ 4.9, as its library doesn't provide a
std::hash<T> specialization for enumeration types.  That specialization is
only available since C++14.  http://en.cppreference.com/w/cpp/utility/hash

I can change it to

std::unordered_map<std::underlying_type<sect_offset>::type,
		     partial_die_info *> partial_dies;

to fix the compiler errors.
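
For instance, the call sites would then convert explicitly with gdb's
to_underlying helper; a sketch:

  std::unordered_map<std::underlying_type<sect_offset>::type,
		     partial_die_info *> partial_dies;

  /* Insert and look up via the underlying integer type.  */
  partial_dies[to_underlying (part_die->sect_off)] = part_die;
  auto it = partial_dies.find (to_underlying (sect_off));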

> @@ -18259,14 +18227,7 @@ load_partial_dies (const struct die_reader_specs *reader,
>    if (cu->per_cu->load_all_dies)
>      load_all = 1;
>
> -  cu->partial_dies
> -    = htab_create_alloc_ex (cu->header.length / 12,
> -			    partial_die_hash,
> -			    partial_die_eq,
> -			    NULL,
> -			    &cu->comp_unit_obstack,
> -			    hashtab_obstack_allocate,
> -			    dummy_obstack_deallocate);
> +  cu->partial_dies.clear ();

This changes the behavior: without this patch, cu->partial_dies's
entries are allocated on cu->comp_unit_obstack.  After this patch,
they are allocated on the heap.  I don't know how that affects
performance.
  
Pedro Alves Jan. 30, 2018, 11:39 a.m. UTC | #2
On 01/29/2018 01:15 AM, Simon Marchi wrote:

> From what I understand, the only reason to have that private constructor is
> to construct a temporary partial_die_info object used to search in the htab,
> is that right?  If so, converting that htab_t to an std::unordered_map would
> remove the need for all this, since you don't need to construct an object
> to search it.  See the diff below that applies on top of this patch.
> 
> It's not thoroughly tested and I am not sure of the validity of the
> per_cu->cu->partial_dies.empty () call in find_partial_die, but I think it
> should work.  Plus, it adds some type-safety, which I am a big fan of.
> 
> But otherwise, the patch is fine with me.

Careful here.  This could do with some benchmarking.  The DWARF reading code
is performance (both timing and memory) sensitive.  This is trading an open
addressing hash table (htab_t) for a node-based closed addressing hash table.
The keys/elements in the map are small, so I'd expect this to make
a difference.  Also, this is trading an in-principle cache-friendly
obstack allocation scheme for the standard new allocator.

Thanks,
Pedro Alves
  
Pedro Alves Jan. 30, 2018, 3:11 p.m. UTC | #3
On 01/30/2018 10:49 AM, Yao Qi wrote:
> Simon Marchi <simark@simark.ca> writes:

>> -  /* Hash table holding all the loaded partial DIEs
>> -     with partial_die->offset.SECT_OFF as hash.  */
>> -  htab_t partial_dies = nullptr;
>> +  /* Hash map holding all the loaded partial DIEs
>> +     with their section offset as the key.  */
>> +  std::unordered_map<sect_offset, partial_die_info *> partial_dies;
>>
> This doesn't compile with my g++ 4.9, as its library doesn't provide a
> std::hash<T> specialization for enumeration types.  That specialization is
> only available since C++14.  http://en.cppreference.com/w/cpp/utility/hash
> 
> I can change it to
> 
> std::unordered_map<std::underlying_type<sect_offset>::type,
> 		     partial_die_info *> partial_dies;
> 
> to fix the compiler errors.

Note: in cases like these, we don't need to forgo using the
(strong) enum as the key type.  unordered_map's third (defaulted)
template parameter is the hasher type to use, and we have a
convenience hasher for exactly this:

 [pushed] Add gdb::hash_enum
 https://sourceware.org/ml/gdb-patches/2017-12/msg00210.html

Currently used in dwarf2read.c:

  std::unordered_map<sect_offset,
                     dwarf2_per_cu_data *,
                     gdb::hash_enum<sect_offset>>
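
The hasher itself is only a few lines; roughly (see the actual
gdb::hash_enum in the patch linked above for the real thing), it just
forwards to std::hash on the underlying integer type:

  #include <cstddef>
  #include <functional>
  #include <type_traits>

  template<typename Enum>
  struct hash_enum
  {
    std::size_t operator() (Enum e) const noexcept
    {
      using underlying = typename std::underlying_type<Enum>::type;
      return std::hash<underlying> () (static_cast<underlying> (e));
    }
  };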

Thanks,
Pedro Alves
  
Simon Marchi Jan. 31, 2018, 3:46 a.m. UTC | #4
On 2018-01-30 06:39, Pedro Alves wrote:
> On 01/29/2018 01:15 AM, Simon Marchi wrote:
> 
>> From what I understand, the only reason to have that private constructor
>> is to construct a temporary partial_die_info object used to search in
>> the htab, is that right?  If so, converting that htab_t to an
>> std::unordered_map would remove the need for all this, since you don't
>> need to construct an object to search it.  See the diff below that
>> applies on top of this patch.
>> 
>> It's not thoroughly tested and I am not sure of the validity of the
>> per_cu->cu->partial_dies.empty () call in find_partial_die, but I think
>> it should work.  Plus, it adds some type-safety, which I am a big fan of.
>> 
>> But otherwise, the patch is fine with me.
> 
> Careful here.  This could do with some benchmarking.  The DWARF reading
> code is performance (both timing and memory) sensitive.  This is trading
> an open addressing hash table (htab_t) for a node-based closed addressing
> hash table.  The keys/elements in the map are small, so I'd expect this
> to make a difference.  Also, this is trading an in-principle
> cache-friendly obstack allocation scheme for the standard new allocator.

Ah, indeed.  I thought that unordered_map would be implemented the same 
way as htab_t, but I see that's not the case.  In some quick tests on a 
big binary, it increases the time to read the symbols from an average of 
37 seconds to an average of 42 seconds.
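
(For reference, a simple way to take that kind of measurement is to time
a batch run that does nothing but load symbols, e.g.:

  $ time ./gdb/gdb -nx -batch /path/to/big-binary

run several times and averaged.)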

I understand the different hash table implementation having an impact, 
but I don't really understand how the allocation scheme can have a 
meaningful impact.  The partial_die_info objects are still allocated on 
the obstack, aren't they?  So it's just the space for the table itself 
that isn't on the obstack, but I don't see why that would make a 
difference.

If we want to have a data structure with the same kind of performance as 
htab_t but with type-safety in the future, is your vision that we'll 
have to implement it ourselves?  Should we make a wrapper around htab_t?
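
Something along these lines, perhaps (a hypothetical sketch, nothing I
have actually tried to build):

  #include "hashtab.h"		/* libiberty's htab_t.  */
  #include "libiberty.h"	/* xcalloc / xfree.  */

  /* Hypothetical typed wrapper over libiberty's htab_t.  */
  template<typename T>
  class typed_htab
  {
  public:
    typed_htab (size_t initial_size, htab_hash hash_fn, htab_eq eq_fn)
      : m_table (htab_create_alloc (initial_size, hash_fn, eq_fn,
				    nullptr, xcalloc, xfree))
    {}

    ~typed_htab () { htab_delete (m_table); }

    /* Return the element matching KEY/HASH, or nullptr.  */
    T *find (const T *key, hashval_t hash) const
    { return static_cast<T *> (htab_find_with_hash (m_table, key, hash)); }

    /* Insert ELEM, replacing any existing equal element.  */
    void insert (T *elem, hashval_t hash)
    {
      void **slot = htab_find_slot_with_hash (m_table, elem, hash, INSERT);
      *slot = elem;
    }

  private:
    htab_t m_table;
  };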

Simon
  
Yao Qi Jan. 31, 2018, 11:55 a.m. UTC | #5
Simon Marchi <simark@simark.ca> writes:

> Ah, indeed.  I thought that unordered_map would be implemented the
> same way as htab_t, but I see that's not the case.  In some quick
> tests on a big binary, it increases the time to read the symbols from
> an average of 37 seconds to an average of 42 seconds.
>
> I understand the different hash table implementation having an impact,
> but I don't really understand how the allocation scheme can have a
> meaningful impact.  The partial_die_info objects are still allocated
> on the obstack, aren't they?  So it's just the space for the table
> itself that isn't on the obstack, but I don't see why that would make
> a difference.

Hi Simon,
We have some perf test cases in gdb.perf, but they may not cover the
path we are discussing here.  If you want to run them, do these things:

$ cd gdb/testsuite
$ make -j10 build-perf RUNTESTFLAGS="MONSTER=y gmonster1.exp"

// This takes a while, as it generates many executable files.

$ make  check-perf RUNTESTFLAGS="MONSTER=y gmonster1-null-lookup.exp gmonster1-print-cerr.exp gmonster1-runto-main.exp gmonster1-pervasive-typedef.exp gmonster1-ptype-string.exp gmonster1-select-file.exp"  GDB_PERFTEST_MODE=run

// It takes one hour on my aarch64-linux box.

You can find the performance numbers in perftest.sum.

I ran these perf tests, with and without Simon's patch (htab_t ->
std::unordered_map), on aarch64-linux, and I don't see any change in
speed or space.  Again, these existing test cases may not cover the path.

gmonster1:gmonster-null-lookup cpu_time 10-cus 0.0008508 0.0013708
gmonster1:gmonster-null-lookup cpu_time 100-cus 0.0047922 0.0030584
gmonster1:gmonster-null-lookup cpu_time 1000-cus 0.0400274 0.0397188
gmonster1:gmonster-null-lookup cpu_time 10000-cus 0.3885292 0.3862456
gmonster1:gmonster-null-lookup wall_time 10-cus 0.000862598419189 0.00137987136841
gmonster1:gmonster-null-lookup wall_time 100-cus 0.00480117797852 0.00306539535522
gmonster1:gmonster-null-lookup wall_time 1000-cus 0.0400432109833 0.0397333621979
gmonster1:gmonster-null-lookup wall_time 10000-cus 0.388552331924 0.386282110214
gmonster1:gmonster-null-lookup vmsize 10-cus 35900 35896
gmonster1:gmonster-null-lookup vmsize 100-cus 64992 64972
gmonster1:gmonster-null-lookup vmsize 1000-cus 364748 364760
gmonster1:gmonster-null-lookup vmsize 10000-cus 3313284 3313360
gmonster1:gmonster-pervasive-typedef cpu_time 10-cus 0.0682848 0.0737696
gmonster1:gmonster-pervasive-typedef cpu_time 100-cus 0.5843324 0.6391266
gmonster1:gmonster-pervasive-typedef cpu_time 1000-cus 6.061932 6.621443
gmonster1:gmonster-pervasive-typedef cpu_time 10000-cus 62.6619226 67.1514486
gmonster1:gmonster-pervasive-typedef wall_time 10-cus 0.0683585643768 0.0737894058228
gmonster1:gmonster-pervasive-typedef wall_time 100-cus 0.585014867783 0.641395568848
gmonster1:gmonster-pervasive-typedef wall_time 1000-cus 6.07634234428 6.62292981148
gmonster1:gmonster-pervasive-typedef wall_time 10000-cus 62.6738821507 67.2341232777
gmonster1:gmonster-pervasive-typedef vmsize 10-cus 32381 32368
gmonster1:gmonster-pervasive-typedef vmsize 100-cus 41131 41084
gmonster1:gmonster-pervasive-typedef vmsize 1000-cus 129726 129553
gmonster1:gmonster-pervasive-typedef vmsize 10000-cus 1007898 1007878
gmonster1:gmonster-print-cerr cpu_time 10-cus 0.0011148 0.0011168
gmonster1:gmonster-print-cerr cpu_time 100-cus 0.0056498 0.0056002
gmonster1:gmonster-print-cerr cpu_time 1000-cus 0.0502508 0.0948982
gmonster1:gmonster-print-cerr cpu_time 10000-cus 0.4922956 0.4948586
gmonster1:gmonster-print-cerr wall_time 10-cus 0.00112357139587 0.00112380981445
gmonster1:gmonster-print-cerr wall_time 100-cus 0.00565934181213 0.00561075210571
gmonster1:gmonster-print-cerr wall_time 1000-cus 0.0502710342407 0.0949506282806
gmonster1:gmonster-print-cerr wall_time 10000-cus 0.492320823669 0.494928407669
gmonster1:gmonster-print-cerr vmsize 10-cus 32508 32500
gmonster1:gmonster-print-cerr vmsize 100-cus 41308 41300
gmonster1:gmonster-print-cerr vmsize 1000-cus 128944 128948
gmonster1:gmonster-print-cerr vmsize 10000-cus 993324 993316
gmonster1:gmonster-runto-main cpu_time 10-cus 0.0360472 0.0231514
gmonster1:gmonster-runto-main cpu_time 100-cus 0.5829484 0.5236148
gmonster1:gmonster-runto-main cpu_time 1000-cus 9.146062 7.9914552
gmonster1:gmonster-runto-main cpu_time 10000-cus 134.266728 134.1361918
gmonster1:gmonster-runto-main wall_time 10-cus 0.042032957077 0.0313341617584
gmonster1:gmonster-runto-main wall_time 100-cus 0.588344860077 0.530299949646
gmonster1:gmonster-runto-main wall_time 1000-cus 9.15567464828 7.99887242317
gmonster1:gmonster-runto-main wall_time 10000-cus 134.285585785 134.15385747
gmonster1:gmonster-runto-main vmsize 10-cus 32503 32496
gmonster1:gmonster-runto-main vmsize 100-cus 41308 41296
gmonster1:gmonster-runto-main vmsize 1000-cus 128960 128952
gmonster1:gmonster-runto-main vmsize 10000-cus 993336 993296
gmonster1:gmonster-select-file cpu_time 10-cus 0.0972798 0.1028992
gmonster1:gmonster-select-file cpu_time 100-cus 0.886951 0.9285964
gmonster1:gmonster-select-file cpu_time 1000-cus 9.166904 9.7175802
gmonster1:gmonster-select-file cpu_time 10000-cus 92.5872488 95.2634352
gmonster1:gmonster-select-file wall_time 10-cus 0.097380399704 0.103002214432
gmonster1:gmonster-select-file wall_time 100-cus 0.887106800079 0.929784965515
gmonster1:gmonster-select-file wall_time 1000-cus 9.17858901024 9.72046675682
gmonster1:gmonster-select-file wall_time 10000-cus 92.6326120853 95.3189713955
gmonster1:gmonster-select-file vmsize 10-cus 32367 32384
gmonster1:gmonster-select-file vmsize 100-cus 41064 41056
gmonster1:gmonster-select-file vmsize 1000-cus 128694 128703
gmonster1:gmonster-select-file vmsize 10000-cus 993052 993048
  
Pedro Alves Jan. 31, 2018, 3:32 p.m. UTC | #6
On 01/31/2018 03:46 AM, Simon Marchi wrote:
> On 2018-01-30 06:39, Pedro Alves wrote:
>> On 01/29/2018 01:15 AM, Simon Marchi wrote:
>>
>>> From what I understand, the only reason to have that private constructor is
>>> to construct a temporary partial_die_info object used to search in the htab,
>>> is that right?  If so, converting that htab_t to an std::unordered_map would
>>> remove the need for all this, since you don't need to construct an object
>>> to search it.  See the diff below that applies on top of this patch.
>>>
>>> It's not thoroughly tested and I am not sure of the validity of the
>>> per_cu->cu->partial_dies.empty () call in find_partial_die, but I think it
>>> should work.  Plus, it adds some type-safety, which I am a big fan of.
>>>
>>> But otherwise, the patch is fine with me.
>>
>> Careful here.  This could do with some benchmarking.  The DWARF reading code
>> is performance (both timing and memory) sensitive.  This is trading an open
>> addressing hash table (htab_t) for a node-based closed addressing hash table.
>> The keys/elements in the map are small, so I'd expect this to make
>> a difference.  Also, this is trading an in-principle cache-friendly
>> obstack allocation scheme for the standard new allocator.
> 
> Ah, indeed.  I thought that unordered_map would be implemented the same way as htab_t, but I see that's not the case.  In some quick tests on a big binary, it increases the time to read the symbols from an average of 37 seconds to an average of 42 seconds.
> 
> I understand the different hash table implementation having an impact, but I don't really understand how the allocation scheme can have a meaningful impact.  The partial_die_info objects are still allocated on the obstack, aren't they?  So it's just the space for the table itself that isn't on the obstack, but I don't see why that would make a difference.

Well, that's the thing.  unordered_map is going to reach into the heap to
allocate the buckets and nodes.  And the heap is slow.  And there's a
per-element memory cost too, which further reduces cache locality.

unordered_map is really inefficient for this use case.  We have a hash map
of small objects that is filled in once, and after the initial fill, we don't
do random inserts/removes of elements from this map.  I think.
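
The locality difference shows up right in the lookup loop.  A sketch,
with linear probing for simplicity (htab_t actually uses a secondary
hash, but the locality argument is the same): open addressing probes a
flat array, so successive probes stay close in memory, while a
node-based table chases one heap pointer per entry, each a potential
cache miss.

  /* Open addressing: probe a contiguous array of slots.  */
  partial_die_info *
  oa_find (partial_die_info **slots, size_t n_slots,
	   hashval_t hash, sect_offset key)
  {
    size_t index = hash % n_slots;
    while (slots[index] != nullptr)
      {
	if (slots[index]->sect_off == key)
	  return slots[index];
	index = (index + 1) % n_slots;	/* Next slot over, likely cached.  */
      }
    return nullptr;
  }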

> 
> If we want to have a data structure with the same kind of performance as htab_t but with type-safety in the future, is your vision that we'll have to implement it ourselves?  Should we make a wrapper around htab_t?

Well, my vision is to investigate alternatives.  :-)  There are many options
out there.  If you google around for hash table benchmarks, you'll find
several.  E.g.: <https://tessil.github.io/2016/08/29/benchmark-hopscotch-map.html>.

So there are good alternatives out there that we could import and use.
AFAIK, libiberty's htab_t is actually quite good, and GCC has a C++-ified
version of that.  See gcc/hash-table.h, gcc/hash-map.h, gcc/hash-set.h:

 /* This file implements a typed hash table.
    The implementation borrows from libiberty's htab_t in hashtab.h.

So sharing that code with GCC may make sense, as it's already in
"the family".

One thing I dislike about GCC's hash containers above is that
their API isn't modeled on the standard's, which can lead to
confusion if you're used to the standard containers, or if you're
experimenting/benchmarking different containers for some
use case.  You have things like:

  /* This function clears all entries in this hash table.  */
   void empty () { if (elements ()) empty_slow (); }

while in the std containers, empty() is a predicate...
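
That is, the same expression means opposite things:

  std_map.empty ();	/* std: predicate, "is this map empty?".  */
  gcc_map.empty ();	/* GCC: mutator, makes the map empty.  */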

It may be that GCC's hash/map/set could do with C++11-ification too.
Really, I haven't investigated much; I'm just aware of their existence.

(A while ago I found reading this discussion quite educational:
  <https://bugs.ruby-lang.org/issues/12142>)

Thanks,
Pedro Alves
  

Patch

diff --git a/gdb/dwarf2read.c b/gdb/dwarf2read.c
index 8996f49bae..330ca87a1d 100644
--- a/gdb/dwarf2read.c
+++ b/gdb/dwarf2read.c
@@ -652,6 +652,8 @@  struct delayed_method_info
   struct die_info *die;
 };

+struct partial_die_info;
+
 /* Internal state when decoding a particular compilation unit.  */
 struct dwarf2_cu
 {
@@ -686,9 +688,9 @@  struct dwarf2_cu
      distinguish these in buildsym.c.  */
   struct pending **list_in_scope = nullptr;

-  /* Hash table holding all the loaded partial DIEs
-     with partial_die->offset.SECT_OFF as hash.  */
-  htab_t partial_dies = nullptr;
+  /* Hash map holding all the loaded partial DIEs
+     with their section offset as the key.  */
+  std::unordered_map<sect_offset, partial_die_info *> partial_dies;

   /* Storage for things with the same lifetime as this read-in compilation
      unit, including partial DIEs.  */
@@ -1493,36 +1495,6 @@  struct partial_die_info
     struct partial_die_info *die_parent = nullptr;
     struct partial_die_info *die_child = nullptr;
     struct partial_die_info *die_sibling = nullptr;
-
-    friend struct partial_die_info *
-    dwarf2_cu::find_partial_die (sect_offset sect_off);
-
-  private:
-    /* Only need to do look up in dwarf2_cu::find_partial_die.  */
-    partial_die_info (sect_offset sect_off)
-      : partial_die_info (sect_off, DW_TAG_padding, 0)
-    {
-    }
-
-    partial_die_info (sect_offset sect_off_, enum dwarf_tag tag_,
-		      int has_children_)
-      : sect_off (sect_off_), tag (tag_), has_children (has_children_)
-    {
-      is_external = 0;
-      is_declaration = 0;
-      has_type = 0;
-      has_specification = 0;
-      has_pc_info = 0;
-      may_be_inlined = 0;
-      main_subprogram = 0;
-      scope_set = 0;
-      has_byte_size = 0;
-      has_const_value = 0;
-      has_template_arguments = 0;
-      fixup_called = 0;
-      is_dwz = 0;
-      spec_is_dwz = 0;
-    }
   };

 /* This data structure holds the information of an abbrev.  */
@@ -2180,10 +2152,6 @@  static const gdb_byte *skip_one_die (const struct die_reader_specs *reader,
 				     const gdb_byte *info_ptr,
 				     struct abbrev_info *abbrev);

-static hashval_t partial_die_hash (const void *item);
-
-static int partial_die_eq (const void *item_lhs, const void *item_rhs);
-
 static struct dwarf2_per_cu_data *dwarf2_find_containing_comp_unit
   (sect_offset sect_off, unsigned int offset_in_dwz,
    struct dwarf2_per_objfile *dwarf2_per_objfile);
@@ -18259,14 +18227,7 @@  load_partial_dies (const struct die_reader_specs *reader,
   if (cu->per_cu->load_all_dies)
     load_all = 1;

-  cu->partial_dies
-    = htab_create_alloc_ex (cu->header.length / 12,
-			    partial_die_hash,
-			    partial_die_eq,
-			    NULL,
-			    &cu->comp_unit_obstack,
-			    hashtab_obstack_allocate,
-			    dummy_obstack_deallocate);
+  cu->partial_dies.clear ();

   while (1)
     {
@@ -18459,14 +18420,7 @@  load_partial_dies (const struct die_reader_specs *reader,
 	  || abbrev->tag == DW_TAG_variable
 	  || abbrev->tag == DW_TAG_namespace
 	  || part_die->is_declaration)
-	{
-	  void **slot;
-
-	  slot = htab_find_slot_with_hash (cu->partial_dies, part_die,
-					   to_underlying (part_die->sect_off),
-					   INSERT);
-	  *slot = part_die;
-	}
+	cu->partial_dies[part_die->sect_off] = part_die;

       /* For some DIEs we want to follow their children (if any).  For C
 	 we have no reason to follow the children of structures; for other
@@ -18512,8 +18466,22 @@  load_partial_dies (const struct die_reader_specs *reader,

 partial_die_info::partial_die_info (sect_offset sect_off_,
 				    struct abbrev_info *abbrev)
-  : partial_die_info (sect_off_, abbrev->tag, abbrev->has_children)
-{
+  : sect_off (sect_off_), tag (abbrev->tag), has_children (abbrev->has_children)
+{
+  is_external = 0;
+  is_declaration = 0;
+  has_type = 0;
+  has_specification = 0;
+  has_pc_info = 0;
+  may_be_inlined = 0;
+  main_subprogram = 0;
+  scope_set = 0;
+  has_byte_size = 0;
+  has_const_value = 0;
+  has_template_arguments = 0;
+  fixup_called = 0;
+  is_dwz = 0;
+  spec_is_dwz = 0;
 }

 /* Read a minimal amount of information into the minimal die structure.  */
@@ -18735,14 +18703,9 @@  read_partial_die (const struct die_reader_specs *reader,
 struct partial_die_info *
 dwarf2_cu::find_partial_die (sect_offset sect_off)
 {
-  struct partial_die_info *lookup_die = NULL;
-  struct partial_die_info part_die (sect_off);
-
-  lookup_die = ((struct partial_die_info *)
-		htab_find_with_hash (partial_dies, &part_die,
-				     to_underlying (sect_off)));
+  auto it = partial_dies.find (sect_off);

-  return lookup_die;
+  return it != partial_dies.end () ? it->second : nullptr;
 }

 /* Find a partial DIE at OFFSET, which may or may not be in CU,
@@ -18782,7 +18745,7 @@  find_partial_die (sect_offset sect_off, int offset_in_dwz, struct dwarf2_cu *cu)
       per_cu = dwarf2_find_containing_comp_unit (sect_off, offset_in_dwz,
 						 dwarf2_per_objfile);

-      if (per_cu->cu == NULL || per_cu->cu->partial_dies == NULL)
+      if (per_cu->cu == NULL || per_cu->cu->partial_dies.empty ())
 	load_partial_comp_unit (per_cu);

       per_cu->cu->last_used = 0;
@@ -25483,32 +25446,6 @@  dwarf2_clear_marks (struct dwarf2_per_cu_data *per_cu)
     }
 }

-/* Trivial hash function for partial_die_info: the hash value of a DIE
-   is its offset in .debug_info for this objfile.  */
-
-static hashval_t
-partial_die_hash (const void *item)
-{
-  const struct partial_die_info *part_die
-    = (const struct partial_die_info *) item;
-
-  return to_underlying (part_die->sect_off);
-}
-
-/* Trivial comparison function for partial_die_info structures: two DIEs
-   are equal if they have the same offset.  */
-
-static int
-partial_die_eq (const void *item_lhs, const void *item_rhs)
-{
-  const struct partial_die_info *part_die_lhs
-    = (const struct partial_die_info *) item_lhs;
-  const struct partial_die_info *part_die_rhs
-    = (const struct partial_die_info *) item_rhs;
-
-  return part_die_lhs->sect_off == part_die_rhs->sect_off;
-}
-
 static struct cmd_list_element *set_dwarf_cmdlist;
 static struct cmd_list_element *show_dwarf_cmdlist;