[v6,02/11] OpenMP/OpenACC: Rework clause expansion and nested struct handling

Message ID 2f053e71203451848eb43006e1c4891d0353579c.1671796515.git.julian@codesourcery.com
Series OpenMP: C/C++ lvalue parsing, C/C++/Fortran "declare mapper" support

Commit Message

Julian Brown Dec. 23, 2022, 12:12 p.m. UTC
  This patch is an extension and rewrite/rethink of the following two patches:

  "OpenMP/OpenACC: Add inspector class to unify mapped address analysis"
  https://gcc.gnu.org/pipermail/gcc-patches/2022-March/591977.html

  "OpenMP: Handle reference-typed struct members"
  https://gcc.gnu.org/pipermail/gcc-patches/2022-March/591978.html

The latter was reviewed here by Jakub:

  https://gcc.gnu.org/pipermail/gcc-patches/2022-May/595510.html

with the comment,

> Why isn't a reference to pointer handled that way too?

and that opened a whole can of worms... generally, C++ references were
not handled very consistently after the clause-processing code had been
extended several times already for both OpenACC and OpenMP, and many
cases of using C++ (and Fortran) references were broken.  Even some
cases not involving references were being mapped incorrectly.

At present a single clause may be turned into several mapping nodes,
or have its mapping type changed, in several places scattered through
the front- and middle-end.  The analysis relating to which particular
transformations are needed for some given expression has become quite hard
to follow.  Briefly, we manipulate clause types in the following places:

 1. During parsing, in c_omp_adjust_map_clauses.  Depending on a set of
    rules, we may change a FIRSTPRIVATE_POINTER (etc.) mapping into
    ATTACH_DETACH, or mark the decl addressable.

 2. In semantics.cc or c-typeck.cc, clauses are expanded in
    handle_omp_array_sections (called via {c_}finish_omp_clauses), or in
    finish_omp_clauses itself.  The former case handles array sections,
    the latter non-array-section clauses.

 3. In gimplify.cc, we build sibling lists for struct accesses, grouping
    and sorting accesses along with their struct base and creating new
    ALLOC/RELEASE nodes for pointers.

 4. In gimplify.cc:gimplify_adjust_omp_clauses, mapping nodes may be
    adjusted or created.

This patch doesn't completely disrupt this scheme, though clause
types are no longer adjusted in c_omp_adjust_map_clauses (step 1).
Clause expansion in step 2 (for C and C++) now uses a single, unified
mechanism, parts of which are also reused for analysis in step 3.

Rather than the somewhat ad-hoc pattern matching on addresses currently
used to expand clauses, a new method for analysing addresses is
introduced.  This does a recursive-descent tree walk on expression nodes,
and emits a vector of tokens describing each "part" of the address.
This tokenized address can then be translated directly into mapping nodes,
with the assurance that no part of the expression has been inadvertently
skipped or misinterpreted.  In this way, all the variations of ways
pointers, arrays, references and component accesses might be combined
can be teased apart into easily-understood cases - and we know we've
"parsed" the whole address before we start analysis, so the right code
paths can easily be selected.

For example, a simple access "arr[idx]" might parse as:

  base-decl access-indexed-array

or "mystruct->foo[x]" with a pointer "foo" component might parse as:

  base-decl access-pointer component-selector access-pointer

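In C/C++ source terms, those two cases could arise from something like
the following (a minimal sketch; the declarations here are invented
purely for illustration):

  int arr[100];
  struct S { int *foo; } *mystruct;

  void f (int idx, int x)
  {
    /* "arr[idx]" tokenizes roughly as: base-decl access-indexed-array.
       "mystruct->foo[x]" tokenizes roughly as:
       base-decl access-pointer component-selector access-pointer.  */
    #pragma omp target map(tofrom: arr[idx]) map(tofrom: mystruct->foo[x])
      arr[idx] = mystruct->foo[x];
  }
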
A key observation is that "array" bases, i.e. accesses whose root nodes
are not structures but describe scalars or arrays, and also *one-level
deep* structure accesses, have first-class support in gimplify and
beyond.  Expressions that use deeper struct accesses
or e.g. multiple indirections were more problematic: some cases worked,
but lots of cases didn't.  This patch reimplements the support for those
in gimplify.cc, again using the new "address tokenization" support.

For an expression like "mystruct->foo->bar[0:10]" used in a mapping node,
the rightmost access is translated directly in the front-end.  The base for
the access will be "mystruct->foo".  This is handled recursively in
gimplify.cc -- there may be several accesses of "mystruct"'s members
on the same directive, so the sibling-list building machinery can be
used again.  (This was already being done for OpenACC, but the new
implementation differs somewhat in details, and is more robust.)

For OpenMP, in the case where the base pointer itself,
i.e. "mystruct->foo" here, is NOT mapped on the same directive, we
create a "fragile" mapping.  This turns the "foo" component access
into a zero-length allocation (which is a new feature for the runtime,
so support has been added there too).

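For example (a minimal sketch; the type declarations are invented, and
only the "mystruct->foo->bar[0:10]" expression comes from the description
above):

  struct inner { int *bar; };
  struct outer { struct inner *foo; };
  struct outer *mystruct;

  void g (void)
  {
    /* The base pointer "mystruct->foo" is not mapped on this directive,
       so the "foo" component access becomes a "fragile" zero-length
       allocation rather than a full mapping of the pointed-to
       structure.  */
    #pragma omp target map(tofrom: mystruct->foo->bar[0:10])
      mystruct->foo->bar[0] = 1;
  }
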
A couple of changes have been made to how mapping clauses are turned
into mapping nodes:

The first change is based on the observation that it is probably never
correct to use GOMP_MAP_ALWAYS_POINTER for component accesses (e.g. for
references), because if the containing struct is already mapped on the
target then the host version of the pointer in question will be corrupted
if the struct is copied back from the target.  This patch removes all
such uses, across each of C, C++ and Fortran.

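As a sketch of the failure mode (the struct and mappings below are
invented for illustration, not taken from the testsuite):

  struct S { int *ptr; } s;

  void h (void)
  {
    /* If the "s.ptr" access below were expanded using
       GOMP_MAP_ALWAYS_POINTER, the device copy of "s.ptr" would be
       rewritten to a device address, and copying "s" back at the end of
       the region would clobber the host's "s.ptr" with that device
       address.  Expanding it as GOMP_MAP_ATTACH_DETACH instead only
       adjusts the device copy.  */
    #pragma omp target map(tofrom: s) map(tofrom: s.ptr[0:10])
      s.ptr[0]++;
  }
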
The second change is to the way that GOMP_MAP_ATTACH_DETACH nodes
are processed during sibling-list creation.  For OpenMP, for pointer
components, we must map the base pointer separately from an array section
that uses the base pointer, so e.g. we must have both "map(mystruct.base)"
and "map(mystruct.base[0:10])" mappings.  These create nodes such as:

  GOMP_MAP_TOFROM mystruct.base
  GOMP_MAP_TOFROM *mystruct.base [len: 10*elemsize]
  GOMP_MAP_ATTACH_DETACH mystruct.base

Instead of using the first of these directly when building the struct
sibling list, then skipping the group containing GOMP_MAP_ATTACH_DETACH,
leading to:

  GOMP_MAP_STRUCT mystruct [len: 1]
  GOMP_MAP_TOFROM mystruct.base

we now introduce a new "mini-pass", omp_resolve_clause_dependencies, that
drops the GOMP_MAP_TOFROM for the base pointer, marks the second group
as having had a base-pointer mapping, then omp_build_struct_sibling_lists
can create:

  GOMP_MAP_STRUCT mystruct [len: 1]
  GOMP_MAP_ALLOC mystruct.base [len: ptrsize]

This ends up working better in many cases, particularly those involving
references.  (The "alloc" space is immediately overwritten by a pointer
attachment, so at runtime this is also mildly more efficient than a
redundant TO mapping.)

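In source terms, the case above corresponds to something like this (a
minimal sketch; the struct definition is invented for illustration):

  struct S { int *base; } mystruct;

  void k (void)
  {
    /* Both the base pointer and an array section using it are mapped.
       The new omp_resolve_clause_dependencies mini-pass drops the
       explicit TOFROM node for "mystruct.base", and the sibling-list
       machinery emits the ALLOC node shown above, which the pointer
       attachment then overwrites.  */
    #pragma omp target map(tofrom: mystruct.base) \
                       map(tofrom: mystruct.base[0:10])
      mystruct.base[0]++;
  }
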
There is support in the address tokenizer for "arbitrary" base expressions
which aren't rooted at a decl, but that is not used at present because
such addresses are disallowed at parse time.

In the front-ends, the address tokenization machinery is mostly only
used for clause expansion and not for diagnostics at present.  It could
be used for those too, which would allow more of my previous "address
inspector" implementation to be removed.

The new bits in gimplify.cc work with OpenACC also.

This version of the patch has been moved ahead of the Fortran
pointer-mapping changes in this series, so temporary XFAILs have been
added to tests as needed (to be removed again by the following patches).

2022-09-19  Julian Brown  <julian@codesourcery.com>

gcc/c-family/
	* c-common.h (omp_addr_token): Add forward declaration.
	(c_omp_address_inspector): New class.
	* c-omp.cc (c_omp_adjust_map_clauses): Mark decls addressable here, but
	do not change any mapping node types.
	(c_omp_address_inspector::unconverted_ref_origin,
	c_omp_address_inspector::component_access_p,
	c_omp_address_inspector::check_clause,
	c_omp_address_inspector::get_root_term,
	c_omp_address_inspector::map_supported_p,
	c_omp_address_inspector::get_origin,
	c_omp_address_inspector::maybe_unconvert_ref,
	c_omp_address_inspector::maybe_zero_length_array_section,
	c_omp_address_inspector::expand_array_base,
	c_omp_address_inspector::expand_component_selector,
	c_omp_address_inspector::expand_map_clause): New methods.
	(omp_expand_access_chain): New function.

gcc/c/
	* c-typeck.cc (handle_omp_array_sections_1,
	handle_omp_array_sections, c_finish_omp_clauses): Use
	c_omp_address_inspector class and OMP address tokenizer to analyze and
	expand map clause expressions.  Fix some diagnostics.

gcc/cp/
	* semantics.cc (cp_omp_address_inspector): New class, derived from
	c_omp_address_inspector.
	(handle_omp_array_sections_1, handle_omp_array_sections,
	finish_omp_clauses): Use cp_omp_address_inspector class and OMP address
	tokenizer to analyze and expand OpenMP map clause expressions.  Fix
	some diagnostics.

gcc/fortran/
	* trans-openmp.cc (gfc_trans_omp_array_section): Add OPENMP parameter.
	Use GOMP_MAP_ATTACH_DETACH instead of GOMP_MAP_ALWAYS_POINTER for
	derived type components.
	(gfc_trans_omp_clauses): Update calls to gfc_trans_omp_array_section.

gcc/
	* gimplify.cc (build_struct_comp_nodes): Don't process
	GOMP_MAP_ATTACH_DETACH "middle" nodes here.
	(omp_mapping_group): Add REPROCESS_STRUCT and FRAGILE booleans for
	nested struct handling.
	(omp_strip_components_and_deref, omp_strip_indirections): Remove
	functions.
	(omp_gather_mapping_groups_1): Initialise reprocess_struct and fragile
	fields.
	(omp_group_base): Handle GOMP_MAP_ATTACH_DETACH after GOMP_MAP_STRUCT.
	(omp_index_mapping_groups_1): Skip reprocess_struct groups.
	(omp_get_nonfirstprivate_group, omp_directive_maps_explicitly,
	omp_resolve_clause_dependencies, omp_expand_access_chain): New
	functions.
	(omp_accumulate_sibling_list): Add GROUP_MAP, ADDR_TOKENS, FRAGILE_P,
	REPROCESSING_STRUCT, ADDED_TAIL parameters.  Use OMP address tokenizer
	to analyze addresses.  Reimplement nested struct handling, and
	implement "fragile groups".
	(omp_build_struct_sibling_lists): Adjust for changes to
	omp_accumulate_sibling_list.  Recalculate bias for ATTACH_DETACH nodes
	after GOMP_MAP_STRUCT nodes.
	(gimplify_scan_omp_clauses): Call omp_resolve_clause_dependencies.  Use
	OMP address tokenizer.
	(gimplify_adjust_omp_clauses_1): Use build_fold_indirect_ref_loc
	instead of build_simple_mem_ref_loc.
	* omp-general.cc (omp-general.h, tree-pretty-print.h): Include.
	(omp_addr_tokenizer): New namespace.
	(omp_addr_tokenizer::omp_addr_token): New.
	(omp_addr_tokenizer::omp_parse_component_selector,
	omp_addr_tokenizer::omp_parse_ref,
	omp_addr_tokenizer::omp_parse_pointer,
	omp_addr_tokenizer::omp_parse_access_method,
	omp_addr_tokenizer::omp_parse_access_methods,
	omp_addr_tokenizer::omp_parse_structure_base,
	omp_addr_tokenizer::omp_parse_structured_expr,
	omp_addr_tokenizer::omp_parse_array_expr,
	omp_addr_tokenizer::omp_access_chain_p,
	omp_addr_tokenizer::omp_accessed_addr): New functions.
	(omp_parse_expr, debug_omp_tokenized_addr): New functions.
	* omp-general.h (omp_addr_tokenizer::access_method_kinds,
	omp_addr_tokenizer::structure_base_kinds,
	omp_addr_tokenizer::token_type,
	omp_addr_tokenizer::omp_addr_token,
	omp_addr_tokenizer::omp_access_chain_p,
	omp_addr_tokenizer::omp_accessed_addr): New.
	(omp_addr_token, omp_parse_expr): New.
	* omp-low.cc (scan_sharing_clauses): Skip error check for references
	to pointers.
	* tree.h (OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED): New macro.

gcc/testsuite/
	* c-c++-common/gomp/clauses-2.c: Fix error output.
	* c-c++-common/gomp/target-implicit-map-2.c: Adjust scan output.
	* c-c++-common/gomp/target-50.c: Adjust scan output.
	* g++.dg/gomp/static-component-1.C: New test.
	* gcc.dg/gomp/target-3.c: Adjust scan output.

libgomp/
	* target.c (gomp_map_fields_existing): Use gomp_map_0len_lookup.
	(gomp_attach_pointer): Allow attaching null pointers (or Fortran
	"unassociated" pointers).
	(gomp_map_vars_internal): Handle zero-sized struct members.  Add
	diagnostic for unmapped struct pointer members.
	* testsuite/libgomp.c++/class-array-1.C: New test.
	* testsuite/libgomp.c-c++-common/baseptrs-1.c: New test.
	* testsuite/libgomp.c-c++-common/baseptrs-2.c: New test.
	* testsuite/libgomp.c++/baseptrs-3.C: New test.
	* testsuite/libgomp.c++/baseptrs-4.C: New test.
	* testsuite/libgomp.c++/baseptrs-5.C: New test.
	* testsuite/libgomp.c++/target-48.C: New test.
	* testsuite/libgomp.c++/target-49.C: New test.
	* testsuite/libgomp.c/target-22.c: Add necessary explicit base pointer
	mappings.
	* testsuite/libgomp.fortran/struct-elem-map-1.f90: Add XFAIL.
---
 gcc/c-family/c-common.h                       |   70 +
 gcc/c-family/c-omp.cc                         |  766 +++-
 gcc/c/c-typeck.cc                             |  363 +-
 gcc/cp/semantics.cc                           |  585 ++-
 gcc/fortran/trans-openmp.cc                   |   34 +-
 gcc/gimplify.cc                               | 1033 +++++-
 gcc/omp-general.cc                            |  425 +++
 gcc/omp-general.h                             |   69 +
 gcc/omp-low.cc                                |    7 +-
 gcc/testsuite/c-c++-common/gomp/clauses-2.c   |    2 +-
 gcc/testsuite/c-c++-common/gomp/target-50.c   |    2 +-
 .../c-c++-common/gomp/target-implicit-map-2.c |    2 +-
 .../g++.dg/gomp/static-component-1.C          |   23 +
 gcc/testsuite/gcc.dg/gomp/target-3.c          |    2 +-
 gcc/tree.h                                    |    4 +
 libgomp/target.c                              |   31 +-
 libgomp/testsuite/libgomp.c++/baseptrs-3.C    |  275 ++
 libgomp/testsuite/libgomp.c++/baseptrs-4.C    | 3154 +++++++++++++++++
 libgomp/testsuite/libgomp.c++/baseptrs-5.C    |   62 +
 libgomp/testsuite/libgomp.c++/class-array-1.C |   59 +
 libgomp/testsuite/libgomp.c++/target-48.C     |   32 +
 libgomp/testsuite/libgomp.c++/target-49.C     |   37 +
 .../libgomp.c-c++-common/baseptrs-1.c         |   50 +
 .../libgomp.c-c++-common/baseptrs-2.c         |   70 +
 libgomp/testsuite/libgomp.c/target-22.c       |    3 +-
 .../libgomp.fortran/struct-elem-map-1.f90     |    3 +
 26 files changed, 6425 insertions(+), 738 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/gomp/static-component-1.C
 create mode 100644 libgomp/testsuite/libgomp.c++/baseptrs-3.C
 create mode 100644 libgomp/testsuite/libgomp.c++/baseptrs-4.C
 create mode 100644 libgomp/testsuite/libgomp.c++/baseptrs-5.C
 create mode 100644 libgomp/testsuite/libgomp.c++/class-array-1.C
 create mode 100644 libgomp/testsuite/libgomp.c++/target-48.C
 create mode 100644 libgomp/testsuite/libgomp.c++/target-49.C
 create mode 100644 libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c
 create mode 100644 libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c
  

Patch

diff --git a/gcc/c-family/c-common.h b/gcc/c-family/c-common.h
index f9d0d2945a56..d935d4b3d7d9 100644
--- a/gcc/c-family/c-common.h
+++ b/gcc/c-family/c-common.h
@@ -1281,6 +1281,76 @@  extern tree c_omp_check_context_selector (location_t, tree);
 extern void c_omp_mark_declare_variant (location_t, tree, tree);
 extern void c_omp_adjust_map_clauses (tree, bool);
 
+namespace omp_addr_tokenizer { struct omp_addr_token; }
+typedef omp_addr_tokenizer::omp_addr_token omp_addr_token;
+
+class c_omp_address_inspector
+{
+  location_t loc;
+  tree root_term;
+  bool indirections;
+  int map_supported;
+
+protected:
+  tree orig;
+
+public:
+  c_omp_address_inspector (location_t loc, tree t)
+    : loc (loc), root_term (NULL_TREE), indirections (false),
+      map_supported (-1), orig (t)
+    {
+    }
+
+  ~c_omp_address_inspector ()
+    {
+    }
+
+  virtual bool processing_template_decl_p ()
+    {
+      return false;
+    }
+
+  virtual void emit_unmappable_type_notes (tree)
+    {
+    }
+
+  virtual tree convert_from_reference (tree)
+    {
+      gcc_unreachable ();
+    }
+
+  virtual tree build_array_ref (location_t loc, tree arr, tree idx)
+    {
+      tree eltype = TREE_TYPE (TREE_TYPE (arr));
+      return build4_loc (loc, ARRAY_REF, eltype, arr, idx, NULL_TREE,
+			 NULL_TREE);
+    }
+
+  virtual bool check_clause (tree);
+  tree get_root_term (bool);
+
+  tree get_address ()
+    {
+      return orig;
+    }
+
+  tree unconverted_ref_origin ();
+  bool component_access_p ();
+
+  bool map_supported_p ();
+
+  static tree get_origin (tree);
+  static tree maybe_unconvert_ref (tree);
+
+  bool maybe_zero_length_array_section (tree);
+
+  tree expand_array_base (tree, vec<omp_addr_token *> &, tree, unsigned *,
+			  bool, bool);
+  tree expand_component_selector (tree, vec<omp_addr_token *> &, tree,
+				  unsigned *, bool);
+  tree expand_map_clause (tree, tree, vec<omp_addr_token *> &, bool);
+};
+
 enum c_omp_directive_kind {
   C_OMP_DIR_STANDALONE,
   C_OMP_DIR_CONSTRUCT,
diff --git a/gcc/c-family/c-omp.cc b/gcc/c-family/c-omp.cc
index 2ab991126963..d32c2a977304 100644
--- a/gcc/c-family/c-omp.cc
+++ b/gcc/c-family/c-omp.cc
@@ -3019,8 +3019,9 @@  struct map_clause
     decl_mapped (false), omp_declare_target (false) { }
 };
 
-/* Adjust map clauses after normal clause parsing, mainly to turn specific
-   base-pointer map cases into attach/detach and mark them addressable.  */
+/* Adjust map clauses after normal clause parsing, mainly to mark specific
+   base-pointer map cases addressable that may be turned into attach/detach
+   operations during gimplification.  */
 void
 c_omp_adjust_map_clauses (tree clauses, bool is_target)
 {
@@ -3036,7 +3037,6 @@  c_omp_adjust_map_clauses (tree clauses, bool is_target)
 	    && POINTER_TYPE_P (TREE_TYPE (OMP_CLAUSE_DECL (c))))
 	  {
 	    tree ptr = OMP_CLAUSE_DECL (c);
-	    OMP_CLAUSE_SET_MAP_KIND (c, GOMP_MAP_ATTACH_DETACH);
 	    c_common_mark_addressable_vec (ptr);
 	  }
       return;
@@ -3049,7 +3049,7 @@  c_omp_adjust_map_clauses (tree clauses, bool is_target)
 	&& DECL_P (OMP_CLAUSE_DECL (c)))
       {
 	/* If this is for a target construct, the firstprivate pointer
-	   is changed to attach/detach if either is true:
+	   is marked addressable if either is true:
 	   (1) the base-pointer is mapped in this same construct, or
 	   (2) the base-pointer is a variable place on the device by
 	       "declare target" directives.
@@ -3091,11 +3091,765 @@  c_omp_adjust_map_clauses (tree clauses, bool is_target)
 
       if (mc.firstprivate_ptr_p
 	  && (mc.decl_mapped || mc.omp_declare_target))
+	c_common_mark_addressable_vec (OMP_CLAUSE_DECL (mc.clause));
+    }
+}
+
+/* Maybe strip off an indirection from a "converted" reference, then find the
+   origin of a pointer (i.e. without any offset).  */
+
+tree
+c_omp_address_inspector::unconverted_ref_origin ()
+{
+  tree t = orig;
+
+  /* We may have a reference-typed component access at the outermost level
+     that has had convert_from_reference called on it.  Get the un-dereferenced
+     reference itself.  */
+  t = maybe_unconvert_ref (t);
+
+  /* Find base pointer for POINTER_PLUS_EXPR, etc.  */
+  t = get_origin (t);
+
+  return t;
+}
+
+/* Return TRUE if the address is a component access.  */
+
+bool
+c_omp_address_inspector::component_access_p ()
+{
+  tree t = maybe_unconvert_ref (orig);
+
+  t = get_origin (t);
+
+  return TREE_CODE (t) == COMPONENT_REF;
+}
+
+/* Perform various checks on the address, as described by clause CLAUSE (we
+   only use its code and location here).  */
+
+bool
+c_omp_address_inspector::check_clause (tree clause)
+{
+  tree t = unconverted_ref_origin ();
+
+  if (TREE_CODE (t) != COMPONENT_REF)
+    return true;
+
+  if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL
+      && DECL_BIT_FIELD (TREE_OPERAND (t, 1)))
+    {
+      error_at (OMP_CLAUSE_LOCATION (clause),
+		"bit-field %qE in %qs clause",
+		t, omp_clause_code_name[OMP_CLAUSE_CODE (clause)]);
+      return false;
+    }
+  else if (!processing_template_decl_p ()
+	   && !omp_mappable_type (TREE_TYPE (t)))
+    {
+      error_at (OMP_CLAUSE_LOCATION (clause),
+		"%qE does not have a mappable type in %qs clause",
+		t, omp_clause_code_name[OMP_CLAUSE_CODE (clause)]);
+      emit_unmappable_type_notes (TREE_TYPE (t));
+      return false;
+    }
+  else if (TREE_TYPE (t) && TYPE_ATOMIC (TREE_TYPE (t)))
+    {
+      error_at (OMP_CLAUSE_LOCATION (clause),
+		"%<_Atomic%> %qE in %qs clause", t,
+		omp_clause_code_name[OMP_CLAUSE_CODE (clause)]);
+      return false;
+    }
+
+  return true;
+}
+
+/* Find the "root term" for the address.  This is the innermost decl, etc.
+   of the access.  */
+
+tree
+c_omp_address_inspector::get_root_term (bool checking)
+{
+  if (root_term && !checking)
+    return root_term;
+
+  tree t = unconverted_ref_origin ();
+
+  while (TREE_CODE (t) == COMPONENT_REF)
+    {
+      if (checking
+	  && TREE_TYPE (TREE_OPERAND (t, 0))
+	  && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == UNION_TYPE)
 	{
-	  OMP_CLAUSE_SET_MAP_KIND (mc.clause, GOMP_MAP_ATTACH_DETACH);
-	  c_common_mark_addressable_vec (OMP_CLAUSE_DECL (mc.clause));
+	  error_at (loc, "%qE is a member of a union", t);
+	  return error_mark_node;
+	}
+      t = TREE_OPERAND (t, 0);
+      while (TREE_CODE (t) == MEM_REF
+	     || TREE_CODE (t) == INDIRECT_REF
+	     || TREE_CODE (t) == ARRAY_REF)
+	{
+	  if (TREE_CODE (t) == MEM_REF
+	      || TREE_CODE (t) == INDIRECT_REF)
+	    indirections = true;
+	  t = TREE_OPERAND (t, 0);
+	  STRIP_NOPS (t);
+	  if (TREE_CODE (t) == POINTER_PLUS_EXPR)
+	    t = TREE_OPERAND (t, 0);
 	}
     }
+
+  root_term = t;
+
+  return t;
+}
+
+/* Return TRUE if the address is supported in mapping clauses.  At present,
+   this means that the innermost expression is a DECL_P, but could be extended
+   to other types of expression in the future.  */
+
+bool
+c_omp_address_inspector::map_supported_p ()
+{
+  /* If we've already decided if the mapped address is supported, return
+     that.  */
+  if (map_supported != -1)
+    return map_supported;
+
+  tree t = unconverted_ref_origin ();
+
+  STRIP_NOPS (t);
+
+  while (TREE_CODE (t) == INDIRECT_REF
+	 || TREE_CODE (t) == MEM_REF
+	 || TREE_CODE (t) == ARRAY_REF
+	 || TREE_CODE (t) == COMPONENT_REF
+	 || TREE_CODE (t) == COMPOUND_EXPR
+	 || TREE_CODE (t) == SAVE_EXPR
+	 || TREE_CODE (t) == POINTER_PLUS_EXPR
+	 || TREE_CODE (t) == NON_LVALUE_EXPR
+	 || TREE_CODE (t) == NOP_EXPR)
+    if (TREE_CODE (t) == COMPOUND_EXPR)
+      t = TREE_OPERAND (t, 1);
+    else
+      t = TREE_OPERAND (t, 0);
+
+  STRIP_NOPS (t);
+
+  map_supported = DECL_P (t);
+
+  return map_supported;
+}
+
+/* Get the origin of an address T, stripping off offsets and some other
+   bits.  */
+
+tree
+c_omp_address_inspector::get_origin (tree t)
+{
+  while (1)
+    {
+      if (TREE_CODE (t) == COMPOUND_EXPR)
+	{
+	  t = TREE_OPERAND (t, 1);
+	  STRIP_NOPS (t);
+	}
+      else if (TREE_CODE (t) == POINTER_PLUS_EXPR
+	       || TREE_CODE (t) == SAVE_EXPR)
+	t = TREE_OPERAND (t, 0);
+      else if (TREE_CODE (t) == INDIRECT_REF
+	       && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == REFERENCE_TYPE)
+	t = TREE_OPERAND (t, 0);
+      else
+	break;
+    }
+  STRIP_NOPS (t);
+  return t;
+}
+
+/* For an address T that might be a reference that has had
+   "convert_from_reference" called on it, return the actual reference without
+   any indirection.  */
+
+tree
+c_omp_address_inspector::maybe_unconvert_ref (tree t)
+{
+  if (TREE_CODE (t) == INDIRECT_REF
+      && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == REFERENCE_TYPE)
+    return TREE_OPERAND (t, 0);
+
+  return t;
+}
+
+/* Return TRUE if CLAUSE might describe a zero-length array section.  */
+
+bool
+c_omp_address_inspector::maybe_zero_length_array_section (tree clause)
+{
+  switch (OMP_CLAUSE_MAP_KIND (clause))
+    {
+    case GOMP_MAP_ALLOC:
+    case GOMP_MAP_IF_PRESENT:
+    case GOMP_MAP_TO:
+    case GOMP_MAP_FROM:
+    case GOMP_MAP_TOFROM:
+    case GOMP_MAP_ALWAYS_TO:
+    case GOMP_MAP_ALWAYS_FROM:
+    case GOMP_MAP_ALWAYS_TOFROM:
+    case GOMP_MAP_RELEASE:
+    case GOMP_MAP_DELETE:
+    case GOMP_MAP_FORCE_TO:
+    case GOMP_MAP_FORCE_FROM:
+    case GOMP_MAP_FORCE_TOFROM:
+    case GOMP_MAP_FORCE_PRESENT:
+      return true;
+    default:
+      return false;
+    }
+}
+
+/* Expand a chained access.  We only expect to see a quite limited range of
+   expression types here, because e.g. you can't have an array of
+   references.  See also gimplify.cc:omp_expand_access_chain.  */
+
+static tree
+omp_expand_access_chain (tree c, tree expr, vec<omp_addr_token *> &addr_tokens,
+			 unsigned *idx)
+{
+  using namespace omp_addr_tokenizer;
+  location_t loc = OMP_CLAUSE_LOCATION (c);
+  unsigned i = *idx;
+  tree c2 = NULL_TREE;
+
+  switch (addr_tokens[i]->u.access_kind)
+    {
+    case ACCESS_POINTER:
+    case ACCESS_POINTER_OFFSET:
+      {
+	tree virtual_origin
+	  = fold_convert_loc (loc, ptrdiff_type_node, addr_tokens[i]->expr);
+	tree data_addr = omp_accessed_addr (addr_tokens, i, expr);
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_INDEXED_ARRAY:
+      break;
+
+    default:
+      return error_mark_node;
+    }
+
+  if (c2)
+    {
+      OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
+      OMP_CLAUSE_CHAIN (c) = c2;
+      c = c2;
+    }
+
+  *idx = ++i;
+
+  if (i < addr_tokens.length ()
+      && addr_tokens[i]->type == ACCESS_METHOD)
+    return omp_expand_access_chain (c, expr, addr_tokens, idx);
+
+  return c;
+}
+
+/* Translate "array_base_decl access_method" to OMP mapping clauses.  */
+
+tree
+c_omp_address_inspector::expand_array_base (tree c,
+					    vec<omp_addr_token *> &addr_tokens,
+					    tree expr, unsigned *idx,
+					    bool target, bool decl_p)
+{
+  using namespace omp_addr_tokenizer;
+  location_t loc = OMP_CLAUSE_LOCATION (c);
+  int i = *idx;
+  tree decl = addr_tokens[i + 1]->expr;
+  bool declare_target_p = (decl_p
+			   && is_global_var (decl)
+			   && lookup_attribute ("omp declare target",
+						DECL_ATTRIBUTES (decl)));
+  bool implicit_p = (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
+		     && OMP_CLAUSE_MAP_IMPLICIT (c));
+  bool chain_p = omp_access_chain_p (addr_tokens, i + 1);
+  tree c2 = NULL_TREE, c3 = NULL_TREE;
+  unsigned consume_tokens = 2;
+
+  gcc_assert (i == 0);
+
+  switch (addr_tokens[i + 1]->u.access_kind)
+    {
+    case ACCESS_DIRECT:
+      if (decl_p && !target)
+	c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+      break;
+
+    case ACCESS_REF:
+      {
+	/* Copy the referenced object.  */
+	tree obj = convert_from_reference (addr_tokens[i + 1]->expr);
+	OMP_CLAUSE_DECL (c) = obj;
+	OMP_CLAUSE_SIZE (c) = TYPE_SIZE_UNIT (TREE_TYPE (obj));
+
+	/* If we have a reference to a pointer, avoid using
+	   FIRSTPRIVATE_REFERENCE here in case the pointer is modified in the
+	   offload region (we can only do that if the pointer does not point
+	   to a mapped block).  We could avoid doing this if we don't have a
+	   FROM mapping...  */
+	bool ref_to_ptr = TREE_CODE (TREE_TYPE (obj)) == POINTER_TYPE;
+
+	if (target)
+	  {
+	    c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	    if (target
+		&& !ref_to_ptr
+		&& !declare_target_p
+		&& decl_p)
+	      OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_REFERENCE);
+	    else
+	      {
+		OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+		if (decl_p)
+		  c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+	      }
+	    OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	    OMP_CLAUSE_SIZE (c2) = size_zero_node;
+
+	    if (ref_to_ptr)
+	      {
+		c3 = c2;
+		c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+		OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALLOC);
+		OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+		OMP_CLAUSE_SIZE (c2)
+		  = TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (c2)));
+	      }
+	  }
+      }
+      break;
+
+    case ACCESS_INDEXED_REF_TO_ARRAY:
+      {
+	tree virtual_origin
+	  = convert_from_reference (addr_tokens[i + 1]->expr);
+	virtual_origin = build_fold_addr_expr (virtual_origin);
+	virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					   virtual_origin);
+	tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr);
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	if (decl_p && target && !declare_target_p)
+	  OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_REFERENCE);
+	else
+	  {
+	    if (decl_p)
+	      c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+	    OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	  }
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_INDEXED_ARRAY:
+      {
+	/* The code handling "firstprivatize_array_bases" in gimplify.cc is
+	   relevant here.  What do we need to create for arrays at this
+	   stage?  (This condition doesn't feel quite right.  FIXME?)  */
+	if (!target
+	    && (TREE_CODE (TREE_TYPE (addr_tokens[i + 1]->expr))
+		== ARRAY_TYPE))
+	  break;
+
+	tree virtual_origin
+	  = build_fold_addr_expr (addr_tokens[i + 1]->expr);
+	virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					   virtual_origin);
+	tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr);
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	if (decl_p && target)
+	  OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER);
+	else
+	  {
+	    if (decl_p)
+	      c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+	    OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	  }
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_POINTER:
+    case ACCESS_POINTER_OFFSET:
+      {
+	unsigned last_access = i + 1;
+	tree virtual_origin;
+
+	if (chain_p
+	    && addr_tokens[i + 2]->type == ACCESS_METHOD
+	    && addr_tokens[i + 2]->u.access_kind == ACCESS_INDEXED_ARRAY)
+	  {
+	    /* !!! This seems wrong for ACCESS_POINTER_OFFSET.  */
+	    consume_tokens = 3;
+	    chain_p = omp_access_chain_p (addr_tokens, i + 2);
+	    last_access = i + 2;
+	    virtual_origin
+	      = build_array_ref (loc, addr_tokens[last_access]->expr,
+				 integer_zero_node);
+	    virtual_origin = build_fold_addr_expr (virtual_origin);
+	    virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					       virtual_origin);
+	  }
+	else
+	  virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					     addr_tokens[last_access]->expr);
+	tree data_addr = omp_accessed_addr (addr_tokens, last_access, expr);
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	if (decl_p && target && !chain_p && !declare_target_p)
+	  OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER);
+	else
+	  {
+	    if (decl_p)
+	      c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+	    OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	  }
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_REF_TO_POINTER:
+    case ACCESS_REF_TO_POINTER_OFFSET:
+      {
+	unsigned last_access = i + 1;
+	tree virtual_origin;
+
+	if (chain_p
+	    && addr_tokens[i + 2]->type == ACCESS_METHOD
+	    && addr_tokens[i + 2]->u.access_kind == ACCESS_INDEXED_ARRAY)
+	  {
+	    /* !!! This seems wrong for ACCESS_POINTER_OFFSET.  */
+	    consume_tokens = 3;
+	    chain_p = omp_access_chain_p (addr_tokens, i + 2);
+	    last_access = i + 2;
+	    virtual_origin
+	      = build_array_ref (loc, addr_tokens[last_access]->expr,
+				 integer_zero_node);
+	    virtual_origin = build_fold_addr_expr (virtual_origin);
+	    virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					       virtual_origin);
+	  }
+	else
+	  {
+	    virtual_origin
+	      = convert_from_reference (addr_tokens[last_access]->expr);
+	    virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					       virtual_origin);
+	  }
+
+	tree data_addr = omp_accessed_addr (addr_tokens, last_access, expr);
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	if (decl_p && target && !declare_target_p)
+	  OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_REFERENCE);
+	else
+	  {
+	    if (decl_p)
+	      c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+	    OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	  }
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    default:
+      *idx = i + consume_tokens;
+      return error_mark_node;
+    }
+
+  if (c3)
+    {
+      OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c);
+      OMP_CLAUSE_CHAIN (c2) = c3;
+      OMP_CLAUSE_CHAIN (c) = c2;
+      if (implicit_p)
+	{
+	  OMP_CLAUSE_MAP_IMPLICIT (c2) = 1;
+	  OMP_CLAUSE_MAP_IMPLICIT (c3) = 1;
+	}
+      c = c3;
+    }
+  else if (c2)
+    {
+      OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
+      OMP_CLAUSE_CHAIN (c) = c2;
+      if (implicit_p)
+	OMP_CLAUSE_MAP_IMPLICIT (c2) = 1;
+      c = c2;
+    }
+
+  i += consume_tokens;
+  *idx = i;
+
+  if (target && chain_p)
+    return omp_expand_access_chain (c, expr, addr_tokens, idx);
+  else if (chain_p)
+    while (*idx < addr_tokens.length ()
+	   && addr_tokens[*idx]->type == ACCESS_METHOD)
+      (*idx)++;
+
+  return c;
+}
+
+/* Translate "component_selector access_method" to OMP mapping clauses.  */
+
+tree
+c_omp_address_inspector::expand_component_selector (tree c,
+						    vec<omp_addr_token *>
+						      &addr_tokens,
+						    tree expr, unsigned *idx,
+						    bool target)
+{
+  using namespace omp_addr_tokenizer;
+  location_t loc = OMP_CLAUSE_LOCATION (c);
+  unsigned i = *idx;
+  tree c2 = NULL_TREE, c3 = NULL_TREE;
+  bool chain_p = omp_access_chain_p (addr_tokens, i + 1);
+
+  switch (addr_tokens[i + 1]->u.access_kind)
+    {
+    case ACCESS_DIRECT:
+    case ACCESS_INDEXED_ARRAY:
+      break;
+
+    case ACCESS_REF:
+      {
+	/* Copy the referenced object.  */
+	tree obj = convert_from_reference (addr_tokens[i + 1]->expr);
+	OMP_CLAUSE_DECL (c) = obj;
+	OMP_CLAUSE_SIZE (c) = TYPE_SIZE_UNIT (TREE_TYPE (obj));
+
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2) = size_zero_node;
+      }
+      break;
+
+    case ACCESS_INDEXED_REF_TO_ARRAY:
+      {
+	tree virtual_origin
+	  = convert_from_reference (addr_tokens[i + 1]->expr);
+	virtual_origin = build_fold_addr_expr (virtual_origin);
+	virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+					   virtual_origin);
+	tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr);
+
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_POINTER:
+    case ACCESS_POINTER_OFFSET:
+      {
+	tree virtual_origin
+	  = fold_convert_loc (loc, ptrdiff_type_node,
+			      addr_tokens[i + 1]->expr);
+	tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr);
+
+	c2 = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_REF_TO_POINTER:
+    case ACCESS_REF_TO_POINTER_OFFSET:
+      {
+	tree ptr = convert_from_reference (addr_tokens[i + 1]->expr);
+	tree virtual_origin = fold_convert_loc (loc, ptrdiff_type_node,
+						ptr);
+	tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr);
+
+	/* Attach the pointer...  */
+	c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
+	OMP_CLAUSE_DECL (c2) = ptr;
+	OMP_CLAUSE_SIZE (c2)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+
+	/* ...and also the reference.  */
+	c3 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c3, GOMP_MAP_ATTACH_DETACH);
+	OMP_CLAUSE_DECL (c3) = addr_tokens[i + 1]->expr;
+	OMP_CLAUSE_SIZE (c3) = size_zero_node;
+      }
+      break;
+
+    default:
+      *idx = i + 2;
+      return error_mark_node;
+    }
+
+  if (c3)
+    {
+      OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c);
+      OMP_CLAUSE_CHAIN (c2) = c3;
+      OMP_CLAUSE_CHAIN (c) = c2;
+      c = c3;
+    }
+  else if (c2)
+    {
+      OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
+      OMP_CLAUSE_CHAIN (c) = c2;
+      c = c2;
+    }
+
+  i += 2;
+  *idx = i;
+
+  if (target && chain_p)
+    return omp_expand_access_chain (c, expr, addr_tokens, idx);
+  else if (chain_p)
+    while (*idx < addr_tokens.length ()
+	   && addr_tokens[*idx]->type == ACCESS_METHOD)
+      (*idx)++;
+
+  return c;
+}
+
+/* Expand a map clause into a group of mapping clauses, creating nodes to
+   attach/detach pointers and so forth as necessary.  */
+
+tree
+c_omp_address_inspector::expand_map_clause (tree c, tree expr,
+					    vec<omp_addr_token *> &addr_tokens,
+					    bool target)
+{
+  using namespace omp_addr_tokenizer;
+  unsigned i, length = addr_tokens.length ();
+
+  for (i = 0; i < length;)
+    {
+      int remaining = length - i;
+
+      if (remaining >= 2
+	  && addr_tokens[i]->type == ARRAY_BASE
+	  && addr_tokens[i]->u.structure_base_kind == BASE_DECL
+	  && addr_tokens[i + 1]->type == ACCESS_METHOD)
+	{
+	  c = expand_array_base (c, addr_tokens, expr, &i, target, true);
+	  if (c == error_mark_node)
+	    return error_mark_node;
+	}
+      else if (remaining >= 2
+	       && addr_tokens[i]->type == ARRAY_BASE
+	       && addr_tokens[i]->u.structure_base_kind == BASE_ARBITRARY_EXPR
+	       && addr_tokens[i + 1]->type == ACCESS_METHOD)
+	{
+	  c = expand_array_base (c, addr_tokens, expr, &i, target, false);
+	  if (c == error_mark_node)
+	    return error_mark_node;
+	}
+      else if (remaining >= 2
+	       && addr_tokens[i]->type == STRUCTURE_BASE
+	       && addr_tokens[i]->u.structure_base_kind == BASE_DECL
+	       && addr_tokens[i + 1]->type == ACCESS_METHOD)
+	{
+	  if (addr_tokens[i + 1]->u.access_kind == ACCESS_DIRECT)
+	    c_common_mark_addressable_vec (addr_tokens[i + 1]->expr);
+	  i += 2;
+	  while (addr_tokens[i]->type == ACCESS_METHOD)
+	    i++;
+	}
+      else if (remaining >= 2
+	       && addr_tokens[i]->type == STRUCTURE_BASE
+	       && addr_tokens[i]->u.structure_base_kind == BASE_ARBITRARY_EXPR
+	       && addr_tokens[i + 1]->type == ACCESS_METHOD)
+	{
+	  switch (addr_tokens[i + 1]->u.access_kind)
+	    {
+	    case ACCESS_DIRECT:
+	    case ACCESS_POINTER:
+	      i += 2;
+	      while (addr_tokens[i]->type == ACCESS_METHOD)
+		i++;
+	      break;
+	    default:
+	      return error_mark_node;
+	    }
+	}
+      else if (remaining >= 2
+	       && addr_tokens[i]->type == COMPONENT_SELECTOR
+	       && addr_tokens[i + 1]->type == ACCESS_METHOD)
+	{
+	  c = expand_component_selector (c, addr_tokens, expr, &i, target);
+	  /* We used 'expr', so these must have been the last tokens.  */
+	  gcc_assert (i == length);
+	  if (c == error_mark_node)
+	    return error_mark_node;
+	}
+      else if (remaining >= 3
+	       && addr_tokens[i]->type == COMPONENT_SELECTOR
+	       && addr_tokens[i + 1]->type == STRUCTURE_BASE
+	       && (addr_tokens[i + 1]->u.structure_base_kind
+		   == BASE_COMPONENT_EXPR)
+	       && addr_tokens[i + 2]->type == ACCESS_METHOD)
+	{
+	  i += 3;
+	  while (addr_tokens[i]->type == ACCESS_METHOD)
+	    i++;
+	}
+      else
+	break;
+    }
+
+  if (i == length)
+    return c;
+
+  return error_mark_node;
 }
 
 const struct c_omp_directive c_omp_directives[] = {
diff --git a/gcc/c/c-typeck.cc b/gcc/c/c-typeck.cc
index cedb4d0f8982..ca32d8cd5d21 100644
--- a/gcc/c/c-typeck.cc
+++ b/gcc/c/c-typeck.cc
@@ -13638,6 +13638,7 @@  handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
     {
       if (error_operand_p (t))
 	return error_mark_node;
+      c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
       ret = t;
       if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_AFFINITY
 	  && OMP_CLAUSE_CODE (c) != OMP_CLAUSE_DEPEND
@@ -13647,59 +13648,17 @@  handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
 		    t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
 	  return error_mark_node;
 	}
-      while (TREE_CODE (t) == INDIRECT_REF)
-	{
-	  t = TREE_OPERAND (t, 0);
-	  STRIP_NOPS (t);
-	  if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-	    t = TREE_OPERAND (t, 0);
-	}
-      while (TREE_CODE (t) == COMPOUND_EXPR)
-	{
-	  t = TREE_OPERAND (t, 1);
-	  STRIP_NOPS (t);
-	}
-      if (TREE_CODE (t) == COMPONENT_REF
-	  && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
-	      || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO
-	      || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM))
-	{
-	  if (DECL_BIT_FIELD (TREE_OPERAND (t, 1)))
-	    {
-	      error_at (OMP_CLAUSE_LOCATION (c),
-			"bit-field %qE in %qs clause",
-			t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-	      return error_mark_node;
-	    }
-	  while (TREE_CODE (t) == COMPONENT_REF)
-	    {
-	      if (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == UNION_TYPE)
-		{
-		  error_at (OMP_CLAUSE_LOCATION (c),
-			    "%qE is a member of a union", t);
-		  return error_mark_node;
-		}
-	      t = TREE_OPERAND (t, 0);
-	      while (TREE_CODE (t) == MEM_REF
-		     || TREE_CODE (t) == INDIRECT_REF
-		     || TREE_CODE (t) == ARRAY_REF)
-		{
-		  t = TREE_OPERAND (t, 0);
-		  STRIP_NOPS (t);
-		  if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-		    t = TREE_OPERAND (t, 0);
-		}
-	      if (ort == C_ORT_ACC && TREE_CODE (t) == MEM_REF)
-		{
-		  if (maybe_ne (mem_ref_offset (t), 0))
-		    error_at (OMP_CLAUSE_LOCATION (c),
-			      "cannot dereference %qE in %qs clause", t,
-			      omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-		  else
-		    t = TREE_OPERAND (t, 0);
-		}
-	    }
-	}
+      if (!ai.check_clause (c))
+	return error_mark_node;
+      else if (ai.component_access_p ()
+	       && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
+		   || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO
+		   || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM))
+	t = ai.get_root_term (true);
+      else
+	t = ai.unconverted_ref_origin ();
+      if (t == error_mark_node)
+	return error_mark_node;
       if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL)
 	{
 	  if (DECL_P (t))
@@ -14231,55 +14190,27 @@  handle_omp_array_sections (tree c, enum c_omp_region_type ort)
       if (size)
 	size = c_fully_fold (size, false, NULL);
       OMP_CLAUSE_SIZE (c) = size;
-      if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP
-	  || (TREE_CODE (t) == COMPONENT_REF
-	      && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE))
+
+      if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
 	return false;
-      gcc_assert (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FORCE_DEVICEPTR);
-      switch (OMP_CLAUSE_MAP_KIND (c))
+
+      auto_vec<omp_addr_token *, 10> addr_tokens;
+
+      if (!omp_parse_expr (addr_tokens, first))
+	return true;
+
+      c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+
+      tree nc = ai.expand_map_clause (c, first, addr_tokens,
+				      (ort == C_ORT_OMP_TARGET
+				       || ort == C_ORT_ACC));
+      if (nc != error_mark_node)
 	{
-	case GOMP_MAP_ALLOC:
-	case GOMP_MAP_IF_PRESENT:
-	case GOMP_MAP_TO:
-	case GOMP_MAP_FROM:
-	case GOMP_MAP_TOFROM:
-	case GOMP_MAP_ALWAYS_TO:
-	case GOMP_MAP_ALWAYS_FROM:
-	case GOMP_MAP_ALWAYS_TOFROM:
-	case GOMP_MAP_RELEASE:
-	case GOMP_MAP_DELETE:
-	case GOMP_MAP_FORCE_TO:
-	case GOMP_MAP_FORCE_FROM:
-	case GOMP_MAP_FORCE_TOFROM:
-	case GOMP_MAP_FORCE_PRESENT:
-	  OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1;
-	  break;
-	default:
-	  break;
+	  if (ai.maybe_zero_length_array_section (c))
+	    OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1;
+
+	  return false;
 	}
-      tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP);
-      if (TREE_CODE (t) == COMPONENT_REF)
-	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
-      else
-	OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER);
-      OMP_CLAUSE_MAP_IMPLICIT (c2) = OMP_CLAUSE_MAP_IMPLICIT (c);
-      if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER
-	  && !c_mark_addressable (t))
-	return false;
-      OMP_CLAUSE_DECL (c2) = t;
-      t = build_fold_addr_expr (first);
-      t = fold_convert_loc (OMP_CLAUSE_LOCATION (c), ptrdiff_type_node, t);
-      tree ptr = OMP_CLAUSE_DECL (c2);
-      if (!POINTER_TYPE_P (TREE_TYPE (ptr)))
-	ptr = build_fold_addr_expr (ptr);
-      t = fold_build2_loc (OMP_CLAUSE_LOCATION (c), MINUS_EXPR,
-			   ptrdiff_type_node, t,
-			   fold_convert_loc (OMP_CLAUSE_LOCATION (c),
-					     ptrdiff_type_node, ptr));
-      t = c_fully_fold (t, false, NULL);
-      OMP_CLAUSE_SIZE (c2) = t;
-      OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
-      OMP_CLAUSE_CHAIN (c) = c2;
     }
   return false;
 }
@@ -14545,7 +14476,6 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
   tree ordered_clause = NULL_TREE;
   tree schedule_clause = NULL_TREE;
   bool oacc_async = false;
-  bool indir_component_ref_p = false;
   tree last_iterators = NULL_TREE;
   bool last_iterators_remove = false;
   tree *nogroup_seen = NULL;
@@ -15077,7 +15007,8 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 			"%qE appears more than once in data clauses", t);
 	      remove = true;
 	    }
-	  else if (bitmap_bit_p (&map_head, DECL_UID (t)))
+	  else if (bitmap_bit_p (&map_head, DECL_UID (t))
+		   || bitmap_bit_p (&map_field_head, DECL_UID (t)))
 	    {
 	      if (ort == C_ORT_ACC)
 		error_at (OMP_CLAUSE_LOCATION (c),
@@ -15347,6 +15278,9 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 	case OMP_CLAUSE_FROM:
 	case OMP_CLAUSE__CACHE_:
 	  {
+	    using namespace omp_addr_tokenizer;
+	    auto_vec<omp_addr_token *, 10> addr_tokens;
+
 	    t = OMP_CLAUSE_DECL (c);
 	    if (TREE_CODE (t) == TREE_LIST)
 	      {
@@ -15375,56 +15309,68 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		      }
 		    while (TREE_CODE (t) == ARRAY_REF)
 		      t = TREE_OPERAND (t, 0);
-		    if (TREE_CODE (t) == COMPONENT_REF
-			&& TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE)
+
+		    c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+
+		    if (!omp_parse_expr (addr_tokens, t))
 		      {
-			do
-			  {
-			    t = TREE_OPERAND (t, 0);
-			    if (TREE_CODE (t) == MEM_REF
-				|| TREE_CODE (t) == INDIRECT_REF)
-			      {
-				t = TREE_OPERAND (t, 0);
-				STRIP_NOPS (t);
-				if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-				  t = TREE_OPERAND (t, 0);
-			      }
-			  }
-			while (TREE_CODE (t) == COMPONENT_REF
-			       || TREE_CODE (t) == ARRAY_REF);
+			sorry_at (OMP_CLAUSE_LOCATION (c),
+				  "unsupported map expression %qE",
+				  OMP_CLAUSE_DECL (c));
+			remove = true;
+			break;
+		      }
+
+		    /* This check is to determine if this will be the only map
+		       clause created for this node.  Otherwise, we'll check
+		       the following FIRSTPRIVATE_POINTER or ATTACH_DETACH
+		       node on the next iteration(s) of the loop.   */
+		    if (addr_tokens.length () >= 4
+			&& addr_tokens[0]->type == STRUCTURE_BASE
+			&& addr_tokens[0]->u.structure_base_kind == BASE_DECL
+			&& addr_tokens[1]->type == ACCESS_METHOD
+			&& addr_tokens[2]->type == COMPONENT_SELECTOR
+			&& addr_tokens[3]->type == ACCESS_METHOD
+			&& (addr_tokens[3]->u.access_kind == ACCESS_DIRECT
+			    || (addr_tokens[3]->u.access_kind
+				== ACCESS_INDEXED_ARRAY)))
+		      {
+			tree rt = addr_tokens[1]->expr;
+
+			gcc_assert (DECL_P (rt));
 
 			if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
 			    && OMP_CLAUSE_MAP_IMPLICIT (c)
-			    && (bitmap_bit_p (&map_head, DECL_UID (t))
-				|| bitmap_bit_p (&map_field_head, DECL_UID (t))
+			    && (bitmap_bit_p (&map_head, DECL_UID (rt))
+				|| bitmap_bit_p (&map_field_head, DECL_UID (rt))
 				|| bitmap_bit_p (&map_firstprivate_head,
-						 DECL_UID (t))))
+						 DECL_UID (rt))))
 			  {
 			    remove = true;
 			    break;
 			  }
-			if (bitmap_bit_p (&map_field_head, DECL_UID (t)))
+			if (bitmap_bit_p (&map_field_head, DECL_UID (rt)))
 			  break;
-			if (bitmap_bit_p (&map_head, DECL_UID (t)))
+			if (bitmap_bit_p (&map_head, DECL_UID (rt)))
 			  {
 			    if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
 			      error_at (OMP_CLAUSE_LOCATION (c),
 					"%qD appears more than once in motion "
-					"clauses", t);
+					"clauses", rt);
 			    else if (ort == C_ORT_ACC)
 			      error_at (OMP_CLAUSE_LOCATION (c),
 					"%qD appears more than once in data "
-					"clauses", t);
+					"clauses", rt);
 			    else
 			      error_at (OMP_CLAUSE_LOCATION (c),
 					"%qD appears more than once in map "
-					"clauses", t);
+					"clauses", rt);
 			    remove = true;
 			  }
 			else
 			  {
-			    bitmap_set_bit (&map_head, DECL_UID (t));
-			    bitmap_set_bit (&map_field_head, DECL_UID (t));
+			    bitmap_set_bit (&map_head, DECL_UID (rt));
+			    bitmap_set_bit (&map_field_head, DECL_UID (rt));
 			  }
 		      }
 		  }
@@ -15441,6 +15387,14 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		  OMP_CLAUSE_SIZE (c) = size_zero_node;
 		break;
 	      }
+	    else if (!omp_parse_expr (addr_tokens, t))
+	      {
+		sorry_at (OMP_CLAUSE_LOCATION (c),
+			  "unsupported map expression %qE",
+			  OMP_CLAUSE_DECL (c));
+		remove = true;
+		break;
+	      }
 	    if (t == error_mark_node)
 	      {
 		remove = true;
@@ -15459,96 +15413,42 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		 bias) to zero here, so it is not set erroneously to the pointer
 		 size later on in gimplify.cc.  */
 	      OMP_CLAUSE_SIZE (c) = size_zero_node;
-	    while (TREE_CODE (t) == INDIRECT_REF
-		   || TREE_CODE (t) == ARRAY_REF)
+
+	    c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+
+	    if (!ai.check_clause (c))
 	      {
-		t = TREE_OPERAND (t, 0);
-		STRIP_NOPS (t);
-		if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-		  t = TREE_OPERAND (t, 0);
-	      }
-	    while (TREE_CODE (t) == COMPOUND_EXPR)
-	      {
-		t = TREE_OPERAND (t, 1);
-		STRIP_NOPS (t);
-	      }
-	    indir_component_ref_p = false;
-	    if (TREE_CODE (t) == COMPONENT_REF
-		&& (TREE_CODE (TREE_OPERAND (t, 0)) == MEM_REF
-		    || TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF
-		    || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF))
-	      {
-		t = TREE_OPERAND (TREE_OPERAND (t, 0), 0);
-		indir_component_ref_p = true;
-		STRIP_NOPS (t);
-		if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-		  t = TREE_OPERAND (t, 0);
+		remove = true;
+		break;
 	      }
 
-	    if (TREE_CODE (t) == COMPONENT_REF
-		&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_)
+	    if (!ai.map_supported_p ())
 	      {
-		if (DECL_BIT_FIELD (TREE_OPERAND (t, 1)))
-		  {
-		    error_at (OMP_CLAUSE_LOCATION (c),
-			      "bit-field %qE in %qs clause",
-			      t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-		    remove = true;
-		  }
-		else if (!omp_mappable_type (TREE_TYPE (t)))
-		  {
-		    error_at (OMP_CLAUSE_LOCATION (c),
-			      "%qE does not have a mappable type in %qs clause",
-			      t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-		    remove = true;
-		  }
-		else if (TYPE_ATOMIC (TREE_TYPE (t)))
-		  {
-		    error_at (OMP_CLAUSE_LOCATION (c),
-			      "%<_Atomic%> %qE in %qs clause", t,
-			      omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-		    remove = true;
-		  }
-		while (TREE_CODE (t) == COMPONENT_REF)
-		  {
-		    if (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0)))
-			== UNION_TYPE)
-		      {
-			error_at (OMP_CLAUSE_LOCATION (c),
-				  "%qE is a member of a union", t);
-			remove = true;
-			break;
-		      }
-		    t = TREE_OPERAND (t, 0);
-		    if (TREE_CODE (t) == MEM_REF)
-		      {
-			if (maybe_ne (mem_ref_offset (t), 0))
-			  error_at (OMP_CLAUSE_LOCATION (c),
-				    "cannot dereference %qE in %qs clause", t,
-				    omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-			else
-			  t = TREE_OPERAND (t, 0);
-		      }
-		    while (TREE_CODE (t) == MEM_REF
-			   || TREE_CODE (t) == INDIRECT_REF
-			   || TREE_CODE (t) == ARRAY_REF)
-		      {
-			t = TREE_OPERAND (t, 0);
-			STRIP_NOPS (t);
-			if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-			  t = TREE_OPERAND (t, 0);
-		      }
-		  }
-		if (remove)
-		  break;
-		if (VAR_P (t) || TREE_CODE (t) == PARM_DECL)
-		  {
-		    if (bitmap_bit_p (&map_field_head, DECL_UID (t))
-			|| (ort != C_ORT_ACC
-			    && bitmap_bit_p (&map_head, DECL_UID (t))))
-		      break;
-		  }
+		sorry_at (OMP_CLAUSE_LOCATION (c),
+			  "unsupported map expression %qE",
+			  OMP_CLAUSE_DECL (c));
+		remove = true;
+		break;
 	      }
+
+	    gcc_assert ((addr_tokens[0]->type == ARRAY_BASE
+			 || addr_tokens[0]->type == STRUCTURE_BASE)
+			&& addr_tokens[1]->type == ACCESS_METHOD);
+
+	    t = addr_tokens[1]->expr;
+
+	    if (addr_tokens[0]->u.structure_base_kind != BASE_DECL)
+	      goto skip_decl_checks;
+
+	    /* For OpenMP, we can access a struct "t" and "t.d" on the same
+	       mapping.  OpenACC allows multiple fields of the same structure
+	       to be written.  */
+	    if (addr_tokens[0]->type == STRUCTURE_BASE
+		&& (bitmap_bit_p (&map_field_head, DECL_UID (t))
+		    || (ort != C_ORT_ACC
+			&& bitmap_bit_p (&map_head, DECL_UID (t)))))
+	      goto skip_decl_checks;
+
 	    if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL)
 	      {
 		error_at (OMP_CLAUSE_LOCATION (c),
@@ -15566,7 +15466,6 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 	    else if ((OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP
 		      || (OMP_CLAUSE_MAP_KIND (c)
 			  != GOMP_MAP_FIRSTPRIVATE_POINTER))
-		     && !indir_component_ref_p
 		     && !c_mark_addressable (t))
 	      remove = true;
 	    else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
@@ -15612,15 +15511,11 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		    remove = true;
 		  }
 		else if (bitmap_bit_p (&map_head, DECL_UID (t))
-			 && !bitmap_bit_p (&map_field_head, DECL_UID (t)))
+			 && !bitmap_bit_p (&map_field_head, DECL_UID (t))
+			 && ort == C_ORT_ACC)
 		  {
-		    if (ort == C_ORT_ACC)
-		      error_at (OMP_CLAUSE_LOCATION (c),
-				"%qD appears more than once in data clauses",
-				t);
-		    else
-		      error_at (OMP_CLAUSE_LOCATION (c),
-				"%qD appears both in data and map clauses", t);
+		    error_at (OMP_CLAUSE_LOCATION (c),
+			      "%qD appears more than once in data clauses", t);
 		    remove = true;
 		  }
 		else
@@ -15658,13 +15553,37 @@  c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 			    "%qD appears both in data and map clauses", t);
 		remove = true;
 	      }
-	    else
+	    else if (!omp_access_chain_p (addr_tokens, 1))
 	      {
 		bitmap_set_bit (&map_head, DECL_UID (t));
 		if (t != OMP_CLAUSE_DECL (c)
 		    && TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF)
 		  bitmap_set_bit (&map_field_head, DECL_UID (t));
 	      }
+
+	  skip_decl_checks:
+	    /* If we call omp_expand_map_clause in handle_omp_array_sections,
+	       the containing loop (here) iterates through the new nodes
+	       created by that expansion.  Avoid expanding those again (just
+	       by checking the node type).  */
+	    if (!remove
+		&& ort != C_ORT_DECLARE_SIMD
+		&& (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP
+		    || (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER
+			&& (OMP_CLAUSE_MAP_KIND (c)
+			    != GOMP_MAP_FIRSTPRIVATE_REFERENCE)
+			&& OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ALWAYS_POINTER
+			&& OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH)))
+	      {
+		grp_start_p = pc;
+		grp_sentinel = OMP_CLAUSE_CHAIN (c);
+		tree nc = ai.expand_map_clause (c, OMP_CLAUSE_DECL (c),
+						addr_tokens,
+						(ort == C_ORT_OMP_TARGET
+						 || ort == C_ORT_ACC));
+		if (nc != error_mark_node)
+		  c = nc;
+	      }
 	  }
 	  break;
 
diff --git a/gcc/cp/semantics.cc b/gcc/cp/semantics.cc
index 3032a50624a0..d6cf1cee6be2 100644
--- a/gcc/cp/semantics.cc
+++ b/gcc/cp/semantics.cc
@@ -5093,6 +5093,54 @@  omp_privatize_field (tree t, bool shared)
   return v;
 }
 
+/* C++ specialisation of the c_omp_address_inspector class.  */
+
+class cp_omp_address_inspector : public c_omp_address_inspector
+{
+public:
+  cp_omp_address_inspector (location_t loc, tree t)
+    : c_omp_address_inspector (loc, t)
+    {
+    }
+
+  ~cp_omp_address_inspector ()
+    {
+    }
+
+  bool processing_template_decl_p ()
+    {
+      return processing_template_decl;
+    }
+
+  void emit_unmappable_type_notes (tree t)
+    {
+      if (TREE_TYPE (t) != error_mark_node
+	  && !COMPLETE_TYPE_P (TREE_TYPE (t)))
+	cxx_incomplete_type_inform (TREE_TYPE (t));
+    }
+
+  tree convert_from_reference (tree x)
+    {
+      return ::convert_from_reference (x);
+    }
+
+  tree build_array_ref (location_t loc, tree arr, tree idx)
+    {
+      return ::build_array_ref (loc, arr, idx);
+    }
+
+  bool check_clause (tree clause)
+    {
+      if (TREE_CODE (orig) == COMPONENT_REF
+	  && invalid_nonstatic_memfn_p (EXPR_LOCATION (orig), orig,
+					tf_warning_or_error))
+	return false;
+      if (!c_omp_address_inspector::check_clause (clause))
+	return false;
+      return true;
+    }
+};
+
 /* Helper function for handle_omp_array_sections.  Called recursively
    to handle multiple array-section-subscripts.  C is the clause,
    T current expression (initially OMP_CLAUSE_DECL), which is either
@@ -5123,59 +5171,22 @@  handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
     {
       if (error_operand_p (t))
 	return error_mark_node;
-      if (REFERENCE_REF_P (t)
-	  && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF)
-	t = TREE_OPERAND (t, 0);
-      ret = t;
-      while (TREE_CODE (t) == INDIRECT_REF)
-	{
-	  t = TREE_OPERAND (t, 0);
-	  STRIP_NOPS (t);
-	  if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-	    t = TREE_OPERAND (t, 0);
-	}
-      while (TREE_CODE (t) == COMPOUND_EXPR)
-	{
-	  t = TREE_OPERAND (t, 1);
-	  STRIP_NOPS (t);
-	}
-      if (TREE_CODE (t) == COMPONENT_REF
-	  && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
-	      || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO
-	      || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM)
-	  && !type_dependent_expression_p (t))
-	{
-	  if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL
-	      && DECL_BIT_FIELD (TREE_OPERAND (t, 1)))
-	    {
-	      error_at (OMP_CLAUSE_LOCATION (c),
-			"bit-field %qE in %qs clause",
-			t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-	      return error_mark_node;
-	    }
-	  while (TREE_CODE (t) == COMPONENT_REF)
-	    {
-	      if (TREE_TYPE (TREE_OPERAND (t, 0))
-		  && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == UNION_TYPE)
-		{
-		  error_at (OMP_CLAUSE_LOCATION (c),
-			    "%qE is a member of a union", t);
-		  return error_mark_node;
-		}
-	      t = TREE_OPERAND (t, 0);
-	      while (TREE_CODE (t) == MEM_REF
-		     || TREE_CODE (t) == INDIRECT_REF
-		     || TREE_CODE (t) == ARRAY_REF)
-		{
-		  t = TREE_OPERAND (t, 0);
-		  STRIP_NOPS (t);
-		  if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-		    t = TREE_OPERAND (t, 0);
-		}
-	    }
-	  if (REFERENCE_REF_P (t))
-	    t = TREE_OPERAND (t, 0);
-	}
+
+      cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+      tree t_refto = ai.maybe_unconvert_ref (t);
+
+      if (!ai.check_clause (c))
+	return error_mark_node;
+      else if (ai.component_access_p ()
+	       && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
+		   || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO
+		   || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM))
+	t = ai.get_root_term (true);
+      else
+	t = ai.unconverted_ref_origin ();
+      if (t == error_mark_node)
+	return error_mark_node;
+      ret = t_refto;
       if (TREE_CODE (t) == FIELD_DECL)
 	ret = finish_non_static_data_member (t, NULL_TREE, NULL_TREE);
       else if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL)
@@ -5509,7 +5520,7 @@  handle_omp_array_sections_1 (tree c, tree t, vec<tree> &types,
 /* Handle array sections for clause C.  */
 
 static bool
-handle_omp_array_sections (tree c, enum c_omp_region_type ort)
+handle_omp_array_sections (tree &c, enum c_omp_region_type ort)
 {
   bool maybe_zero_len = false;
   unsigned int first_non_one = 0;
@@ -5720,111 +5731,72 @@  handle_omp_array_sections (tree c, enum c_omp_region_type ort)
 	  OMP_CLAUSE_SIZE (c) = size;
 	  if (TREE_CODE (t) == FIELD_DECL)
 	    t = finish_non_static_data_member (t, NULL_TREE, NULL_TREE);
-	  if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP
-	      || (TREE_CODE (t) == COMPONENT_REF
-		  && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE))
+
+	  if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
 	    return false;
-	  switch (OMP_CLAUSE_MAP_KIND (c))
-	    {
-	    case GOMP_MAP_ALLOC:
-	    case GOMP_MAP_IF_PRESENT:
-	    case GOMP_MAP_TO:
-	    case GOMP_MAP_FROM:
-	    case GOMP_MAP_TOFROM:
-	    case GOMP_MAP_ALWAYS_TO:
-	    case GOMP_MAP_ALWAYS_FROM:
-	    case GOMP_MAP_ALWAYS_TOFROM:
-	    case GOMP_MAP_RELEASE:
-	    case GOMP_MAP_DELETE:
-	    case GOMP_MAP_FORCE_TO:
-	    case GOMP_MAP_FORCE_FROM:
-	    case GOMP_MAP_FORCE_TOFROM:
-	    case GOMP_MAP_FORCE_PRESENT:
-	      OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1;
-	      break;
-	    default:
-	      break;
-	    }
-	  bool reference_always_pointer = true;
-	  tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
-				      OMP_CLAUSE_MAP);
-	  if (TREE_CODE (t) == COMPONENT_REF)
-	    {
-	      OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH);
 
-	      if ((ort & C_ORT_OMP_DECLARE_SIMD) == C_ORT_OMP
-		  && TYPE_REF_P (TREE_TYPE (t)))
+	  if (TREE_CODE (first) == INDIRECT_REF)
+	    {
+	      /* Detect and skip adding extra nodes for pointer-to-member
+		 mappings.  These are unsupported for now.  */
+	      tree tmp = TREE_OPERAND (first, 0);
+
+	      if (TREE_CODE (tmp) == NON_LVALUE_EXPR)
+		tmp = TREE_OPERAND (tmp, 0);
+
+	      if (TREE_CODE (tmp) == INDIRECT_REF)
+		tmp = TREE_OPERAND (tmp, 0);
+
+	      if (TREE_CODE (tmp) == POINTER_PLUS_EXPR)
 		{
-		  if (TREE_CODE (TREE_TYPE (TREE_TYPE (t))) == ARRAY_TYPE)
-		    OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
-		  else
-		    t = convert_from_reference (t);
-
-		  reference_always_pointer = false;
+		  tree offset = TREE_OPERAND (tmp, 1);
+		  STRIP_NOPS (offset);
+		  if (TYPE_PTRMEM_P (TREE_TYPE (offset)))
+		    {
+		      sorry_at (OMP_CLAUSE_LOCATION (c),
+				"pointer-to-member mapping %qE not supported",
+				OMP_CLAUSE_DECL (c));
+		      return true;
+		    }
 		}
 	    }
-	  else if (REFERENCE_REF_P (t)
-		   && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF)
-	    {
-	      gomp_map_kind k;
-	      if ((ort & C_ORT_OMP_DECLARE_SIMD) == C_ORT_OMP
-		  && TREE_CODE (TREE_TYPE (t)) == POINTER_TYPE)
-		k = GOMP_MAP_ATTACH_DETACH;
-	      else
-		{
-		  t = TREE_OPERAND (t, 0);
-		  k = (ort == C_ORT_ACC
-		       ? GOMP_MAP_ATTACH_DETACH : GOMP_MAP_ALWAYS_POINTER);
-		}
-	      OMP_CLAUSE_SET_MAP_KIND (c2, k);
-	    }
-	  else
-	    OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER);
-	  OMP_CLAUSE_MAP_IMPLICIT (c2) = OMP_CLAUSE_MAP_IMPLICIT (c);
-	  if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER
-	      && !cxx_mark_addressable (t))
-	    return false;
-	  OMP_CLAUSE_DECL (c2) = t;
-	  t = build_fold_addr_expr (first);
-	  t = fold_convert_loc (OMP_CLAUSE_LOCATION (c),
-				ptrdiff_type_node, t);
-	  tree ptr = OMP_CLAUSE_DECL (c2);
-	  ptr = convert_from_reference (ptr);
-	  if (!INDIRECT_TYPE_P (TREE_TYPE (ptr)))
-	    ptr = build_fold_addr_expr (ptr);
-	  t = fold_build2_loc (OMP_CLAUSE_LOCATION (c), MINUS_EXPR,
-			       ptrdiff_type_node, t,
-			       fold_convert_loc (OMP_CLAUSE_LOCATION (c),
-						 ptrdiff_type_node, ptr));
-	  OMP_CLAUSE_SIZE (c2) = t;
-	  OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
-	  OMP_CLAUSE_CHAIN (c) = c2;
 
-	  ptr = OMP_CLAUSE_DECL (c2);
-	  if (reference_always_pointer
-	      && OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER
-	      && TYPE_REF_P (TREE_TYPE (ptr))
-	      && INDIRECT_TYPE_P (TREE_TYPE (TREE_TYPE (ptr))))
+	  /* FIRST represents the first item of data that we are mapping.
+	     E.g. if we're mapping an array, FIRST might resemble
+	     "foo.bar.myarray[0]".  */
+
+	  auto_vec<omp_addr_token *, 10> addr_tokens;
+
+	  if (!omp_parse_expr (addr_tokens, first))
+	    return true;
+
+	  cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+
+	  tree nc = ai.expand_map_clause (c, first, addr_tokens,
+					  (ort == C_ORT_OMP_TARGET
+					   || ort == C_ORT_ACC));
+	  if (nc != error_mark_node)
 	    {
-	      tree c3 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
-					  OMP_CLAUSE_MAP);
-	      OMP_CLAUSE_SET_MAP_KIND (c3, OMP_CLAUSE_MAP_KIND (c2));
-	      OMP_CLAUSE_MAP_IMPLICIT (c2) = OMP_CLAUSE_MAP_IMPLICIT (c);
-	      OMP_CLAUSE_DECL (c3) = ptr;
-	      if (OMP_CLAUSE_MAP_KIND (c2) == GOMP_MAP_ALWAYS_POINTER
-		  || OMP_CLAUSE_MAP_KIND (c2) == GOMP_MAP_ATTACH_DETACH)
-		{
-		  OMP_CLAUSE_DECL (c2) = build_simple_mem_ref (ptr);
-		  OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
-		}
-	      else
-		OMP_CLAUSE_DECL (c2) = convert_from_reference (ptr);
-	      OMP_CLAUSE_SIZE (c3) = size_zero_node;
-	      OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c2);
-	      OMP_CLAUSE_CHAIN (c2) = c3;
+	      using namespace omp_addr_tokenizer;
+
+	      if (ai.maybe_zero_length_array_section (c))
+		OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1;
+
+	      /* !!! If we're accessing a base decl via chained access
+		 methods (e.g. multiple indirections), duplicate clause
+		 detection won't work properly.  Skip it in that case.  */
+	      if ((addr_tokens[0]->type == STRUCTURE_BASE
+		   || addr_tokens[0]->type == ARRAY_BASE)
+		  && addr_tokens[0]->u.structure_base_kind == BASE_DECL
+		  && addr_tokens[1]->type == ACCESS_METHOD
+		  && omp_access_chain_p (addr_tokens, 1))
+		c = nc;
+
+	      return false;
 	    }
 	}
     }
+
   return false;
 }
 
@@ -7200,7 +7172,8 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 			"%qD appears more than once in data clauses", t);
 	      remove = true;
 	    }
-	  else if (bitmap_bit_p (&map_head, DECL_UID (t)))
+	  else if (bitmap_bit_p (&map_head, DECL_UID (t))
+		   || bitmap_bit_p (&map_field_head, DECL_UID (t)))
 	    {
 	      if (ort == C_ORT_ACC)
 		error_at (OMP_CLAUSE_LOCATION (c),
@@ -8021,6 +7994,9 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 	case OMP_CLAUSE_FROM:
 	case OMP_CLAUSE__CACHE_:
 	  {
+	    using namespace omp_addr_tokenizer;
+	    auto_vec<omp_addr_token *, 10> addr_tokens;
+
 	    t = OMP_CLAUSE_DECL (c);
 	    if (TREE_CODE (t) == TREE_LIST)
 	      {
@@ -8047,58 +8023,73 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		      }
 		    while (TREE_CODE (t) == ARRAY_REF)
 		      t = TREE_OPERAND (t, 0);
-		    if (TREE_CODE (t) == COMPONENT_REF
-			&& TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE)
+
+		    if (type_dependent_expression_p (t))
+		      break;
+
+		    cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+
+		    if (!ai.map_supported_p ()
+			|| !omp_parse_expr (addr_tokens, t))
 		      {
-			do
-			  {
-			    t = TREE_OPERAND (t, 0);
-			    if (REFERENCE_REF_P (t))
-			      t = TREE_OPERAND (t, 0);
-			    if (TREE_CODE (t) == MEM_REF
-				|| TREE_CODE (t) == INDIRECT_REF)
-			      {
-				t = TREE_OPERAND (t, 0);
-				STRIP_NOPS (t);
-				if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-				  t = TREE_OPERAND (t, 0);
-			      }
-			  }
-			while (TREE_CODE (t) == COMPONENT_REF
-			       || TREE_CODE (t) == ARRAY_REF);
+			sorry_at (OMP_CLAUSE_LOCATION (c),
+				  "unsupported map expression %qE",
+				  OMP_CLAUSE_DECL (c));
+			remove = true;
+			break;
+		      }
+
+		    /* This check determines whether this will be the only map
+		       clause created for this node.  Otherwise, we'll check
+		       the following FIRSTPRIVATE_POINTER,
+		       FIRSTPRIVATE_REFERENCE or ATTACH_DETACH node on the
+		       next iteration(s) of the loop.  */
+		    if (addr_tokens.length () >= 4
+			&& addr_tokens[0]->type == STRUCTURE_BASE
+			&& addr_tokens[0]->u.structure_base_kind == BASE_DECL
+			&& addr_tokens[1]->type == ACCESS_METHOD
+			&& addr_tokens[2]->type == COMPONENT_SELECTOR
+			&& addr_tokens[3]->type == ACCESS_METHOD
+			&& (addr_tokens[3]->u.access_kind == ACCESS_DIRECT
+			    || (addr_tokens[3]->u.access_kind
+				== ACCESS_INDEXED_ARRAY)))
+		      {
+			tree rt = addr_tokens[1]->expr;
+
+			gcc_assert (DECL_P (rt));
 
 			if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
 			    && OMP_CLAUSE_MAP_IMPLICIT (c)
-			    && (bitmap_bit_p (&map_head, DECL_UID (t))
-				|| bitmap_bit_p (&map_field_head, DECL_UID (t))
+			    && (bitmap_bit_p (&map_head, DECL_UID (rt))
+				|| bitmap_bit_p (&map_field_head, DECL_UID (rt))
 				|| bitmap_bit_p (&map_firstprivate_head,
-						 DECL_UID (t))))
+						 DECL_UID (rt))))
 			  {
 			    remove = true;
 			    break;
 			  }
-			if (bitmap_bit_p (&map_field_head, DECL_UID (t)))
+			if (bitmap_bit_p (&map_field_head, DECL_UID (rt)))
 			  break;
-			if (bitmap_bit_p (&map_head, DECL_UID (t)))
+			if (bitmap_bit_p (&map_head, DECL_UID (rt)))
 			  {
 			    if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
 			      error_at (OMP_CLAUSE_LOCATION (c),
 					"%qD appears more than once in motion"
-					" clauses", t);
+					" clauses", rt);
 			    else if (ort == C_ORT_ACC)
 			      error_at (OMP_CLAUSE_LOCATION (c),
 					"%qD appears more than once in data"
-					" clauses", t);
+					" clauses", rt);
 			    else
 			      error_at (OMP_CLAUSE_LOCATION (c),
 					"%qD appears more than once in map"
-					" clauses", t);
+					" clauses", rt);
 			    remove = true;
 			  }
 			else
 			  {
-			    bitmap_set_bit (&map_head, DECL_UID (t));
-			    bitmap_set_bit (&map_field_head, DECL_UID (t));
+			    bitmap_set_bit (&map_head, DECL_UID (rt));
+			    bitmap_set_bit (&map_field_head, DECL_UID (rt));
 			  }
 		      }
 		  }
@@ -8115,6 +8106,16 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		  OMP_CLAUSE_SIZE (c) = size_zero_node;
 		break;
 	      }
+	    else if (type_dependent_expression_p (t))
+	      break;
+	    else if (!omp_parse_expr (addr_tokens, t))
+	      {
+		sorry_at (OMP_CLAUSE_LOCATION (c),
+			  "unsupported map expression %qE",
+			  OMP_CLAUSE_DECL (c));
+		remove = true;
+		break;
+	      }
 	    if (t == error_mark_node)
 	      {
 		remove = true;
@@ -8133,110 +8134,50 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		 bias) to zero here, so it is not set erroneously to the pointer
 		 size later on in gimplify.cc.  */
 	      OMP_CLAUSE_SIZE (c) = size_zero_node;
-	    if (REFERENCE_REF_P (t)
-		&& TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF)
+
+	    cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t);
+
+	    if (!ai.check_clause (c))
 	      {
-		t = TREE_OPERAND (t, 0);
-		if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
-		    && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH)
-		  OMP_CLAUSE_DECL (c) = t;
+		remove = true;
+		break;
 	      }
-	    while (TREE_CODE (t) == INDIRECT_REF
-		   || TREE_CODE (t) == ARRAY_REF)
+
+	    if (!ai.map_supported_p ())
 	      {
-		t = TREE_OPERAND (t, 0);
-		STRIP_NOPS (t);
-		if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-		  t = TREE_OPERAND (t, 0);
+		sorry_at (OMP_CLAUSE_LOCATION (c),
+			  "unsupported map expression %qE",
+			  OMP_CLAUSE_DECL (c));
+		remove = true;
+		break;
 	      }
-	    while (TREE_CODE (t) == COMPOUND_EXPR)
-	      {
-		t = TREE_OPERAND (t, 1);
-		STRIP_NOPS (t);
-	      }
-	    if (TREE_CODE (t) == COMPONENT_REF
-		&& invalid_nonstatic_memfn_p (EXPR_LOCATION (t), t,
-					      tf_warning_or_error))
-	      remove = true;
-	    indir_component_ref_p = false;
-	    if (TREE_CODE (t) == COMPONENT_REF
-		&& (TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF
-		    || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF))
-	      {
-		t = TREE_OPERAND (TREE_OPERAND (t, 0), 0);
-		indir_component_ref_p = true;
-		STRIP_NOPS (t);
-		if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-		  t = TREE_OPERAND (t, 0);
-	      }
-	    if (TREE_CODE (t) == COMPONENT_REF
-		&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_)
-	      {
-		if (type_dependent_expression_p (t))
-		  break;
-		if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL
-		    && DECL_BIT_FIELD (TREE_OPERAND (t, 1)))
-		  {
-		    error_at (OMP_CLAUSE_LOCATION (c),
-			      "bit-field %qE in %qs clause",
-			      t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-		    remove = true;
-		  }
-		else if (!omp_mappable_type (TREE_TYPE (t)))
-		  {
-		    error_at (OMP_CLAUSE_LOCATION (c),
-			      "%qE does not have a mappable type in %qs clause",
-			      t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-		    if (TREE_TYPE (t) != error_mark_node
-			&& !COMPLETE_TYPE_P (TREE_TYPE (t)))
-		      cxx_incomplete_type_inform (TREE_TYPE (t));
-		    remove = true;
-		  }
-		while (TREE_CODE (t) == COMPONENT_REF)
-		  {
-		    if (TREE_TYPE (TREE_OPERAND (t, 0))
-			&& (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0)))
-			    == UNION_TYPE))
-		      {
-			error_at (OMP_CLAUSE_LOCATION (c),
-				  "%qE is a member of a union", t);
-			remove = true;
-			break;
-		      }
-		    t = TREE_OPERAND (t, 0);
-		    if (TREE_CODE (t) == MEM_REF)
-		      {
-			if (maybe_ne (mem_ref_offset (t), 0))
-			  error_at (OMP_CLAUSE_LOCATION (c),
-				    "cannot dereference %qE in %qs clause", t,
-				    omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
-			else
-			  t = TREE_OPERAND (t, 0);
-		      }
-		    while (TREE_CODE (t) == MEM_REF
-			   || TREE_CODE (t) == INDIRECT_REF
-			   || TREE_CODE (t) == ARRAY_REF)
-		      {
-			t = TREE_OPERAND (t, 0);
-			STRIP_NOPS (t);
-			if (TREE_CODE (t) == POINTER_PLUS_EXPR)
-			  t = TREE_OPERAND (t, 0);
-		      }
-		  }
-		if (remove)
-		  break;
-		if (REFERENCE_REF_P (t))
-		  t = TREE_OPERAND (t, 0);
-		if (VAR_P (t) || TREE_CODE (t) == PARM_DECL)
-		  {
-		    if (bitmap_bit_p (&map_field_head, DECL_UID (t))
-			|| (ort != C_ORT_ACC
-			    && bitmap_bit_p (&map_head, DECL_UID (t))))
-		      goto handle_map_references;
-		  }
-	      }
-	    if (!processing_template_decl
-		&& TREE_CODE (t) == FIELD_DECL)
+
+	    gcc_assert ((addr_tokens[0]->type == ARRAY_BASE
+			 || addr_tokens[0]->type == STRUCTURE_BASE)
+			&& addr_tokens[1]->type == ACCESS_METHOD);
+
+	    t = addr_tokens[1]->expr;
+
+	    /* This is used to prevent cxx_mark_addressable from being called
+	       on 'this' for expressions like 'this->a', i.e. typical member
+	       accesses.  */
+	    indir_component_ref_p
+	      = (addr_tokens[0]->type == STRUCTURE_BASE
+		 && addr_tokens[1]->u.access_kind != ACCESS_DIRECT);
+
+	    if (addr_tokens[0]->u.structure_base_kind != BASE_DECL)
+	      goto skip_decl_checks;
+
+	    /* For OpenMP, both a struct "t" and a field access "t.d" may be
+	       mapped on the same directive.  OpenACC likewise allows multiple
+	       fields of the same structure to be mapped together.  */
+	    if (addr_tokens[0]->type == STRUCTURE_BASE
+		&& (bitmap_bit_p (&map_field_head, DECL_UID (t))
+		    || (ort != C_ORT_ACC
+			&& bitmap_bit_p (&map_head, DECL_UID (t)))))
+	      goto skip_decl_checks;
+
+	    if (!processing_template_decl && TREE_CODE (t) == FIELD_DECL)
 	      {
 		OMP_CLAUSE_DECL (c)
 		  = finish_non_static_data_member (t, NULL_TREE, NULL_TREE);
@@ -8274,12 +8215,17 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 			 || (OMP_CLAUSE_MAP_KIND (c)
 			     != GOMP_MAP_FIRSTPRIVATE_POINTER))
 		     && !indir_component_ref_p
+		     && (t != current_class_ptr
+			 || OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP
+			 || OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH)
 		     && !cxx_mark_addressable (t))
 	      remove = true;
 	    else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
 		       && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER
 			   || (OMP_CLAUSE_MAP_KIND (c)
-			       == GOMP_MAP_FIRSTPRIVATE_POINTER)))
+			       == GOMP_MAP_FIRSTPRIVATE_POINTER)
+			   || (OMP_CLAUSE_MAP_KIND (c)
+			       == GOMP_MAP_ATTACH_DETACH)))
 		     && t == OMP_CLAUSE_DECL (c)
 		     && !type_dependent_expression_p (t)
 		     && !omp_mappable_type (TYPE_REF_P (TREE_TYPE (t))
@@ -8323,20 +8269,20 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 		    remove = true;
 		  }
 		else if (bitmap_bit_p (&map_head, DECL_UID (t))
-			 && !bitmap_bit_p (&map_field_head, DECL_UID (t)))
+			 && !bitmap_bit_p (&map_field_head, DECL_UID (t))
+			 && ort == C_ORT_ACC)
 		  {
-		    if (ort == C_ORT_ACC)
-		      error_at (OMP_CLAUSE_LOCATION (c),
-				"%qD appears more than once in data clauses",
-				t);
-		    else
-		      error_at (OMP_CLAUSE_LOCATION (c),
-				"%qD appears both in data and map clauses", t);
+		    error_at (OMP_CLAUSE_LOCATION (c),
+			      "%qD appears more than once in data clauses", t);
 		    remove = true;
 		  }
 		else
 		  bitmap_set_bit (&map_firstprivate_head, DECL_UID (t));
 	      }
+	    else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
+		     && (OMP_CLAUSE_MAP_KIND (c)
+			 == GOMP_MAP_FIRSTPRIVATE_REFERENCE))
+	      bitmap_set_bit (&map_firstprivate_head, DECL_UID (t));
 	    else if (bitmap_bit_p (&map_head, DECL_UID (t))
 		     && !bitmap_bit_p (&map_field_head, DECL_UID (t)))
 	      {
@@ -8369,7 +8315,7 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 			    "%qD appears both in data and map clauses", t);
 		remove = true;
 	      }
-	    else
+	    else if (!omp_access_chain_p (addr_tokens, 1))
 	      {
 		bitmap_set_bit (&map_head, DECL_UID (t));
 
@@ -8383,49 +8329,31 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 								    0))))))
 		  bitmap_set_bit (&map_field_head, DECL_UID (t));
 	      }
-	  handle_map_references:
+
+	  skip_decl_checks:
+	       /* If we call expand_map_clause in handle_omp_array_sections,
+	       the containing loop (here) iterates through the new nodes
+	       created by that expansion.  Avoid expanding those again (just
+	       by checking the node type).  */
 	    if (!remove
 		&& !processing_template_decl
 		&& ort != C_ORT_DECLARE_SIMD
-		&& TYPE_REF_P (TREE_TYPE (OMP_CLAUSE_DECL (c))))
+		&& (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP
+		    || ((OMP_CLAUSE_MAP_KIND (c)
+			 != GOMP_MAP_FIRSTPRIVATE_POINTER)
+			&& (OMP_CLAUSE_MAP_KIND (c)
+			    != GOMP_MAP_FIRSTPRIVATE_REFERENCE)
+			&& OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ALWAYS_POINTER
+			&& OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH)))
 	      {
-		t = OMP_CLAUSE_DECL (c);
-		if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
-		  {
-		    OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t);
-		    if (OMP_CLAUSE_SIZE (c) == NULL_TREE)
-		      OMP_CLAUSE_SIZE (c)
-			= TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t)));
-		  }
-		else if (OMP_CLAUSE_MAP_KIND (c)
-			 != GOMP_MAP_FIRSTPRIVATE_POINTER
-			 && (OMP_CLAUSE_MAP_KIND (c)
-			     != GOMP_MAP_FIRSTPRIVATE_REFERENCE)
-			 && (OMP_CLAUSE_MAP_KIND (c)
-			     != GOMP_MAP_ALWAYS_POINTER)
-			 && (OMP_CLAUSE_MAP_KIND (c)
-			     != GOMP_MAP_ATTACH_DETACH))
-		  {
-		    grp_start_p = pc;
-		    grp_sentinel = OMP_CLAUSE_CHAIN (c);
-
-		    tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
-						OMP_CLAUSE_MAP);
-		    if (TREE_CODE (t) == COMPONENT_REF)
-		      OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
-		    else
-		      OMP_CLAUSE_SET_MAP_KIND (c2,
-					       GOMP_MAP_FIRSTPRIVATE_REFERENCE);
-		    OMP_CLAUSE_DECL (c2) = t;
-		    OMP_CLAUSE_SIZE (c2) = size_zero_node;
-		    OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
-		    OMP_CLAUSE_CHAIN (c) = c2;
-		    OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t);
-		    if (OMP_CLAUSE_SIZE (c) == NULL_TREE)
-		      OMP_CLAUSE_SIZE (c)
-			= TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t)));
-		    c = c2;
-		  }
+		grp_start_p = pc;
+		grp_sentinel = OMP_CLAUSE_CHAIN (c);
+		tree nc = ai.expand_map_clause (c, OMP_CLAUSE_DECL (c),
+						addr_tokens,
+						(ort == C_ORT_OMP_TARGET
+						 || ort == C_ORT_ACC));
+		if (nc != error_mark_node)
+		  c = nc;
 	      }
 	  }
 	  break;
@@ -8824,7 +8752,8 @@  finish_omp_clauses (tree clauses, enum c_omp_region_type ort)
 	  if (grp_start_p)
 	    {
 	      /* If we found a clause to remove, we want to remove the whole
-		 expanded group, otherwise gimplify can get confused.  */
+		 expanded group, otherwise gimplify
+		 (omp_resolve_clause_dependencies) can get confused.  */
 	      *grp_start_p = grp_sentinel;
 	      pc = grp_start_p;
 	      grp_start_p = NULL;
diff --git a/gcc/fortran/trans-openmp.cc b/gcc/fortran/trans-openmp.cc
index 45b136d1acbe..82cad05b4748 100644
--- a/gcc/fortran/trans-openmp.cc
+++ b/gcc/fortran/trans-openmp.cc
@@ -2404,8 +2404,9 @@  static vec<tree, va_heap, vl_embed> *doacross_steps;
 
 static void
 gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n,
-			     tree decl, bool element, gomp_map_kind ptr_kind,
-			     tree &node, tree &node2, tree &node3, tree &node4)
+			     tree decl, bool element, bool openmp,
+			     gomp_map_kind ptr_kind, tree &node, tree &node2,
+			     tree &node3, tree &node4)
 {
   gfc_se se;
   tree ptr, ptr2;
@@ -2497,7 +2498,7 @@  gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n,
 	 struct - and adding an 'alloc: for the 'desc.data' pointer, which
 	 would break as the 'desc' (the descriptor) is also mapped
 	 (see node4 above).  */
-      if (ptr_kind == GOMP_MAP_ATTACH_DETACH)
+      if (ptr_kind == GOMP_MAP_ATTACH_DETACH && !openmp)
 	STRIP_NOPS (OMP_CLAUSE_DECL (node3));
     }
   else
@@ -2515,7 +2516,7 @@  gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n,
 			       decl, offset, NULL_TREE, NULL_TREE);
 	  OMP_CLAUSE_DECL (node) = offset;
 
-	  if (ptr_kind == GOMP_MAP_ALWAYS_POINTER)
+	  if (ptr_kind == GOMP_MAP_ATTACH_DETACH && openmp)
 	    return;
 	}
       else
@@ -3361,8 +3362,9 @@  gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses,
 		      && !(POINTER_TYPE_P (type)
 			   && GFC_DESCRIPTOR_TYPE_P (TREE_TYPE (type))))
 		    k = GOMP_MAP_FIRSTPRIVATE_POINTER;
-		  gfc_trans_omp_array_section (block, n, decl, element, k,
-					       node, node2, node3, node4);
+		  gfc_trans_omp_array_section (block, n, decl, element,
+					       !openacc, k, node, node2,
+					       node3, node4);
 		}
 	      else if (n->expr
 		       && n->expr->expr_type == EXPR_VARIABLE
@@ -3388,10 +3390,7 @@  gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses,
 		    {
 		      node2 = build_omp_clause (input_location,
 						OMP_CLAUSE_MAP);
-		      gomp_map_kind kind
-			= (openacc ? GOMP_MAP_ATTACH_DETACH
-				   : GOMP_MAP_ALWAYS_POINTER);
-		      OMP_CLAUSE_SET_MAP_KIND (node2, kind);
+		      OMP_CLAUSE_SET_MAP_KIND (node2, GOMP_MAP_ATTACH_DETACH);
 		      OMP_CLAUSE_DECL (node2)
 			= POINTER_TYPE_P (TREE_TYPE (se.expr))
 			  ? se.expr
@@ -3476,9 +3475,7 @@  gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses,
 			  node2 = build_omp_clause (input_location,
 						    OMP_CLAUSE_MAP);
 			  OMP_CLAUSE_SET_MAP_KIND (node2,
-						   openacc
-						   ? GOMP_MAP_ATTACH_DETACH
-						   : GOMP_MAP_ALWAYS_POINTER);
+						   GOMP_MAP_ATTACH_DETACH);
 			  OMP_CLAUSE_DECL (node2) = build_fold_addr_expr (data);
 			  OMP_CLAUSE_SIZE (node2) = size_int (0);
 			}
@@ -3563,9 +3560,7 @@  gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses,
 			  node3 = build_omp_clause (input_location,
 						    OMP_CLAUSE_MAP);
 			  OMP_CLAUSE_SET_MAP_KIND (node3,
-						   openacc
-						   ? GOMP_MAP_ATTACH_DETACH
-						   : GOMP_MAP_ALWAYS_POINTER);
+						   GOMP_MAP_ATTACH_DETACH);
 			  OMP_CLAUSE_DECL (node3)
 			    = gfc_conv_descriptor_data_get (inner);
 			  /* Similar to gfc_trans_omp_array_section (details
@@ -3588,11 +3583,10 @@  gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses,
 		    {
 		      /* An array element or section.  */
 		      bool element = lastref->u.ar.type == AR_ELEMENT;
-		      gomp_map_kind kind = (openacc ? GOMP_MAP_ATTACH_DETACH
-						    : GOMP_MAP_ALWAYS_POINTER);
+		      gomp_map_kind kind = GOMP_MAP_ATTACH_DETACH;
 		      gfc_trans_omp_array_section (block, n, inner, element,
-						   kind, node, node2, node3,
-						   node4);
+						   !openacc, kind, node, node2,
+						   node3, node4);
 		    }
 		  else
 		    gcc_unreachable ();
diff --git a/gcc/gimplify.cc b/gcc/gimplify.cc
index c9c800a58509..f4eb092f8771 100644
--- a/gcc/gimplify.cc
+++ b/gcc/gimplify.cc
@@ -8887,8 +8887,7 @@  build_omp_struct_comp_nodes (enum tree_code code, tree grp_start, tree grp_end,
 
   if (grp_mid
       && OMP_CLAUSE_CODE (grp_mid) == OMP_CLAUSE_MAP
-      && (OMP_CLAUSE_MAP_KIND (grp_mid) == GOMP_MAP_ALWAYS_POINTER
-	  || OMP_CLAUSE_MAP_KIND (grp_mid) == GOMP_MAP_ATTACH_DETACH))
+      && OMP_CLAUSE_MAP_KIND (grp_mid) == GOMP_MAP_ALWAYS_POINTER)
     {
       tree c3
 	= build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP);
@@ -8968,6 +8967,12 @@  struct omp_mapping_group {
   /* If we've removed the group but need to reindex, mark the group as
      deleted.  */
   bool deleted;
+  /* The group points to an already-created "GOMP_MAP_STRUCT
+     GOMP_MAP_ATTACH_DETACH" pair.  */
+  bool reprocess_struct;
+  /* The group should use "zero-length" allocations for pointers that are not
+     mapped "to" on the same directive.  */
+  bool fragile;
   struct omp_mapping_group *sibling;
   struct omp_mapping_group *next;
 };
@@ -9009,38 +9014,6 @@  omp_get_base_pointer (tree expr)
   return NULL_TREE;
 }
 
-/* Remove COMPONENT_REFS and indirections from EXPR.  */
-
-static tree
-omp_strip_components_and_deref (tree expr)
-{
-  while (TREE_CODE (expr) == COMPONENT_REF
-	 || TREE_CODE (expr) == INDIRECT_REF
-	 || (TREE_CODE (expr) == MEM_REF
-	     && integer_zerop (TREE_OPERAND (expr, 1)))
-	 || TREE_CODE (expr) == POINTER_PLUS_EXPR
-	 || TREE_CODE (expr) == COMPOUND_EXPR)
-      if (TREE_CODE (expr) == COMPOUND_EXPR)
-	expr = TREE_OPERAND (expr, 1);
-      else
-	expr = TREE_OPERAND (expr, 0);
-
-  STRIP_NOPS (expr);
-
-  return expr;
-}
-
-static tree
-omp_strip_indirections (tree expr)
-{
-  while (TREE_CODE (expr) == INDIRECT_REF
-	 || (TREE_CODE (expr) == MEM_REF
-	     && integer_zerop (TREE_OPERAND (expr, 1))))
-    expr = TREE_OPERAND (expr, 0);
-
-  return expr;
-}
-
 /* An attach or detach operation depends directly on the address being
    attached/detached.  Return that address, or none if there are no
    attachments/detachments.  */
@@ -9242,6 +9215,8 @@  omp_gather_mapping_groups_1 (tree *list_p, vec<omp_mapping_group> *groups,
       grp.mark = UNVISITED;
       grp.sibling = NULL;
       grp.deleted = false;
+      grp.reprocess_struct = false;
+      grp.fragile = false;
       grp.next = NULL;
       groups->safe_push (grp);
 
@@ -9369,6 +9344,8 @@  omp_group_base (omp_mapping_group *grp, unsigned int *chained,
 	    *firstprivate = OMP_CLAUSE_DECL (node);
 	    node = OMP_CLAUSE_CHAIN (node);
 	  }
+	else if (OMP_CLAUSE_MAP_KIND (node) == GOMP_MAP_ATTACH_DETACH)
+	  node = OMP_CLAUSE_CHAIN (node);
 	*chained = num_mappings;
 	return node;
       }
@@ -9420,6 +9397,9 @@  omp_index_mapping_groups_1 (hash_map<tree_operand_hash,
       if (reindexing && !above_hwm)
 	continue;
 
+      if (grp->reprocess_struct)
+	continue;
+
       tree fpp;
       unsigned int chained;
       tree node = omp_group_base (grp, &chained, &fpp);
@@ -9913,6 +9893,89 @@  omp_lastprivate_for_combined_outer_constructs (struct gimplify_omp_ctx *octx,
     omp_notice_variable (octx, decl, true);
 }
 
+/* We might have indexed several groups for DECL, e.g. a "TO" mapping and also
+   a "FIRSTPRIVATE" mapping.  Return the one that isn't firstprivate, etc.  */
+
+static omp_mapping_group *
+omp_get_nonfirstprivate_group (hash_map<tree_operand_hash,
+					omp_mapping_group *> *grpmap,
+			       tree decl, bool allow_deleted = false)
+{
+  omp_mapping_group **to_group_p = grpmap->get (decl);
+
+  if (!to_group_p)
+    return NULL;
+
+  omp_mapping_group *to_group = *to_group_p;
+
+  for (; to_group; to_group = to_group->sibling)
+    {
+      tree grp_end = to_group->grp_end;
+      switch (OMP_CLAUSE_MAP_KIND (grp_end))
+	{
+	case GOMP_MAP_FIRSTPRIVATE_POINTER:
+	case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+	  break;
+
+	default:
+	  if (allow_deleted || !to_group->deleted)
+	    return to_group;
+	}
+    }
+
+  return NULL;
+}
+
+/* Return TRUE if the directive (whose clauses are described by the hash table
+   of mapping groups, GRPMAP) maps DECL explicitly.  If TO_SPECIFICALLY is
+   true, only count TO mappings.  If ALLOW_DELETED is true, ignore the
+   "deleted" flag for groups.  If CONTAINED_IN_STRUCT is true, also return
+   TRUE if DECL is mapped as a member of a whole-struct mapping.  */
+
+static bool
+omp_directive_maps_explicitly (hash_map<tree_operand_hash,
+					omp_mapping_group *> *grpmap,
+			       tree decl, omp_mapping_group **base_group,
+			       bool to_specifically, bool allow_deleted,
+			       bool contained_in_struct)
+{
+  omp_mapping_group *decl_group
+    = omp_get_nonfirstprivate_group (grpmap, decl, allow_deleted);
+
+  *base_group = NULL;
+
+  if (decl_group)
+    {
+      tree grp_first = *decl_group->grp_start;
+      /* We might be called during omp_build_struct_sibling_lists, when
+	 GOMP_MAP_STRUCT might have been inserted at the start of the group.
+	 Skip over that, and also possibly the node after it.  */
+      if (OMP_CLAUSE_MAP_KIND (grp_first) == GOMP_MAP_STRUCT)
+	{
+	  grp_first = OMP_CLAUSE_CHAIN (grp_first);
+	  if (OMP_CLAUSE_MAP_KIND (grp_first) == GOMP_MAP_FIRSTPRIVATE_POINTER
+	      || (OMP_CLAUSE_MAP_KIND (grp_first)
+		  == GOMP_MAP_FIRSTPRIVATE_REFERENCE)
+	      || OMP_CLAUSE_MAP_KIND (grp_first) == GOMP_MAP_ATTACH_DETACH)
+	    grp_first = OMP_CLAUSE_CHAIN (grp_first);
+	}
+      enum gomp_map_kind first_kind = OMP_CLAUSE_MAP_KIND (grp_first);
+      if (!to_specifically
+	  || GOMP_MAP_COPY_TO_P (first_kind)
+	  || first_kind == GOMP_MAP_ALLOC)
+	{
+	  *base_group = decl_group;
+	  return true;
+	}
+    }
+
+  if (contained_in_struct
+      && omp_mapped_by_containing_struct (grpmap, decl, base_group))
+    return true;
+
+  return false;
+}
+
 /* If we have mappings INNER and OUTER, where INNER is a component access and
    OUTER is a mapping of the whole containing struct, check that the mappings
    are compatible.  We'll be deleting the inner mapping, so we need to make
@@ -9979,6 +10042,257 @@  omp_check_mapping_compatibility (location_t loc,
   return false;
 }
 
+/* This function handles several cases where clauses on a mapping directive
+   can interact with each other.
+
+   If we have a FIRSTPRIVATE_POINTER node and we're also mapping the pointer
+   on the same directive, change the mapping of the first node to
+   ATTACH_DETACH.  We should have detected that this will happen already in
+   c-omp.cc:c_omp_adjust_map_clauses and marked the appropriate decl
+   as addressable.  (If we didn't, bail out.)
+
+   If we have a FIRSTPRIVATE_REFERENCE (for a reference to pointer) and we're
+   mapping the base pointer also, we may need to change the mapping type to
+   ATTACH_DETACH and synthesize an alloc node for the reference itself.
+
+   If we have an ATTACH_DETACH node, this is an array section with a pointer
+   base.  If we're mapping the base on the same directive too, we can drop its
+   mapping.  However, if we have a reference to pointer, make other appropriate
+   adjustments to the mapping nodes instead.
+
+   If we have a component access but we're also mapping the whole of the
+   containing struct, drop the former access.
+
+   If the expression is a component access, and we're also mapping a base
+   pointer used in that component access in the same expression, change the
+   mapping type of the latter to ALLOC (ready for processing by
+   omp_build_struct_sibling_lists).  */
+
+void
+omp_resolve_clause_dependencies (enum tree_code code,
+				 vec<omp_mapping_group> *groups,
+				 hash_map<tree_operand_hash,
+					  omp_mapping_group *> *grpmap)
+{
+  int i;
+  omp_mapping_group *grp;
+  bool repair_chain = false;
+
+  FOR_EACH_VEC_ELT (*groups, i, grp)
+    {
+      tree grp_end = grp->grp_end;
+      tree decl = OMP_CLAUSE_DECL (grp_end);
+
+      gcc_assert (OMP_CLAUSE_CODE (grp_end) == OMP_CLAUSE_MAP);
+
+      switch (OMP_CLAUSE_MAP_KIND (grp_end))
+	{
+	case GOMP_MAP_FIRSTPRIVATE_POINTER:
+	  {
+	    omp_mapping_group *to_group
+	      = omp_get_nonfirstprivate_group (grpmap, decl);
+
+	    if (!to_group || to_group == grp)
+	      continue;
+
+	    tree grp_first = *to_group->grp_start;
+	    enum gomp_map_kind first_kind = OMP_CLAUSE_MAP_KIND (grp_first);
+
+	    if ((GOMP_MAP_COPY_TO_P (first_kind)
+		 || first_kind == GOMP_MAP_ALLOC)
+		&& (OMP_CLAUSE_MAP_KIND (to_group->grp_end)
+		    != GOMP_MAP_FIRSTPRIVATE_POINTER))
+	      {
+		gcc_assert (TREE_ADDRESSABLE (OMP_CLAUSE_DECL (grp_end)));
+		OMP_CLAUSE_SET_MAP_KIND (grp_end, GOMP_MAP_ATTACH_DETACH);
+	      }
+	  }
+	  break;
+
+	case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+	  {
+	    tree ptr = build_fold_indirect_ref (decl);
+
+	    omp_mapping_group *to_group
+	      = omp_get_nonfirstprivate_group (grpmap, ptr);
+
+	    if (!to_group || to_group == grp)
+	      continue;
+
+	    tree grp_first = *to_group->grp_start;
+	    enum gomp_map_kind first_kind = OMP_CLAUSE_MAP_KIND (grp_first);
+
+	    if (GOMP_MAP_COPY_TO_P (first_kind)
+		|| first_kind == GOMP_MAP_ALLOC)
+	      {
+		OMP_CLAUSE_SET_MAP_KIND (grp_end, GOMP_MAP_ATTACH_DETACH);
+		OMP_CLAUSE_DECL (grp_end) = ptr;
+		if ((OMP_CLAUSE_CHAIN (*to_group->grp_start)
+		     == to_group->grp_end)
+		    && (OMP_CLAUSE_MAP_KIND (to_group->grp_end)
+			== GOMP_MAP_FIRSTPRIVATE_REFERENCE))
+		  {
+		    gcc_assert (TREE_ADDRESSABLE
+				  (OMP_CLAUSE_DECL (to_group->grp_end)));
+		    OMP_CLAUSE_SET_MAP_KIND (to_group->grp_end,
+					     GOMP_MAP_ATTACH_DETACH);
+
+		    location_t loc = OMP_CLAUSE_LOCATION (to_group->grp_end);
+		    tree alloc
+		      = build_omp_clause (loc, OMP_CLAUSE_MAP);
+		    OMP_CLAUSE_SET_MAP_KIND (alloc, GOMP_MAP_ALLOC);
+		    tree tmp = build_fold_addr_expr (OMP_CLAUSE_DECL
+						      (to_group->grp_end));
+		    tree char_ptr_type = build_pointer_type (char_type_node);
+		    OMP_CLAUSE_DECL (alloc)
+		      = build2 (MEM_REF, char_type_node,
+				tmp,
+				build_int_cst (char_ptr_type, 0));
+		    OMP_CLAUSE_SIZE (alloc) = TYPE_SIZE_UNIT (TREE_TYPE (tmp));
+
+		    OMP_CLAUSE_CHAIN (alloc)
+		      = OMP_CLAUSE_CHAIN (*to_group->grp_start);
+		    OMP_CLAUSE_CHAIN (*to_group->grp_start) = alloc;
+		  }
+	      }
+	  }
+	  break;
+
+	case GOMP_MAP_ATTACH_DETACH:
+	case GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION:
+	  {
+	    tree base_ptr, referenced_ptr_node = NULL_TREE;
+
+	    while (TREE_CODE (decl) == ARRAY_REF)
+	      decl = TREE_OPERAND (decl, 0);
+
+	    if (TREE_CODE (decl) == INDIRECT_REF)
+	      decl = TREE_OPERAND (decl, 0);
+
+	    /* Only component accesses.  */
+	    if (DECL_P (decl))
+	      continue;
+
+	    /* We want the pointer itself when checking if the base pointer is
+	       mapped elsewhere in the same directive -- if we have a
+	       reference to the pointer, don't use that.  */
+
+	    if (TREE_CODE (TREE_TYPE (decl)) == REFERENCE_TYPE
+		&& TREE_CODE (TREE_TYPE (TREE_TYPE (decl))) == POINTER_TYPE)
+	      {
+		referenced_ptr_node = OMP_CLAUSE_CHAIN (*grp->grp_start);
+		base_ptr = OMP_CLAUSE_DECL (referenced_ptr_node);
+	      }
+	    else
+	      base_ptr = decl;
+
+	    gomp_map_kind zlas_kind
+	      = (code == OACC_EXIT_DATA || code == OMP_TARGET_EXIT_DATA)
+		? GOMP_MAP_DETACH : GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION;
+
+	    if (TREE_CODE (TREE_TYPE (base_ptr)) == POINTER_TYPE)
+	      {
+		/* If we map the base TO, and we're doing an attachment, we can
+		   skip the TO mapping altogether and create an ALLOC mapping
+		   instead, since the attachment will overwrite the device
+		   pointer in that location immediately anyway.  Otherwise,
+		   change our mapping to
+		   GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION in case the
+		   attachment target has not been copied to the device already
+		   by some earlier directive.  */
+
+		bool base_mapped_to = false;
+
+		omp_mapping_group *base_group;
+
+		if (omp_directive_maps_explicitly (grpmap, base_ptr,
+						   &base_group, false, true,
+						   false))
+		  {
+		    if (referenced_ptr_node)
+		      {
+			base_mapped_to = true;
+			if ((OMP_CLAUSE_MAP_KIND (base_group->grp_end)
+			     == GOMP_MAP_ATTACH_DETACH)
+			    && (OMP_CLAUSE_CHAIN (*base_group->grp_start)
+				== base_group->grp_end))
+			  {
+			    OMP_CLAUSE_CHAIN (*base_group->grp_start)
+			      = OMP_CLAUSE_CHAIN (base_group->grp_end);
+			    base_group->grp_end = *base_group->grp_start;
+			    repair_chain = true;
+			  }
+		      }
+		    else
+		      {
+			base_group->deleted = true;
+			OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED (grp_end) = 1;
+		      }
+		  }
+
+		/* We're dealing with a reference to a pointer, and we are
+		   attaching both the reference and the pointer.  We know the
+		   reference itself is on the target, because we are going to
+		   create an ALLOC node for it in accumulate_sibling_list.  The
+		   pointer might be on the target already or it might not, but
+		   if it isn't then it's not an error, so use
+		   GOMP_MAP_ATTACH_ZLAS for it.  */
+		if (!base_mapped_to && referenced_ptr_node)
+		  OMP_CLAUSE_SET_MAP_KIND (referenced_ptr_node, zlas_kind);
+	      }
+	    else if (TREE_CODE (TREE_TYPE (base_ptr)) == REFERENCE_TYPE
+		     && (TREE_CODE (TREE_TYPE (TREE_TYPE (base_ptr)))
+			 == ARRAY_TYPE)
+		     && OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION
+			  (*grp->grp_start))
+	      OMP_CLAUSE_SET_MAP_KIND (grp->grp_end, zlas_kind);
+	  }
+	  break;
+
+	default:
+	  {
+	    omp_mapping_group *struct_group;
+	    if (omp_mapped_by_containing_struct (grpmap, decl, &struct_group)
+		&& *grp->grp_start == grp_end)
+	      {
+		omp_check_mapping_compatibility (OMP_CLAUSE_LOCATION (grp_end),
+						 struct_group, grp);
+		/* Remove the whole of this mapping -- redundant.  */
+		grp->deleted = true;
+	      }
+
+	    tree base = decl;
+	    while ((base = omp_get_base_pointer (base)))
+	      {
+		omp_mapping_group *base_group;
+
+		if (omp_directive_maps_explicitly (grpmap, base, &base_group,
+						   true, true, false))
+		  {
+		    tree grp_first = *base_group->grp_start;
+		    OMP_CLAUSE_SET_MAP_KIND (grp_first, GOMP_MAP_ALLOC);
+		  }
+	      }
+	  }
+	}
+    }
+
+  if (repair_chain)
+    {
+      /* Group start pointers may have become detached from the
+	 OMP_CLAUSE_CHAIN of previous groups if elements were removed from the
+	 end of those groups.  Fix that now.  */
+      tree *new_next = NULL;
+      FOR_EACH_VEC_ELT (*groups, i, grp)
+	{
+	  if (new_next)
+	    grp->grp_start = new_next;
+
+	  new_next = &OMP_CLAUSE_CHAIN (grp->grp_end);
+	}
+    }
+}
+
 /* Similar to omp_resolve_clause_dependencies, but for OpenACC.  The only
    clause dependencies we handle for now are struct element mappings and
    whole-struct mappings on the same directive, and duplicate clause
@@ -10196,6 +10510,59 @@  omp_siblist_move_concat_nodes_after (tree first_new, tree *last_new_tail,
   return continue_at;
 }
 
+/* Expand a chained access.  We only expect to see a quite limited range of
+   expression types here, because e.g. you can't have an array of
+   references.  See also c-omp.cc:omp_expand_access_chain.  */
+
+static void
+omp_expand_access_chain (location_t loc, tree **list_pp, tree expr,
+			 vec<omp_addr_token *> &addr_tokens,
+			 unsigned *idx, gomp_map_kind kind)
+{
+  using namespace omp_addr_tokenizer;
+  unsigned i = *idx;
+  tree c = NULL_TREE;
+
+  switch (addr_tokens[i]->u.access_kind)
+    {
+    case ACCESS_POINTER:
+    case ACCESS_POINTER_OFFSET:
+      {
+	tree virtual_origin
+	  = fold_convert_loc (loc, ptrdiff_type_node, addr_tokens[i]->expr);
+	tree data_addr = omp_accessed_addr (addr_tokens, i, expr);
+	c = build_omp_clause (loc, OMP_CLAUSE_MAP);
+	OMP_CLAUSE_SET_MAP_KIND (c, kind);
+	OMP_CLAUSE_DECL (c) = addr_tokens[i]->expr;
+	OMP_CLAUSE_SIZE (c)
+	  = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node,
+			     fold_convert_loc (loc, ptrdiff_type_node,
+					       data_addr),
+			     virtual_origin);
+      }
+      break;
+
+    case ACCESS_INDEXED_ARRAY:
+      break;
+
+    default:
+      return;
+    }
+
+  if (c)
+    {
+      OMP_CLAUSE_CHAIN (c) = **list_pp;
+      **list_pp = c;
+      *list_pp = &OMP_CLAUSE_CHAIN (c);
+    }
+
+  *idx = ++i;
+
+  if (addr_tokens[i]->type == ACCESS_METHOD
+      && omp_access_chain_p (addr_tokens, i))
+    omp_expand_access_chain (loc, list_pp, expr, addr_tokens, idx, kind);
+}
+
 /* Mapping struct members causes an additional set of nodes to be created,
    starting with GOMP_MAP_STRUCT followed by a number of mappings equal to the
    number of members being mapped, in order of ascending position (address or
@@ -10237,9 +10604,15 @@  static tree *
 omp_accumulate_sibling_list (enum omp_region_type region_type,
 			     enum tree_code code,
 			     hash_map<tree_operand_hash, tree>
-			       *&struct_map_to_clause, tree *grp_start_p,
-			     tree grp_end, tree *inner)
+			       *&struct_map_to_clause,
+			     hash_map<tree_operand_hash, omp_mapping_group *>
+			       *group_map,
+			     tree *grp_start_p, tree grp_end,
+			     vec<omp_addr_token *> &addr_tokens, tree **inner,
+			     bool *fragile_p, bool reprocessing_struct,
+			     tree **added_tail)
 {
+  using namespace omp_addr_tokenizer;
   poly_offset_int coffset;
   poly_int64 cbitpos;
   tree ocd = OMP_CLAUSE_DECL (grp_end);
@@ -10249,118 +10622,265 @@  omp_accumulate_sibling_list (enum omp_region_type region_type,
   while (TREE_CODE (ocd) == ARRAY_REF)
     ocd = TREE_OPERAND (ocd, 0);
 
-  if (TREE_CODE (ocd) == INDIRECT_REF)
-    ocd = TREE_OPERAND (ocd, 0);
+  if (*fragile_p)
+    {
+      omp_mapping_group *to_group
+	= omp_get_nonfirstprivate_group (group_map, ocd, true);
+
+      if (to_group)
+	return NULL;
+    }
+
+  omp_addr_token *last_token = addr_tokens[addr_tokens.length () - 1];
+  if (last_token->type == ACCESS_METHOD)
+    {
+      switch (last_token->u.access_kind)
+	{
+	case ACCESS_REF:
+	case ACCESS_REF_TO_POINTER:
+	case ACCESS_REF_TO_POINTER_OFFSET:
+	case ACCESS_INDEXED_REF_TO_ARRAY:
+	  /* We may see either a bare reference or a dereferenced
+	     "convert_from_reference"-like one here.  Handle either way.  */
+	  if (TREE_CODE (ocd) == INDIRECT_REF)
+	    ocd = TREE_OPERAND (ocd, 0);
+	  gcc_assert (TREE_CODE (TREE_TYPE (ocd)) == REFERENCE_TYPE);
+	  break;
+
+	default:
+	  ;
+	}
+    }
 
   tree base = extract_base_bit_offset (ocd, &cbitpos, &coffset);
 
+  int base_token;
+  for (base_token = addr_tokens.length () - 1; base_token >= 0; base_token--)
+    {
+      if (addr_tokens[base_token]->type == ARRAY_BASE
+	  || addr_tokens[base_token]->type == STRUCTURE_BASE)
+	break;
+    }
+
+  /* The two expressions in the assertion below aren't quite the same: if we
+     have 'struct_base_decl access_indexed_array' for something like
+     "myvar[2].x" then base will be "myvar" and addr_tokens[base_token]->expr
+     will be "myvar[2]" -- the actual base of the structure.
+     The former interpretation leads to a strange situation where we get
+       struct(myvar) alloc(myvar[2].ptr1)
+     That is, the array of structures is kind of treated as one big structure
+     for the purposes of gathering sibling lists, etc.  */
+  /* gcc_assert (base == addr_tokens[base_token]->expr);  */
+
   bool ptr = (OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_ALWAYS_POINTER);
   bool attach_detach = ((OMP_CLAUSE_MAP_KIND (grp_end)
 			 == GOMP_MAP_ATTACH_DETACH)
 			|| (OMP_CLAUSE_MAP_KIND (grp_end)
 			    == GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION));
-  bool attach = (OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_ATTACH
-		 || OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_DETACH);
-
-  /* FIXME: If we're not mapping the base pointer in some other clause on this
-     directive, I think we want to create ALLOC/RELEASE here -- i.e. not
-     early-exit.  */
-  if (openmp && attach_detach)
-    return NULL;
 
   if (!struct_map_to_clause || struct_map_to_clause->get (base) == NULL)
     {
       tree l = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP);
-      gomp_map_kind k = attach ? GOMP_MAP_FORCE_PRESENT : GOMP_MAP_STRUCT;
-
-      OMP_CLAUSE_SET_MAP_KIND (l, k);
 
+      OMP_CLAUSE_SET_MAP_KIND (l, GOMP_MAP_STRUCT);
       OMP_CLAUSE_DECL (l) = unshare_expr (base);
+      OMP_CLAUSE_SIZE (l) = size_int (1);
 
-      OMP_CLAUSE_SIZE (l)
-	= (!attach ? size_int (1)
-	   : (DECL_P (OMP_CLAUSE_DECL (l))
-	      ? DECL_SIZE_UNIT (OMP_CLAUSE_DECL (l))
-	      : TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (l)))));
       if (struct_map_to_clause == NULL)
 	struct_map_to_clause = new hash_map<tree_operand_hash, tree>;
       struct_map_to_clause->put (base, l);
 
+      /* On first iterating through the clause list, we insert the struct node
+	 just before the component access node that triggers the initial
+	 omp_accumulate_sibling_list call for a particular sibling list (and
+	 it then forms the first entry in that list).  When reprocessing
+	 struct bases that are themselves component accesses, we insert the
+	 struct node on an off-side list to avoid inserting the new
+	 GOMP_MAP_STRUCT into the middle of the old one.  */
+      tree *insert_node_pos = reprocessing_struct ? *added_tail : grp_start_p;
+
       if (ptr || attach_detach)
 	{
 	  tree extra_node;
 	  tree alloc_node
 	    = build_omp_struct_comp_nodes (code, *grp_start_p, grp_end,
 					   &extra_node);
+	  tree *tail;
 	  OMP_CLAUSE_CHAIN (l) = alloc_node;
 
-	  tree *insert_node_pos = grp_start_p;
-
 	  if (extra_node)
 	    {
 	      OMP_CLAUSE_CHAIN (extra_node) = *insert_node_pos;
 	      OMP_CLAUSE_CHAIN (alloc_node) = extra_node;
+	      tail = &OMP_CLAUSE_CHAIN (extra_node);
 	    }
 	  else
-	    OMP_CLAUSE_CHAIN (alloc_node) = *insert_node_pos;
+	    {
+	      OMP_CLAUSE_CHAIN (alloc_node) = *insert_node_pos;
+	      tail = &OMP_CLAUSE_CHAIN (alloc_node);
+	    }
+
+	  /* For OpenMP semantics, we don't want to implicitly allocate
+	     space for the pointer here.  A FRAGILE_P node is only being
+	     created so that omp-low.cc is able to rewrite the struct
+	     properly.
+	     For references (to pointers), we want to actually allocate the
+	     space for the reference itself in the sorted list following the
+	     struct node.
+	     For pointers, we want to allocate space if we had an explicit
+	     mapping of the attachment point, but not otherwise.  */
+	  if (*fragile_p
+	      || (openmp
+		  && attach_detach
+		  && TREE_CODE (TREE_TYPE (ocd)) == POINTER_TYPE
+		  && !OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED (grp_end)))
+	    {
+	      if (!lang_GNU_Fortran ())
+	       /* In Fortran, pointers are dereferenced automatically, but may
+		  be unassociated.  So we still want to allocate space for the
+		  pointer (as the base for an attach operation that should be
+		  present in the same directive's clause list also).  */
+		OMP_CLAUSE_SIZE (alloc_node) = size_zero_node;
+	      OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (alloc_node) = 1;
+	    }
 
 	  *insert_node_pos = l;
+
+	  if (reprocessing_struct)
+	    {
+	      /* When reprocessing a struct node group used as the base of a
+		 subcomponent access, if we have a reference-to-pointer base,
+		 we will see:
+		   struct(**ptr) attach(*ptr)
+		 whereas for a non-reprocess-struct group, we see, e.g.:
+		   tofrom(**ptr) attach(*ptr) attach(ptr)
+		 and we create the "alloc" for the second "attach", i.e.
+		 for the reference itself.  When reprocessing a struct group we
+		 thus change the pointer attachment into a reference attachment
+		 by stripping the indirection.  (The attachment of the
+		 referenced pointer must happen elsewhere, either on the same
+		 directive, or otherwise.)  */
+	      tree adecl = OMP_CLAUSE_DECL (alloc_node);
+
+	      if ((TREE_CODE (adecl) == INDIRECT_REF
+		   || (TREE_CODE (adecl) == MEM_REF
+		       && integer_zerop (TREE_OPERAND (adecl, 1))))
+		  && (TREE_CODE (TREE_TYPE (TREE_OPERAND (adecl, 0)))
+		      == REFERENCE_TYPE)
+		  && (TREE_CODE (TREE_TYPE (TREE_TYPE
+			(TREE_OPERAND (adecl, 0)))) == POINTER_TYPE))
+		OMP_CLAUSE_DECL (alloc_node) = TREE_OPERAND (adecl, 0);
+
+	      *added_tail = tail;
+	    }
 	}
       else
 	{
 	  gcc_assert (*grp_start_p == grp_end);
-	  grp_start_p = omp_siblist_insert_node_after (l, grp_start_p);
+	  if (reprocessing_struct)
+	    {
+	      /* If we don't have an attach/detach node, this is a
+		 "target data" directive or similar, not an offload region.
+		 Synthesize an "alloc" node using just the initiating
+		 GOMP_MAP_STRUCT decl.  */
+	      gomp_map_kind k = (code == OMP_TARGET_EXIT_DATA
+				 || code == OACC_EXIT_DATA)
+				? GOMP_MAP_RELEASE : GOMP_MAP_ALLOC;
+	      tree alloc_node
+		= build_omp_clause (OMP_CLAUSE_LOCATION (grp_end),
+				    OMP_CLAUSE_MAP);
+	      OMP_CLAUSE_SET_MAP_KIND (alloc_node, k);
+	      OMP_CLAUSE_DECL (alloc_node) = unshare_expr (last_token->expr);
+	      OMP_CLAUSE_SIZE (alloc_node)
+		= TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (alloc_node)));
+
+	      OMP_CLAUSE_CHAIN (alloc_node) = OMP_CLAUSE_CHAIN (l);
+	      OMP_CLAUSE_CHAIN (l) = alloc_node;
+	      *insert_node_pos = l;
+	      *added_tail = &OMP_CLAUSE_CHAIN (alloc_node);
+	    }
+	  else
+	    grp_start_p = omp_siblist_insert_node_after (l, insert_node_pos);
 	}
 
-      tree noind = omp_strip_indirections (base);
+      unsigned last_access = base_token + 1;
 
-      if (!openmp
-	  && (region_type & ORT_TARGET)
-	  && TREE_CODE (noind) == COMPONENT_REF)
+      while (last_access + 1 < addr_tokens.length ()
+	     && addr_tokens[last_access + 1]->type == ACCESS_METHOD)
+	last_access++;
+
+      if ((region_type & ORT_TARGET)
+	  && addr_tokens[base_token + 1]->type == ACCESS_METHOD)
 	{
-	  /* The base for this component access is a struct component access
-	     itself.  Insert a node to be processed on the next iteration of
-	     our caller's loop, which will subsequently be turned into a new,
-	     inner GOMP_MAP_STRUCT mapping.
+	  bool base_ref = false;
+	  access_method_kinds access_kind
+	    = addr_tokens[last_access]->u.access_kind;
 
-	     We need to do this else the non-DECL_P base won't be
-	     rewritten correctly in the offloaded region.  */
+	  switch (access_kind)
+	    {
+	    case ACCESS_DIRECT:
+	    case ACCESS_INDEXED_ARRAY:
+	      return NULL;
+
+	    case ACCESS_REF:
+	    case ACCESS_REF_TO_POINTER:
+	    case ACCESS_REF_TO_POINTER_OFFSET:
+	    case ACCESS_INDEXED_REF_TO_ARRAY:
+	      base_ref = true;
+	      break;
+
+	    default:
+	      ;
+	    }
 	  tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end),
 				      OMP_CLAUSE_MAP);
-	  OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FORCE_PRESENT);
-	  OMP_CLAUSE_DECL (c2) = unshare_expr (noind);
-	  OMP_CLAUSE_SIZE (c2) = TYPE_SIZE_UNIT (TREE_TYPE (noind));
-	  *inner = c2;
-	  return NULL;
-	}
+	  enum gomp_map_kind mkind;
+	  omp_mapping_group *decl_group;
+	  tree use_base;
+	  switch (access_kind)
+	    {
+	    case ACCESS_POINTER:
+	    case ACCESS_POINTER_OFFSET:
+	      use_base = addr_tokens[last_access]->expr;
+	      break;
+	    case ACCESS_REF_TO_POINTER:
+	    case ACCESS_REF_TO_POINTER_OFFSET:
+	      use_base
+		= build_fold_indirect_ref (addr_tokens[last_access]->expr);
+	      break;
+	    default:
+	      use_base = addr_tokens[base_token]->expr;
+	    }
+	  bool mapped_to_p
+	    = omp_directive_maps_explicitly (group_map, use_base, &decl_group,
+					     true, false, true);
+	  if (addr_tokens[base_token]->type == STRUCTURE_BASE
+	      && DECL_P (addr_tokens[last_access]->expr)
+	      && !mapped_to_p)
+	    mkind = base_ref ? GOMP_MAP_FIRSTPRIVATE_REFERENCE
+			     : GOMP_MAP_FIRSTPRIVATE_POINTER;
+	  else
+	    mkind = GOMP_MAP_ATTACH_DETACH;
 
-      tree sdecl = omp_strip_components_and_deref (base);
-
-      if (POINTER_TYPE_P (TREE_TYPE (sdecl)) && (region_type & ORT_TARGET))
-	{
-	  tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end),
-				      OMP_CLAUSE_MAP);
-	  bool base_ref
-	    = (TREE_CODE (base) == INDIRECT_REF
-	       && ((TREE_CODE (TREE_TYPE (TREE_OPERAND (base, 0)))
-		    == REFERENCE_TYPE)
-		   || ((TREE_CODE (TREE_OPERAND (base, 0))
-			== INDIRECT_REF)
-		       && (TREE_CODE (TREE_TYPE (TREE_OPERAND
-						  (TREE_OPERAND (base, 0), 0)))
-			   == REFERENCE_TYPE))));
-	  enum gomp_map_kind mkind = base_ref ? GOMP_MAP_FIRSTPRIVATE_REFERENCE
-					      : GOMP_MAP_FIRSTPRIVATE_POINTER;
 	  OMP_CLAUSE_SET_MAP_KIND (c2, mkind);
-	  OMP_CLAUSE_DECL (c2) = sdecl;
+	  /* If we have a reference to pointer base, we want to attach the
+	     pointer here, not the reference.  The reference attachment happens
+	     elsewhere.  */
+	  bool ref_to_ptr
+	    = (access_kind == ACCESS_REF_TO_POINTER
+	       || access_kind == ACCESS_REF_TO_POINTER_OFFSET);
+	  tree sdecl = addr_tokens[last_access]->expr;
+	  tree sdecl_ptr = ref_to_ptr ? build_fold_indirect_ref (sdecl)
+				      : sdecl;
+	  /* For the FIRSTPRIVATE_REFERENCE after the struct node, we
+	     want to use the reference itself for the decl, but we
+	     still want to use the pointer to calculate the bias.  */
+	  OMP_CLAUSE_DECL (c2) = (mkind == GOMP_MAP_ATTACH_DETACH)
+				 ? sdecl_ptr : sdecl;
+	  sdecl = sdecl_ptr;
 	  tree baddr = build_fold_addr_expr (base);
 	  baddr = fold_convert_loc (OMP_CLAUSE_LOCATION (grp_end),
 				    ptrdiff_type_node, baddr);
-	  /* This isn't going to be good enough when we add support for more
-	     complicated lvalue expressions.  FIXME.  */
-	  if (TREE_CODE (TREE_TYPE (sdecl)) == REFERENCE_TYPE
-	      && TREE_CODE (TREE_TYPE (TREE_TYPE (sdecl))) == POINTER_TYPE)
-	    sdecl = build_simple_mem_ref (sdecl);
 	  tree decladdr = fold_convert_loc (OMP_CLAUSE_LOCATION (grp_end),
 					    ptrdiff_type_node, sdecl);
 	  OMP_CLAUSE_SIZE (c2)
@@ -10369,24 +10889,46 @@  omp_accumulate_sibling_list (enum omp_region_type region_type,
 	  /* Insert after struct node.  */
 	  OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (l);
 	  OMP_CLAUSE_CHAIN (l) = c2;
+
+	  if (addr_tokens[base_token]->type == STRUCTURE_BASE
+	      && (addr_tokens[base_token]->u.structure_base_kind
+		  == BASE_COMPONENT_EXPR)
+	      && mkind == GOMP_MAP_ATTACH_DETACH
+	      && addr_tokens[last_access]->u.access_kind != ACCESS_REF)
+	    {
+	      *inner = insert_node_pos;
+	      if (openmp)
+		*fragile_p = true;
+	      return NULL;
+	    }
 	}
 
+      if (addr_tokens[base_token]->type == STRUCTURE_BASE
+	  && (addr_tokens[base_token]->u.structure_base_kind
+	      == BASE_COMPONENT_EXPR)
+	  && addr_tokens[last_access]->u.access_kind == ACCESS_REF)
+	*inner = insert_node_pos;
+
       return NULL;
     }
   else if (struct_map_to_clause)
     {
       tree *osc = struct_map_to_clause->get (base);
       tree *sc = NULL, *scp = NULL;
+      unsigned HOST_WIDE_INT i, elems = tree_to_uhwi (OMP_CLAUSE_SIZE (*osc));
       sc = &OMP_CLAUSE_CHAIN (*osc);
       /* The struct mapping might be immediately followed by a
-	 FIRSTPRIVATE_POINTER and/or FIRSTPRIVATE_REFERENCE -- if it's an
-	 indirect access or a reference, or both.  (This added node is removed
-	 in omp-low.c after it has been processed there.)  */
-      if (*sc != grp_end
-	  && (OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_POINTER
-	      || OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_REFERENCE))
+	 FIRSTPRIVATE_POINTER, FIRSTPRIVATE_REFERENCE or an ATTACH_DETACH --
+	 if it's an indirect access or a reference, or if the structure base
+	 is not a decl.  The FIRSTPRIVATE_* nodes are removed in omp-low.c
+	 after they have been processed there, and ATTACH_DETACH nodes are
+	 recomputed and moved out of the GOMP_MAP_STRUCT construct once
+	 sibling list building is complete.  */
+      if (OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_POINTER
+	  || OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_REFERENCE
+	  || OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_ATTACH_DETACH)
 	sc = &OMP_CLAUSE_CHAIN (*sc);
-      for (; *sc != grp_end; sc = &OMP_CLAUSE_CHAIN (*sc))
+      for (i = 0; i < elems; i++, sc = &OMP_CLAUSE_CHAIN (*sc))
 	if ((ptr || attach_detach) && sc == grp_start_p)
 	  break;
 	else if (TREE_CODE (OMP_CLAUSE_DECL (*sc)) != COMPONENT_REF
@@ -10418,6 +10960,27 @@  omp_accumulate_sibling_list (enum omp_region_type region_type,
 	      break;
 	    if (scp)
 	      continue;
+	    if ((region_type & ORT_ACC) != 0)
+	      {
+		/* For OpenACC, allow (ignore) duplicate struct accesses in
+		   the middle of a mapping clause, e.g. "mystruct->foo" in:
+		   copy(mystruct->foo->bar) copy(mystruct->foo->qux).  */
+		if (reprocessing_struct
+		    && known_eq (coffset, offset)
+		    && known_eq (cbitpos, bitpos))
+		  return NULL;
+	      }
+	    else if (known_eq (coffset, offset)
+		     && known_eq (cbitpos, bitpos))
+	      {
+		/* Having two struct members at the same offset doesn't work,
+		   so make sure we don't add a duplicate here.  (We're
+		   allowed to ignore this.  Should we report the error?)  */
+		/*error_at (OMP_CLAUSE_LOCATION (grp_end),
+			  "duplicate struct member %qE in map clauses",
+			  OMP_CLAUSE_DECL (grp_end));*/
+		return NULL;
+	      }
 	    if (maybe_lt (coffset, offset)
 		|| (known_eq (coffset, offset)
 		    && maybe_lt (cbitpos, bitpos)))
@@ -10429,9 +10992,48 @@  omp_accumulate_sibling_list (enum omp_region_type region_type,
 	      }
 	  }
 
-      if (!attach)
-	OMP_CLAUSE_SIZE (*osc)
-	  = size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc), size_one_node);
+      OMP_CLAUSE_SIZE (*osc)
+	= size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc), size_one_node);
+
+      if (reprocessing_struct)
+	{
+	  /* If we're reprocessing a struct node, we don't want to do most of
+	     the list manipulation below.  We only need to handle the (pointer
+	     or reference) attach/detach case.  */
+	  tree extra_node, alloc_node;
+	  if (attach_detach)
+	    alloc_node = build_omp_struct_comp_nodes (code, *grp_start_p,
+						      grp_end, &extra_node);
+	  else
+	    {
+	      /* If we don't have an attach/detach node, this is a
+		 "target data" directive or similar, not an offload region.
+		 Synthesize an "alloc" node using just the initiating
+		 GOMP_MAP_STRUCT decl.  */
+	      gomp_map_kind k = (code == OMP_TARGET_EXIT_DATA
+				 || code == OACC_EXIT_DATA)
+				? GOMP_MAP_RELEASE : GOMP_MAP_ALLOC;
+	      alloc_node
+		= build_omp_clause (OMP_CLAUSE_LOCATION (grp_end),
+				    OMP_CLAUSE_MAP);
+	      OMP_CLAUSE_SET_MAP_KIND (alloc_node, k);
+	      OMP_CLAUSE_DECL (alloc_node) = unshare_expr (last_token->expr);
+	      OMP_CLAUSE_SIZE (alloc_node)
+		= TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (alloc_node)));
+	    }
+
+	  if (scp)
+	    omp_siblist_insert_node_after (alloc_node, scp);
+	  else
+	    {
+	      tree *new_end = omp_siblist_insert_node_after (alloc_node, sc);
+	      if (sc == *added_tail)
+		*added_tail = new_end;
+	    }
+
+	  return NULL;
+	}
+
       if (ptr || attach_detach)
 	{
 	  tree cl = NULL_TREE, extra_node;
@@ -10439,6 +11041,17 @@  omp_accumulate_sibling_list (enum omp_region_type region_type,
 							 grp_end, &extra_node);
 	  tree *tail_chain = NULL;
 
+	  if (*fragile_p
+	      || (openmp
+		  && attach_detach
+		  && TREE_CODE (TREE_TYPE (ocd)) == POINTER_TYPE
+		  && !OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED (grp_end)))
+	    {
+	      if (!lang_GNU_Fortran ())
+		OMP_CLAUSE_SIZE (alloc_node) = size_zero_node;
+	      OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (alloc_node) = 1;
+	    }
+
 	  /* Here, we have:
 
 	     grp_end : the last (or only) node in this group.
@@ -10524,12 +11137,15 @@  omp_build_struct_sibling_lists (enum tree_code code,
 				  **grpmap,
 				tree *list_p)
 {
+  using namespace omp_addr_tokenizer;
   unsigned i;
   omp_mapping_group *grp;
   hash_map<tree_operand_hash, tree> *struct_map_to_clause = NULL;
   bool success = true;
   tree *new_next = NULL;
   tree *tail = &OMP_CLAUSE_CHAIN ((*groups)[groups->length () - 1].grp_end);
+  tree added_nodes = NULL_TREE;
+  tree *added_tail = &added_nodes;
   auto_vec<omp_mapping_group> pre_hwm_groups;
 
   FOR_EACH_VEC_ELT (*groups, i, grp)
@@ -10537,9 +11153,10 @@  omp_build_struct_sibling_lists (enum tree_code code,
       tree c = grp->grp_end;
       tree decl = OMP_CLAUSE_DECL (c);
       tree grp_end = grp->grp_end;
+      auto_vec<omp_addr_token *> addr_tokens;
       tree sentinel = OMP_CLAUSE_CHAIN (grp_end);
 
-      if (new_next)
+      if (new_next && !grp->reprocess_struct)
 	grp->grp_start = new_next;
 
       new_next = NULL;
@@ -10550,7 +11167,7 @@  omp_build_struct_sibling_lists (enum tree_code code,
 	continue;
 
       /* Skip groups we marked for deletion in
-	 oacc_resolve_clause_dependencies.  */
+	 {omp,oacc}_resolve_clause_dependencies.  */
       if (grp->deleted)
 	continue;
 
@@ -10567,6 +11184,38 @@  omp_build_struct_sibling_lists (enum tree_code code,
 	    continue;
 	}
 
+      tree expr = decl;
+
+      while (TREE_CODE (expr) == ARRAY_REF)
+	expr = TREE_OPERAND (expr, 0);
+
+      if (!omp_parse_expr (addr_tokens, expr))
+	continue;
+
+      omp_addr_token *last_token = addr_tokens[addr_tokens.length () - 1];
+
+      /* A mapping of a reference to a pointer member that doesn't specify an
+	 array section, etc., like this:
+	   *mystruct.ref_to_ptr
+	 should not be processed by the struct sibling-list handling code --
+	 it just transfers the referenced pointer.
+
+	 In contrast, the quite similar-looking construct:
+	   *mystruct.ptr
+	 which is equivalent to e.g.
+	   mystruct.ptr[0]
+	 *does* trigger sibling-list processing.
+
+	 An exception for the former case is for "fragile" groups where the
+	 reference itself is not handled otherwise; this is subject to special
+	 handling in omp_accumulate_sibling_list also.  */
+
+      if (TREE_CODE (TREE_TYPE (decl)) == POINTER_TYPE
+	  && last_token->type == ACCESS_METHOD
+	  && last_token->u.access_kind == ACCESS_REF
+	  && !grp->fragile)
+	continue;
+
       tree d = decl;
       if (TREE_CODE (d) == ARRAY_REF)
 	{
@@ -10595,14 +11244,7 @@  omp_build_struct_sibling_lists (enum tree_code code,
       omp_mapping_group *wholestruct;
       if (omp_mapped_by_containing_struct (*grpmap, OMP_CLAUSE_DECL (c),
 					   &wholestruct))
-	{
-	  if (!(region_type & ORT_ACC)
-	      && *grp_start_p == grp_end)
-	    /* Remove the whole of this mapping -- redundant.  */
-	    grp->deleted = true;
-
-	  continue;
-	}
+	continue;
 
       if (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_TO_PSET
 	  && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH
@@ -10629,27 +11271,30 @@  omp_build_struct_sibling_lists (enum tree_code code,
 	      goto error_out;
 	    }
 
-	  tree inner = NULL_TREE;
+	  tree *inner = NULL;
+	  bool fragile_p = grp->fragile;
 
 	  new_next
 	    = omp_accumulate_sibling_list (region_type, code,
-					   struct_map_to_clause, grp_start_p,
-					   grp_end, &inner);
+					   struct_map_to_clause, *grpmap,
+					   grp_start_p, grp_end, addr_tokens,
+					   &inner, &fragile_p,
+					   grp->reprocess_struct, &added_tail);
 
 	  if (inner)
 	    {
-	      if (new_next && *new_next == NULL_TREE)
-		*new_next = inner;
-	      else
-		*tail = inner;
-
-	      OMP_CLAUSE_CHAIN (inner) = NULL_TREE;
 	      omp_mapping_group newgrp;
-	      newgrp.grp_start = new_next ? new_next : tail;
-	      newgrp.grp_end = inner;
+	      newgrp.grp_start = inner;
+	      if (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (*inner))
+		  == GOMP_MAP_ATTACH_DETACH)
+		newgrp.grp_end = OMP_CLAUSE_CHAIN (*inner);
+	      else
+		newgrp.grp_end = *inner;
 	      newgrp.mark = UNVISITED;
 	      newgrp.sibling = NULL;
 	      newgrp.deleted = false;
+	      newgrp.reprocess_struct = true;
+	      newgrp.fragile = fragile_p;
 	      newgrp.next = NULL;
 	      groups->safe_push (newgrp);
 
@@ -10660,8 +11305,6 @@  omp_build_struct_sibling_lists (enum tree_code code,
 	      *grpmap
 		= omp_reindex_mapping_groups (list_p, groups, &pre_hwm_groups,
 					      sentinel);
-
-	      tail = &OMP_CLAUSE_CHAIN (inner);
 	    }
 	}
     }
@@ -10690,6 +11333,61 @@  omp_build_struct_sibling_lists (enum tree_code code,
 	tail = &OMP_CLAUSE_CHAIN (*tail);
     }
 
+  /* Tack on the struct nodes added during nested struct reprocessing.  */
+  if (added_nodes)
+    {
+      *tail = added_nodes;
+      tail = added_tail;
+    }
+
+  /* Now that we have finished building the struct sibling lists, reprocess
+     newly-added "attach" nodes: we need the address of the first
+     mapped element of each struct sibling list for the bias of the attach
+     operation -- not necessarily the base address of the whole struct.  */
+  if (struct_map_to_clause)
+    for (hash_map<tree_operand_hash, tree>::iterator iter
+	   = struct_map_to_clause->begin ();
+	 iter != struct_map_to_clause->end ();
+	 ++iter)
+      {
+	tree struct_node = (*iter).second;
+	gcc_assert (OMP_CLAUSE_CODE (struct_node) == OMP_CLAUSE_MAP);
+	tree attach = OMP_CLAUSE_CHAIN (struct_node);
+
+	if (OMP_CLAUSE_CODE (attach) != OMP_CLAUSE_MAP
+	    || OMP_CLAUSE_MAP_KIND (attach) != GOMP_MAP_ATTACH_DETACH)
+	  continue;
+
+	OMP_CLAUSE_SET_MAP_KIND (attach, GOMP_MAP_ATTACH);
+
+	/* Sanity check: the standalone attach node will not work if we have
+	   an "enter data" operation (because for those, variables need to be
+	   mapped separately and attach nodes must be grouped together with the
+	   base they attach to).  We should only have created the
+	   ATTACH_DETACH node after GOMP_MAP_STRUCT for a target region, so
+	   this should never be true.  */
+	gcc_assert ((region_type & ORT_TARGET) != 0);
+
+	/* This is the first sorted node in the struct sibling list.  Use it
+	   to recalculate the correct bias to use.
+	   (&first_node - attach_decl).  */
+	tree first_node = OMP_CLAUSE_DECL (OMP_CLAUSE_CHAIN (attach));
+	first_node = build_fold_addr_expr (first_node);
+	first_node = fold_convert (ptrdiff_type_node, first_node);
+	tree attach_decl = OMP_CLAUSE_DECL (attach);
+	attach_decl = fold_convert (ptrdiff_type_node, attach_decl);
+	OMP_CLAUSE_SIZE (attach)
+	  = fold_build2 (MINUS_EXPR, ptrdiff_type_node, first_node,
+			 attach_decl);
+
+	/* Remove GOMP_MAP_ATTACH node from after struct node.  */
+	OMP_CLAUSE_CHAIN (struct_node) = OMP_CLAUSE_CHAIN (attach);
+	/* ...and re-insert it at the end of our clause list.  */
+	*tail = attach;
+	OMP_CLAUSE_CHAIN (attach) = NULL_TREE;
+	tail = &OMP_CLAUSE_CHAIN (attach);
+      }
+
 error_out:
   if (struct_map_to_clause)
     delete struct_map_to_clause;
@@ -10705,6 +11403,7 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 			   enum omp_region_type region_type,
 			   enum tree_code code)
 {
+  using namespace omp_addr_tokenizer;
   struct gimplify_omp_ctx *ctx, *outer_ctx;
   tree c;
   tree *prev_list_p = NULL, *orig_list_p = list_p;
@@ -10750,6 +11449,7 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 	  hash_map<tree_operand_hash, omp_mapping_group *> *grpmap;
 	  grpmap = omp_index_mapping_groups (groups);
 
+	  omp_resolve_clause_dependencies (code, groups, grpmap);
 	  omp_build_struct_sibling_lists (code, region_type, groups, &grpmap,
 					  list_p);
 
@@ -10802,6 +11502,7 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
       const char *check_non_private = NULL;
       unsigned int flags;
       tree decl;
+      auto_vec<omp_addr_token *, 10> addr_tokens;
 
       switch (OMP_CLAUSE_CODE (c))
 	{
@@ -11108,6 +11809,13 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 
 	case OMP_CLAUSE_MAP:
 	  decl = OMP_CLAUSE_DECL (c);
+
+	  if (!omp_parse_expr (addr_tokens, decl))
+	    {
+	      remove = true;
+	      break;
+	    }
+
 	  if (error_operand_p (decl))
 	    remove = true;
 	  switch (code)
@@ -11117,13 +11825,18 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 	    case OACC_DATA:
 	      if (TREE_CODE (TREE_TYPE (decl)) != ARRAY_TYPE)
 		break;
+	      goto check_firstprivate;
+	    case OACC_ENTER_DATA:
+	    case OACC_EXIT_DATA:
+	      if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH_DETACH
+		  && addr_tokens[0]->type == ARRAY_BASE)
+		remove = true;
 	      /* FALLTHRU */
 	    case OMP_TARGET_DATA:
 	    case OMP_TARGET_ENTER_DATA:
 	    case OMP_TARGET_EXIT_DATA:
-	    case OACC_ENTER_DATA:
-	    case OACC_EXIT_DATA:
 	    case OACC_HOST_DATA:
+	    check_firstprivate:
 	      if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
 		  || (OMP_CLAUSE_MAP_KIND (c)
 		      == GOMP_MAP_FIRSTPRIVATE_REFERENCE))
@@ -11158,6 +11871,18 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 		   && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER
 		       || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_TO_PSET))
 	    remove = true;
+	  else if (code == OMP_TARGET_EXIT_DATA
+		   && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALLOC
+		   && OMP_CLAUSE_CHAIN (c)
+		   && (OMP_CLAUSE_CODE (OMP_CLAUSE_CHAIN (c))
+		       == OMP_CLAUSE_MAP)
+		   && ((OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c))
+			== GOMP_MAP_ATTACH_DETACH)
+		       || (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c))
+			   == GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION))
+		   && TREE_CODE (TREE_TYPE (OMP_CLAUSE_DECL
+			(OMP_CLAUSE_CHAIN (c)))) == REFERENCE_TYPE)
+	    OMP_CLAUSE_SET_MAP_KIND (c, GOMP_MAP_RELEASE);
 
 	  if (remove)
 	    break;
@@ -11200,26 +11925,22 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 				  GOVD_FIRSTPRIVATE | GOVD_SEEN);
 	    }
 
-	  if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT)
+	  if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT
+	      && (addr_tokens[0]->type == STRUCTURE_BASE
+		  || addr_tokens[0]->type == ARRAY_BASE)
+	      && addr_tokens[0]->u.structure_base_kind == BASE_DECL)
 	    {
-	      tree base = omp_strip_components_and_deref (decl);
-	      if (DECL_P (base))
-		{
-		  decl = base;
-		  splay_tree_node n
-		    = splay_tree_lookup (ctx->variables,
-					 (splay_tree_key) decl);
-		  if (seen_error ()
-		      && n
-		      && (n->value & (GOVD_MAP | GOVD_FIRSTPRIVATE)) != 0)
-		    {
-		      remove = true;
-		      break;
-		    }
-		  flags = GOVD_MAP | GOVD_EXPLICIT;
+	      gcc_assert (addr_tokens[1]->type == ACCESS_METHOD);
+	      /* If we got to this struct via a chain of pointers, maybe we
+		 want to map it implicitly instead.  */
+	      if (omp_access_chain_p (addr_tokens, 1))
+		break;
+	      decl = addr_tokens[1]->expr;
+	      flags = GOVD_MAP | GOVD_EXPLICIT;
 
-		  goto do_add_decl;
-		}
+	      gcc_assert (addr_tokens[1]->u.access_kind != ACCESS_DIRECT
+			  || TREE_ADDRESSABLE (decl));
+	      goto do_add_decl;
 	    }
 
 	  if (TREE_CODE (decl) == TARGET_EXPR)
@@ -11466,6 +12187,16 @@  gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p,
 					     : GOMP_MAP_ATTACH);
 	      OMP_CLAUSE_SET_MAP_KIND (c, map_kind);
 	    }
+	  else if ((code == OACC_ENTER_DATA
+		    || code == OACC_EXIT_DATA
+		    || code == OACC_PARALLEL)
+		   && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH_DETACH)
+	    {
+	      enum gomp_map_kind map_kind = (code == OACC_EXIT_DATA
+					     ? GOMP_MAP_DETACH
+					     : GOMP_MAP_ATTACH);
+	      OMP_CLAUSE_SET_MAP_KIND (c, map_kind);
+	    }
 
 	  goto do_add;
 
@@ -12368,7 +13099,7 @@  gimplify_adjust_omp_clauses_1 (splay_tree_node n, void *data)
       if (TREE_CODE (TREE_TYPE (decl)) == REFERENCE_TYPE
 	  && TREE_CODE (TREE_TYPE (TREE_TYPE (decl))) == POINTER_TYPE)
 	OMP_CLAUSE_DECL (clause)
-	  = build_simple_mem_ref_loc (input_location, decl);
+	  = build_fold_indirect_ref_loc (input_location, decl);
       OMP_CLAUSE_DECL (clause)
 	= build2 (MEM_REF, char_type_node, OMP_CLAUSE_DECL (clause),
 		  build_int_cst (build_pointer_type (char_type_node), 0));
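
For orientation, the reworked omp_build_struct_sibling_lists loop above now
proceeds roughly as follows (a condensed sketch of the code, not a literal
transcription):

  1. Tokenize each group's address with omp_parse_expr (stripping any
     trailing ARRAY_REFs first).
  2. Skip groups that should not trigger sibling-list processing: deleted
     groups, groups whose decl is already covered by a containing struct
     mapping, and non-fragile "*mystruct.ref_to_ptr"-style mappings.
  3. Call omp_accumulate_sibling_list; it may hand back an "inner" node
     position, which is pushed onto GROUPS as a new group with
     reprocess_struct set and is handled on a later iteration.
  4. Append the nodes collected via ADDED_TAIL, then convert the
     ATTACH_DETACH node following each GOMP_MAP_STRUCT into a real
     GOMP_MAP_ATTACH whose bias is the offset of the first sorted member
     from the attachment pointer, and move it to the end of the clause
     list.
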
diff --git a/gcc/omp-general.cc b/gcc/omp-general.cc
index 0b4ec823479a..1dae1b1b8a5d 100644
--- a/gcc/omp-general.cc
+++ b/gcc/omp-general.cc
@@ -45,6 +45,8 @@  along with GCC; see the file COPYING3.  If not see
 #include "data-streamer.h"
 #include "streamer-hooks.h"
 #include "opts.h"
+#include "omp-general.h"
+#include "tree-pretty-print.h"
 
 enum omp_requires omp_requires_mask;
 
@@ -3013,4 +3015,427 @@  omp_build_component_ref (tree obj, tree field)
   return ret;
 }
 
+namespace omp_addr_tokenizer {
+
+/* We scan an expression by recursive descent, and build a vector of
+   "omp_addr_token *" pointers representing a "parsed" version of the
+   expression.  The grammar we use is something like this:
+
+     expr0::
+       expr [section-access]
+
+     expr::
+	 structured-expr access-method
+       | array-base access-method
+
+     structured-expr::
+       structure-base component-selector
+
+     arbitrary-expr::
+       (anything else)
+
+     structure-base::
+	 DECL access-method
+       | structured-expr access-method
+       | arbitrary-expr access-method
+
+     array-base::
+	 DECL
+       | arbitrary-expr
+
+     access-method::
+	 DIRECT
+       | REF
+       | POINTER
+       | REF_TO_POINTER
+       | POINTER_OFFSET
+       | REF_TO_POINTER_OFFSET
+       | INDEXED_ARRAY
+       | INDEXED_REF_TO_ARRAY
+       | index-expr
+
+     index-expr::
+	 INDEX_EXPR access-method
+
+     component-selector::
+	 component-selector COMPONENT_REF
+       | component-selector ARRAY_REF
+       | COMPONENT_REF
+
+   This tokenized form is then used both in parsing, for OpenMP clause
+   expansion (for C and C++), and in gimplify.cc for sibling-list handling
+   (for C, C++ and Fortran).  */
+
+omp_addr_token::omp_addr_token (token_type t, tree e)
+  : type(t), expr(e)
+{
+}
+
+omp_addr_token::omp_addr_token (access_method_kinds k, tree e)
+  : type(ACCESS_METHOD), expr(e)
+{
+  u.access_kind = k;
+}
+
+omp_addr_token::omp_addr_token (token_type t, structure_base_kinds k, tree e)
+  : type(t), expr(e)
+{
+  u.structure_base_kind = k;
+}
+
+static bool
+omp_parse_component_selector (tree *expr0)
+{
+  tree expr = *expr0;
+  tree last_component = NULL_TREE;
+
+  while (TREE_CODE (expr) == COMPONENT_REF
+	 || TREE_CODE (expr) == ARRAY_REF)
+    {
+      if (TREE_CODE (expr) == COMPONENT_REF)
+	last_component = expr;
+
+      expr = TREE_OPERAND (expr, 0);
+
+      if (TREE_CODE (TREE_TYPE (expr)) == REFERENCE_TYPE)
+	break;
+    }
+
+  if (!last_component)
+    return false;
+
+  *expr0 = last_component;
+  return true;
+}
+
+/* This handles references that have had convert_from_reference called on
+   them, and also those that haven't.  */
+
+static bool
+omp_parse_ref (tree *expr0)
+{
+  tree expr = *expr0;
+
+  if (TREE_CODE (TREE_TYPE (expr)) == REFERENCE_TYPE)
+    return true;
+  else if ((TREE_CODE (expr) == INDIRECT_REF
+	    || (TREE_CODE (expr) == MEM_REF
+		&& integer_zerop (TREE_OPERAND (expr, 1))))
+	   && TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0))) == REFERENCE_TYPE)
+    {
+      *expr0 = TREE_OPERAND (expr, 0);
+      return true;
+    }
+
+  return false;
+}
+
+static bool
+omp_parse_pointer (tree *expr0, bool *has_offset)
+{
+  tree expr = *expr0;
+
+  *has_offset = false;
+
+  if ((TREE_CODE (expr) == INDIRECT_REF
+       || (TREE_CODE (expr) == MEM_REF
+	   && integer_zerop (TREE_OPERAND (expr, 1))))
+      && TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0))) == POINTER_TYPE)
+    {
+      expr = TREE_OPERAND (expr, 0);
+
+      /* The Fortran FE sometimes emits a no-op cast here.  */
+      STRIP_NOPS (expr);
+
+      while (1)
+	{
+	  if (TREE_CODE (expr) == COMPOUND_EXPR)
+	    {
+	      expr = TREE_OPERAND (expr, 1);
+	      STRIP_NOPS (expr);
+	    }
+	  else if (TREE_CODE (expr) == SAVE_EXPR)
+	    expr = TREE_OPERAND (expr, 0);
+	  else if (TREE_CODE (expr) == POINTER_PLUS_EXPR)
+	    {
+	      *has_offset = true;
+	      expr = TREE_OPERAND (expr, 0);
+	    }
+	  else
+	    break;
+	}
+
+      STRIP_NOPS (expr);
+
+      *expr0 = expr;
+      return true;
+    }
+
+  return false;
+}
+
+static bool
+omp_parse_access_method (tree *expr0, enum access_method_kinds *kind)
+{
+  tree expr = *expr0;
+  bool has_offset;
+
+  if (omp_parse_ref (&expr))
+    *kind = ACCESS_REF;
+  else if (omp_parse_pointer (&expr, &has_offset))
+    {
+      if (omp_parse_ref (&expr))
+	*kind = has_offset ? ACCESS_REF_TO_POINTER_OFFSET
+			   : ACCESS_REF_TO_POINTER;
+      else
+	*kind = has_offset ? ACCESS_POINTER_OFFSET : ACCESS_POINTER;
+    }
+  else if (TREE_CODE (expr) == ARRAY_REF)
+    {
+      while (TREE_CODE (expr) == ARRAY_REF)
+	expr = TREE_OPERAND (expr, 0);
+      if (omp_parse_ref (&expr))
+	*kind = ACCESS_INDEXED_REF_TO_ARRAY;
+      else
+	*kind = ACCESS_INDEXED_ARRAY;
+    }
+  else
+    *kind = ACCESS_DIRECT;
+
+  STRIP_NOPS (expr);
+
+  *expr0 = expr;
+  return true;
+}
+
+static bool
+omp_parse_access_methods (vec<omp_addr_token *> &addr_tokens, tree *expr0)
+{
+  tree expr = *expr0;
+  enum access_method_kinds kind;
+  tree am_expr;
+
+  if (omp_parse_access_method (&expr, &kind))
+    am_expr = expr;
+
+  if (TREE_CODE (expr) == INDIRECT_REF
+      || TREE_CODE (expr) == MEM_REF
+      || TREE_CODE (expr) == ARRAY_REF)
+    omp_parse_access_methods (addr_tokens, &expr);
+
+  addr_tokens.safe_push (new omp_addr_token (kind, am_expr));
+
+  *expr0 = expr;
+  return true;
+}
+
+static bool omp_parse_structured_expr (vec<omp_addr_token *> &, tree *);
+
+static bool
+omp_parse_structure_base (vec<omp_addr_token *> &addr_tokens,
+			  tree *expr0, structure_base_kinds *kind,
+			  vec<omp_addr_token *> &base_access_tokens,
+			  bool allow_structured = true)
+{
+  tree expr = *expr0;
+
+  if (allow_structured)
+    omp_parse_access_methods (base_access_tokens, &expr);
+
+  if (DECL_P (expr))
+    {
+      *kind = BASE_DECL;
+      return true;
+    }
+
+  if (allow_structured && omp_parse_structured_expr (addr_tokens, &expr))
+    {
+      *kind = BASE_COMPONENT_EXPR;
+      *expr0 = expr;
+      return true;
+    }
+
+  *kind = BASE_ARBITRARY_EXPR;
+  *expr0 = expr;
+  return true;
+}
+
+static bool
+omp_parse_structured_expr (vec<omp_addr_token *> &addr_tokens, tree *expr0)
+{
+  tree expr = *expr0;
+  tree base_component = NULL_TREE;
+  structure_base_kinds struct_base_kind;
+  auto_vec<omp_addr_token *> base_access_tokens;
+
+  if (omp_parse_component_selector (&expr))
+    base_component = expr;
+  else
+    return false;
+
+  gcc_assert (TREE_CODE (expr) == COMPONENT_REF);
+  expr = TREE_OPERAND (expr, 0);
+
+  tree structure_base = expr;
+
+  if (!omp_parse_structure_base (addr_tokens, &expr, &struct_base_kind,
+				 base_access_tokens))
+    return false;
+
+  addr_tokens.safe_push (new omp_addr_token (STRUCTURE_BASE, struct_base_kind,
+					     structure_base));
+  addr_tokens.safe_splice (base_access_tokens);
+  addr_tokens.safe_push (new omp_addr_token (COMPONENT_SELECTOR,
+					     base_component));
+
+  *expr0 = expr;
+
+  return true;
+}
+
+static bool
+omp_parse_array_expr (vec<omp_addr_token *> &addr_tokens, tree *expr0)
+{
+  tree expr = *expr0;
+  structure_base_kinds s_kind;
+  auto_vec<omp_addr_token *> base_access_tokens;
+
+  if (!omp_parse_structure_base (addr_tokens, &expr, &s_kind,
+				 base_access_tokens, false))
+    return false;
+
+  addr_tokens.safe_push (new omp_addr_token (ARRAY_BASE, s_kind, expr));
+  addr_tokens.safe_splice (base_access_tokens);
+
+  *expr0 = expr;
+  return true;
+}
+
+/* Return TRUE if the ACCESS_METHOD token at index 'i' has a further
+   ACCESS_METHOD chained after it (e.g., if we're processing an expression
+   containing multiple pointer indirections).  */
+
+bool
+omp_access_chain_p (vec<omp_addr_token *> &addr_tokens, unsigned i)
+{
+  gcc_assert (addr_tokens[i]->type == ACCESS_METHOD);
+  return (i + 1 < addr_tokens.length ()
+	  && addr_tokens[i + 1]->type == ACCESS_METHOD);
+}
+
+/* Return the address of the object accessed by the ACCESS_METHOD token
+   at 'i': either of the next access method's expr, or of EXPR if we're at
+   the end of the list of tokens.  */
+
+tree
+omp_accessed_addr (vec<omp_addr_token *> &addr_tokens, unsigned i, tree expr)
+{
+  if (i + 1 < addr_tokens.length ())
+    return build_fold_addr_expr (addr_tokens[i + 1]->expr);
+  else
+    return build_fold_addr_expr (expr);
+}
+
+} /* namespace omp_addr_tokenizer.  */
+
+bool
+omp_parse_expr (vec<omp_addr_token *> &addr_tokens, tree expr)
+{
+  using namespace omp_addr_tokenizer;
+  auto_vec<omp_addr_token *> expr_access_tokens;
+
+  if (!omp_parse_access_methods (expr_access_tokens, &expr))
+    return false;
+
+  if (omp_parse_structured_expr (addr_tokens, &expr))
+    ;
+  else if (omp_parse_array_expr (addr_tokens, &expr))
+    ;
+  else
+    return false;
+
+  addr_tokens.safe_splice (expr_access_tokens);
+
+  return true;
+}
+
+DEBUG_FUNCTION void
+debug_omp_tokenized_addr (vec<omp_addr_token *> &addr_tokens,
+			  bool with_exprs)
+{
+  using namespace omp_addr_tokenizer;
+  const char *sep = with_exprs ? "  " : "";
+
+  for (auto e : addr_tokens)
+    {
+      const char *pfx = "";
+
+      fputs (sep, stderr);
+
+      switch (e->type)
+	{
+	case COMPONENT_SELECTOR:
+	  fputs ("component_selector", stderr);
+	  break;
+	case ACCESS_METHOD:
+	  switch (e->u.access_kind)
+	    {
+	    case ACCESS_DIRECT:
+	      fputs ("access_direct", stderr);
+	      break;
+	    case ACCESS_REF:
+	      fputs ("access_ref", stderr);
+	      break;
+	    case ACCESS_POINTER:
+	      fputs ("access_pointer", stderr);
+	      break;
+	    case ACCESS_POINTER_OFFSET:
+	      fputs ("access_pointer_offset", stderr);
+	      break;
+	    case ACCESS_REF_TO_POINTER:
+	      fputs ("access_ref_to_pointer", stderr);
+	      break;
+	    case ACCESS_REF_TO_POINTER_OFFSET:
+	      fputs ("access_ref_to_pointer_offset", stderr);
+	      break;
+	    case ACCESS_INDEXED_ARRAY:
+	      fputs ("access_indexed_array", stderr);
+	      break;
+	    case ACCESS_INDEXED_REF_TO_ARRAY:
+	      fputs ("access_indexed_ref_to_array", stderr);
+	      break;
+	    }
+	  break;
+	case ARRAY_BASE:
+	case STRUCTURE_BASE:
+	  pfx = e->type == ARRAY_BASE ? "array_" : "struct_";
+	  switch (e->u.structure_base_kind)
+	    {
+	    case BASE_DECL:
+	      fprintf (stderr, "%sbase_decl", pfx);
+	      break;
+	    case BASE_COMPONENT_EXPR:
+	      fputs ("base_component_expr", stderr);
+	      break;
+	    case BASE_ARBITRARY_EXPR:
+	      fprintf (stderr, "%sbase_arbitrary_expr", pfx);
+	      break;
+	    }
+	  break;
+	}
+      if (with_exprs)
+	{
+	  fputs (" [", stderr);
+	  print_generic_expr (stderr, e->expr);
+	  fputc (']', stderr);
+	  sep = ",\n  ";
+	}
+      else
+	sep = " ";
+    }
+
+  fputs ("\n", stderr);
+}
+
+
 #include "gt-omp-general.h"
diff --git a/gcc/omp-general.h b/gcc/omp-general.h
index 1b5455a0a8ff..f9b7ef3426c4 100644
--- a/gcc/omp-general.h
+++ b/gcc/omp-general.h
@@ -152,4 +152,73 @@  get_openacc_privatization_dump_flags ()
 
 extern tree omp_build_component_ref (tree obj, tree field);
 
+namespace omp_addr_tokenizer {
+
+/* These are the kinds of access that an ACCESS_METHOD token can represent:
+   the ways of accessing a variable that have special-case handling in the
+   middle end (gimplify, omp-lower, etc.).  */
+
+enum access_method_kinds
+{
+  ACCESS_DIRECT,
+  ACCESS_REF,
+  ACCESS_POINTER,
+  ACCESS_REF_TO_POINTER,
+  ACCESS_POINTER_OFFSET,
+  ACCESS_REF_TO_POINTER_OFFSET,
+  ACCESS_INDEXED_ARRAY,
+  ACCESS_INDEXED_REF_TO_ARRAY
+};
+
+/* These are the kinds that a STRUCTURE_BASE or ARRAY_BASE token can
+   represent (an ARRAY_BASE cannot be BASE_COMPONENT_EXPR).  */
+
+enum structure_base_kinds
+{
+  BASE_DECL,
+  BASE_COMPONENT_EXPR,
+  BASE_ARBITRARY_EXPR
+};
+
+/* The coarse type for an address token.  These can have subtypes for
+   ARRAY_BASE or STRUCTURE_BASE (structure_base_kinds) or ACCESS_METHOD
+   (access_method_kinds).  */
+
+enum token_type
+{
+  ARRAY_BASE,
+  STRUCTURE_BASE,
+  COMPONENT_SELECTOR,
+  ACCESS_METHOD
+};
+
+/* The struct that forms a single token of an address expression as parsed by
+   omp_parse_expr.  These are typically held in a vec after parsing.  */
+
+struct omp_addr_token
+{
+  enum token_type type;
+  tree expr;
+
+  union
+  {
+    access_method_kinds access_kind;
+    structure_base_kinds structure_base_kind;
+  } u;
+
+  omp_addr_token (token_type, tree);
+  omp_addr_token (access_method_kinds, tree);
+  omp_addr_token (token_type, structure_base_kinds, tree);
+};
+
+extern bool omp_access_chain_p (vec<omp_addr_token *> &, unsigned);
+extern tree omp_accessed_addr (vec<omp_addr_token *> &, unsigned, tree);
+
+}
+
+typedef omp_addr_tokenizer::omp_addr_token omp_addr_token;
+
+extern bool omp_parse_expr (vec<omp_addr_token *> &, tree);
+
 #endif /* GCC_OMP_GENERAL_H */
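
To show how a token vector is typically consumed, here is a condensed,
hypothetical fragment modelled on the GOMP_MAP_STRUCT handling added to
gimplify_scan_omp_clauses above (variable names are placeholders):

  using namespace omp_addr_tokenizer;

  auto_vec<omp_addr_token *> addr_tokens;
  if (omp_parse_expr (addr_tokens, OMP_CLAUSE_DECL (c))
      && addr_tokens[0]->type == STRUCTURE_BASE
      && addr_tokens[0]->u.structure_base_kind == BASE_DECL)
    {
      /* A base token is always followed by its access method.  */
      gcc_assert (addr_tokens[1]->type == ACCESS_METHOD);

      if (!omp_access_chain_p (addr_tokens, 1))
        {
          /* Not reached through a chain of pointers: pick up the
             underlying decl and how it is accessed.  */
          tree base = addr_tokens[1]->expr;
          access_method_kinds kind = addr_tokens[1]->u.access_kind;
          /* ... ACCESS_DIRECT, ACCESS_POINTER, ACCESS_REF, etc.  */
        }
    }
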
diff --git a/gcc/omp-low.cc b/gcc/omp-low.cc
index 82a93d00f67f..e62b7d69ef4d 100644
--- a/gcc/omp-low.cc
+++ b/gcc/omp-low.cc
@@ -1600,10 +1600,13 @@  scan_sharing_clauses (tree clauses, omp_context *ctx)
 	    {
 	      /* If this is an offloaded region, an attach operation should
 		 only exist when the pointer variable is mapped in a prior
-		 clause.
+		 clause.  An exception is if we have a reference (to pointer):
+		 in that case we should have mapped "*decl" in a previous
+		 mapping instead of "decl".  Skip the assertion in that case.
 		 If we had an error, we may not have attempted to sort clauses
 		 properly, so avoid the test.  */
-	      if (is_gimple_omp_offloaded (ctx->stmt)
+	      if (TREE_CODE (TREE_TYPE (decl)) != REFERENCE_TYPE
+		  && is_gimple_omp_offloaded (ctx->stmt)
 		  && !seen_error ())
 		gcc_assert
 		  (maybe_lookup_decl (decl, ctx)
diff --git a/gcc/testsuite/c-c++-common/gomp/clauses-2.c b/gcc/testsuite/c-c++-common/gomp/clauses-2.c
index bbc8fb4e32bf..8f98d57a312f 100644
--- a/gcc/testsuite/c-c++-common/gomp/clauses-2.c
+++ b/gcc/testsuite/c-c++-common/gomp/clauses-2.c
@@ -11,7 +11,7 @@  foo (int *p, int q, struct S t, int i, int j, int k, int l)
     bar (p);
   #pragma omp target firstprivate (p), map (p[0]) /* { dg-error "appears more than once in data clauses" } */
     bar (p);
-  #pragma omp target map (p[0]) map (p) /* { dg-error "appears both in data and map clauses" } */
+  #pragma omp target map (p[0]) map (p)
     bar (p);
   #pragma omp target map (p) , map (p[0])
     bar (p);
diff --git a/gcc/testsuite/c-c++-common/gomp/target-50.c b/gcc/testsuite/c-c++-common/gomp/target-50.c
index 41f1d37845cf..a30a25e08936 100644
--- a/gcc/testsuite/c-c++-common/gomp/target-50.c
+++ b/gcc/testsuite/c-c++-common/gomp/target-50.c
@@ -17,7 +17,7 @@  int main()
 
   #pragma omp target map(tofrom: tmp->arr[0:10]) map(to: tmp->arr)
   { }
-/* { dg-final { scan-tree-dump-times {map\(struct:\*tmp \[len: 1\]\) map\(to:tmp[._0-9]*->arr \[len: [0-9]+\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:tmp[._0-9]*->arr \[bias: 0\]\)} 2 "gimple" { target { ! { nvptx*-*-* amdgcn*-*-* } } } } } */
+/* { dg-final { scan-tree-dump-times {map\(struct:\*tmp \[len: 1\]\) map\(alloc:tmp[._0-9]*->arr \[len: [0-9]+\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:tmp[._0-9]*->arr \[bias: 0\]\)} 2 "gimple" { target { ! { nvptx*-*-* amdgcn*-*-* } } } } } */
 
   return 0;
 }
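
For readers decoding the adjusted pattern above: with the reworked
sibling-list handling, the pointer member tmp->arr is now allocated as part
of the struct mapping and has its target value filled in by the attach
operation, rather than being copied with "to".  Roughly (lengths shown
symbolically; an illustration of the expected order, not an exact dump):

  map(struct:*tmp [len: 1])        <- GOMP_MAP_STRUCT covering one member
  map(alloc:tmp->arr [len: N])     <- space for the pointer member itself
  map(tofrom:*_1 [len: N])         <- the tmp->arr[0:10] array section
  map(attach:tmp->arr [bias: 0])   <- attach; bias 0 since the section
                                      starts at tmp->arr[0]
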
diff --git a/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c b/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c
index 3aa1a8fc55ec..5ba1d7efe08d 100644
--- a/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c
+++ b/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c
@@ -49,4 +49,4 @@  main (void)
 
 /* { dg-final { scan-tree-dump {#pragma omp target num_teams.* map\(tofrom:a \[len: [0-9]+\]\[implicit\]\)} "gimple" } } */
 
-/* { dg-final { scan-tree-dump {#pragma omp target num_teams.* map\(tofrom:a \[len: [0-9]+\]\[implicit\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:a\.ptr \[bias: 0\]\)} "gimple" } } */
+/* { dg-final { scan-tree-dump {#pragma omp target num_teams.* map\(struct:a \[len: 1\]\) map\(alloc:a\.ptr \[len: 0\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:a\.ptr \[bias: 0\]\)} "gimple" } } */
diff --git a/gcc/testsuite/g++.dg/gomp/static-component-1.C b/gcc/testsuite/g++.dg/gomp/static-component-1.C
new file mode 100644
index 000000000000..c2f959335674
--- /dev/null
+++ b/gcc/testsuite/g++.dg/gomp/static-component-1.C
@@ -0,0 +1,23 @@ 
+/* { dg-do compile } */
+
+/* Types with static members should be mappable.  */
+
+struct A {
+  static int x[10];
+};
+
+struct B {
+  A a;
+};
+
+int
+main (int argc, char *argv[])
+{
+  B *b = new B;
+#pragma omp target map(b->a)
+  ;
+  B bb;
+#pragma omp target map(bb.a)
+  ;
+  delete b;
+}
diff --git a/gcc/testsuite/gcc.dg/gomp/target-3.c b/gcc/testsuite/gcc.dg/gomp/target-3.c
index 3e7921270c92..3d5e05f85710 100644
--- a/gcc/testsuite/gcc.dg/gomp/target-3.c
+++ b/gcc/testsuite/gcc.dg/gomp/target-3.c
@@ -13,4 +13,4 @@  void foo (struct S *s)
   #pragma omp target enter data map (alloc: s->a, s->b)
 }
 
-/* { dg-final { scan-tree-dump-times "map\\(struct:\\*s \\\[len: 2\\\]\\) map\\(alloc:s->a \\\[len: \[0-9\]+\\\]\\) map\\(alloc:s->b \\\[len: \[0-9\]+\\\]\\)" 2 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "map\\(struct:\\*s \\\[len: 2\\\]\\) map\\(alloc:s\[\\._0-9\]+->a \\\[len: \[0-9\]+\\\]\\) map\\(alloc:s\[\\._0-9\]+->b \\\[len: \[0-9\]+\\\]\\)" 2 "gimple" } } */
diff --git a/gcc/tree.h b/gcc/tree.h
index 64a241f51e23..ac3dc6181e2a 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -1766,6 +1766,10 @@  class auto_suppress_location_wrappers
    NOTE: this is different than OMP_CLAUSE_MAP_IMPLICIT.  */
 #define OMP_CLAUSE_MAP_RUNTIME_IMPLICIT_P(NODE) \
   (OMP_CLAUSE_SUBCODE_CHECK (NODE, OMP_CLAUSE_MAP)->base.deprecated_flag)
+/* Nonzero for an attach/detach node whose decl was explicitly mapped on the
+   same directive.  */
+#define OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED(NODE) \
+  TREE_STATIC (OMP_CLAUSE_SUBCODE_CHECK (NODE, OMP_CLAUSE_MAP))
 
 /* Flag that 'OMP_CLAUSE_DECL (NODE)' is to be made addressable during OMP
    lowering.  */
diff --git a/libgomp/target.c b/libgomp/target.c
index e38cc3b6f1c1..c84dfba351fb 100644
--- a/libgomp/target.c
+++ b/libgomp/target.c
@@ -730,7 +730,7 @@  gomp_map_fields_existing (struct target_mem_desc *tgt,
 
   cur_node.host_start = (uintptr_t) hostaddrs[i];
   cur_node.host_end = cur_node.host_start + sizes[i];
-  splay_tree_key n2 = splay_tree_lookup (mem_map, &cur_node);
+  splay_tree_key n2 = gomp_map_0len_lookup (mem_map, &cur_node);
   kind = get_kind (short_mapkind, kinds, i);
   implicit = get_implicit (short_mapkind, kinds, i);
   if (n2
@@ -827,8 +827,20 @@  gomp_attach_pointer (struct gomp_device_descr *devicep,
 
       if ((void *) target == NULL)
 	{
-	  gomp_mutex_unlock (&devicep->lock);
-	  gomp_fatal ("attempt to attach null pointer");
+	  /* As a special case, allow attaching NULL host pointers.  This
+	     allows e.g. unassociated Fortran pointers to be mapped
+	     properly.  */
+	  data = 0;
+
+	  gomp_debug (1,
+		      "%s: attaching NULL host pointer, target %p "
+		      "(struct base %p)\n", __FUNCTION__, (void *) devptr,
+		      (void *) (n->tgt->tgt_start + n->tgt_offset));
+
+	  gomp_copy_host2dev (devicep, aq, (void *) devptr, (void *) &data,
+			      sizeof (void *), true, cbufp);
+
+	  return;
 	}
 
       s.host_start = target + bias;
@@ -1085,7 +1097,8 @@  gomp_map_vars_internal (struct gomp_device_descr *devicep,
 		  tgt->list[i].key = NULL;
 		  if (!aq
 		      && gomp_to_device_kind_p (get_kind (short_mapkind, kinds, i)
-						& typemask))
+						& typemask)
+		      && sizes[i] != 0)
 		    gomp_coalesce_buf_add (&cbuf,
 					   tgt_size - cur_node.host_end
 					   + (uintptr_t) hostaddrs[i],
@@ -1447,7 +1460,17 @@  gomp_map_vars_internal (struct gomp_device_descr *devicep,
 				    + sizes[last];
 		if (tgt->list[first].key != NULL)
 		  continue;
+		if (sizes[last] == 0)
+		  cur_node.host_end++;
 		n = splay_tree_lookup (mem_map, &cur_node);
+		if (sizes[last] == 0)
+		  cur_node.host_end--;
+		if (n == NULL && cur_node.host_start == cur_node.host_end)
+		  {
+		    gomp_mutex_unlock (&devicep->lock);
+		    gomp_fatal ("Struct pointer member not mapped (%p)",
+				(void *) hostaddrs[first]);
+		  }
 		if (n == NULL)
 		  {
 		    size_t align = (size_t) 1 << (kind >> rshift);
diff --git a/libgomp/testsuite/libgomp.c++/baseptrs-3.C b/libgomp/testsuite/libgomp.c++/baseptrs-3.C
new file mode 100644
index 000000000000..39a48a40920a
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c++/baseptrs-3.C
@@ -0,0 +1,275 @@ 
+#include <cstdlib>
+#include <cstring>
+#include <cassert>
+
+struct sa0
+{
+  int *ptr;
+};
+
+struct sb0
+{
+  int arr[10];
+};
+
+struct sc0
+{
+  sa0 a;
+  sb0 b;
+  sc0 (sa0 &my_a, sb0 &my_b) : a(my_a), b(my_b) {}
+};
+
+void
+foo0 ()
+{
+  sa0 my_a;
+  sb0 my_b;
+
+  my_a.ptr = (int *) malloc (sizeof (int) * 10);
+  sc0 my_c(my_a, my_b);
+
+  memset (my_c.a.ptr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c.a.ptr, my_c.a.ptr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c.a.ptr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c.a.ptr[i] == i);
+
+  memset (my_c.b.arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c.b.arr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c.b.arr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c.b.arr[i] == i);
+
+  free (my_a.ptr);
+}
+
+struct sa
+{
+  int *ptr;
+};
+
+struct sb
+{
+  int arr[10];
+};
+
+struct sc
+{
+  sa &a;
+  sb &b;
+  sc (sa &my_a, sb &my_b) : a(my_a), b(my_b) {}
+};
+
+void
+foo ()
+{
+  sa my_a;
+  sb my_b;
+
+  my_a.ptr = (int *) malloc (sizeof (int) * 10);
+  sc my_c(my_a, my_b);
+
+  memset (my_c.a.ptr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c.a.ptr, my_c.a.ptr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c.a.ptr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c.a.ptr[i] == i);
+
+  memset (my_c.b.arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c.b.arr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c.b.arr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c.b.arr[i] == i);
+
+  free (my_a.ptr);
+}
+
+void
+bar ()
+{
+  sa my_a;
+  sb my_b;
+
+  my_a.ptr = (int *) malloc (sizeof (int) * 10);
+  sc my_c(my_a, my_b);
+  sc &my_cref = my_c;
+
+  memset (my_cref.a.ptr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_cref.a.ptr, my_cref.a.ptr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_cref.a.ptr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_cref.a.ptr[i] == i);
+
+  memset (my_cref.b.arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_cref.b.arr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_cref.b.arr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_cref.b.arr[i] == i);
+
+  free (my_a.ptr);
+}
+
+struct scp0
+{
+  sa *a;
+  sb *b;
+  scp0 (sa *my_a, sb *my_b) : a(my_a), b(my_b) {}
+};
+
+void
+foop0 ()
+{
+  sa *my_a = new sa;
+  sb *my_b = new sb;
+
+  my_a->ptr = new int[10];
+  scp0 *my_c = new scp0(my_a, my_b);
+
+  memset (my_c->a->ptr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c->a, my_c->a[:1], my_c->a->ptr, my_c->a->ptr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c->a->ptr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c->a->ptr[i] == i);
+
+  memset (my_c->b->arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c->b, my_c->b[:1], my_c->b->arr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c->b->arr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c->b->arr[i] == i);
+
+  delete[] my_a->ptr;
+  delete my_a;
+  delete my_b;
+}
+
+struct scp
+{
+  sa *&a;
+  sb *&b;
+  scp (sa *&my_a, sb *&my_b) : a(my_a), b(my_b) {}
+};
+
+void
+foop ()
+{
+  sa *my_a = new sa;
+  sb *my_b = new sb;
+
+  my_a->ptr = new int[10];
+  scp *my_c = new scp(my_a, my_b);
+
+  memset (my_c->a->ptr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c->a, my_c->a[:1], my_c->a->ptr, my_c->a->ptr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c->a->ptr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c->a->ptr[i] == i);
+
+  memset (my_c->b->arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c->b, my_c->b[:1], my_c->b->arr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_c->b->arr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_c->b->arr[i] == i);
+
+  delete[] my_a->ptr;
+  delete my_a;
+  delete my_b;
+}
+
+void
+barp ()
+{
+  sa *my_a = new sa;
+  sb *my_b = new sb;
+
+  my_a->ptr = new int[10];
+  scp *my_c = new scp(my_a, my_b);
+  scp *&my_cref = my_c;
+
+  memset (my_cref->a->ptr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_cref->a, my_cref->a[:1], my_cref->a->ptr, \
+			  my_cref->a->ptr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_cref->a->ptr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_cref->a->ptr[i] == i);
+
+  memset (my_cref->b->arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_cref->b, my_cref->b[:1], my_cref->b->arr[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      my_cref->b->arr[i] = i;
+  }
+
+  for (int i = 0; i < 10; i++)
+    assert (my_cref->b->arr[i] == i);
+
+  delete[] my_a->ptr;
+  delete my_a;
+  delete my_b;
+}
+
+int main (int argc, char *argv[])
+{
+  foo0 ();
+  foo ();
+  bar ();
+  foop0 ();
+  foop ();
+  barp ();
+  return 0;
+}
diff --git a/libgomp/testsuite/libgomp.c++/baseptrs-4.C b/libgomp/testsuite/libgomp.c++/baseptrs-4.C
new file mode 100644
index 000000000000..196029ac1868
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c++/baseptrs-4.C
@@ -0,0 +1,3154 @@ 
+// { dg-do run }
+
+#include <cstring>
+#include <cassert>
+
+#define MAP_DECLS
+
+#define NONREF_DECL_BASE
+#define REF_DECL_BASE
+#define PTR_DECL_BASE
+#define REF2PTR_DECL_BASE
+
+#define ARRAY_DECL_BASE
+// Needs map clause "lvalue"-parsing support.
+//#define REF2ARRAY_DECL_BASE
+#define PTR_OFFSET_DECL_BASE
+// Needs map clause "lvalue"-parsing support.
+//#define REF2PTR_OFFSET_DECL_BASE
+
+#define MAP_SECTIONS
+
+#define NONREF_DECL_MEMBER_SLICE
+#define NONREF_DECL_MEMBER_SLICE_BASEPTR
+#define REF_DECL_MEMBER_SLICE
+#define REF_DECL_MEMBER_SLICE_BASEPTR
+#define PTR_DECL_MEMBER_SLICE
+#define PTR_DECL_MEMBER_SLICE_BASEPTR
+#define REF2PTR_DECL_MEMBER_SLICE
+#define REF2PTR_DECL_MEMBER_SLICE_BASEPTR
+
+#define ARRAY_DECL_MEMBER_SLICE
+#define ARRAY_DECL_MEMBER_SLICE_BASEPTR
+// Needs map clause "lvalue"-parsing support.
+//#define REF2ARRAY_DECL_MEMBER_SLICE
+//#define REF2ARRAY_DECL_MEMBER_SLICE_BASEPTR
+#define PTR_OFFSET_DECL_MEMBER_SLICE
+#define PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+// Needs map clause "lvalue"-parsing support.
+//#define REF2PTR_OFFSET_DECL_MEMBER_SLICE
+//#define REF2PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+
+#define PTRARRAY_DECL_MEMBER_SLICE
+#define PTRARRAY_DECL_MEMBER_SLICE_BASEPTR
+// Needs map clause "lvalue"-parsing support.
+//#define REF2PTRARRAY_DECL_MEMBER_SLICE
+//#define REF2PTRARRAY_DECL_MEMBER_SLICE_BASEPTR
+#define PTRPTR_OFFSET_DECL_MEMBER_SLICE
+#define PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+// Needs map clause "lvalue"-parsing support.
+//#define REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE
+//#define REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+
+#define NONREF_COMPONENT_BASE
+#define NONREF_COMPONENT_MEMBER_SLICE
+#define NONREF_COMPONENT_MEMBER_SLICE_BASEPTR
+
+#define REF_COMPONENT_BASE
+#define REF_COMPONENT_MEMBER_SLICE
+#define REF_COMPONENT_MEMBER_SLICE_BASEPTR
+
+#define PTR_COMPONENT_BASE
+#define PTR_COMPONENT_MEMBER_SLICE
+#define PTR_COMPONENT_MEMBER_SLICE_BASEPTR
+
+#define REF2PTR_COMPONENT_BASE
+#define REF2PTR_COMPONENT_MEMBER_SLICE
+#define REF2PTR_COMPONENT_MEMBER_SLICE_BASEPTR
+
+#ifdef MAP_DECLS
+void
+map_decls (void)
+{
+  int x = 0;
+  int &y = x;
+  int arr[4];
+  int (&arrref)[4] = arr;
+  int *z = &arr[0];
+  int *&t = z;
+
+  memset (arr, 0, sizeof arr);
+
+  #pragma omp target map(x)
+  {
+    x++;
+  }
+
+  #pragma omp target map(y)
+  {
+    y++;
+  }
+
+  assert (x == 2);
+  assert (y == 2);
+
+  /* "A variable that is of type pointer is treated as if it is the base
+      pointer of a zero-length array section that appeared as a list item in a
+      map clause."  */
+  #pragma omp target map(z)
+  {
+    z++;
+  }
+
+  /* "A variable that is of type reference to pointer is treated as if it had
+      appeared in a map clause as a zero-length array section."
+
+     The pointer here is *not* associated with a target address, so we're not
+     disallowed from modifying it.  */
+  #pragma omp target map(t)
+  {
+    t++;
+  }
+
+  assert (z == &arr[2]);
+  assert (t == &arr[2]);
+
+  #pragma omp target map(arr)
+  {
+    arr[2]++;
+  }
+
+  #pragma omp target map(arrref)
+  {
+    arrref[2]++;
+  }
+
+  assert (arr[2] == 2);
+  assert (arrref[2] == 2);
+}
+#endif
+
+struct S {
+  int a;
+  int &b;
+  int *c;
+  int *&d;
+  int e[4];
+  int (&f)[4];
+
+  S(int a1, int &b1, int *c1, int *&d1) :
+    a(a1), b(b1), c(c1), d(d1), f(e)
+  {
+    memset (e, 0, sizeof e);
+  }
+};
+
+#ifdef NONREF_DECL_BASE
+void
+nonref_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys(a, b, &c, d);
+
+  #pragma omp target map(mys.a)
+  {
+    mys.a++;
+  }
+
+  #pragma omp target map(mys.b)
+  {
+    mys.b++;
+  }
+
+  assert (mys.a == 1);
+  assert (mys.b == 1);
+
+  #pragma omp target map(mys.c)
+  {
+    mys.c++;
+  }
+
+  #pragma omp target map(mys.d)
+  {
+    mys.d++;
+  }
+
+  assert (mys.c == &c + 1);
+  assert (mys.d == &c + 1);
+
+  #pragma omp target map(mys.e)
+  {
+    mys.e[0]++;
+  }
+
+  #pragma omp target map(mys.f)
+  {
+    mys.f[0]++;
+  }
+
+  assert (mys.e[0] == 2);
+  assert (mys.f[0] == 2);
+}
+#endif
+
+#ifdef REF_DECL_BASE
+void
+ref_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys_orig(a, b, &c, d);
+  S &mys = mys_orig;
+
+  #pragma omp target map(mys.a)
+  {
+    mys.a++;
+  }
+
+  #pragma omp target map(mys.b)
+  {
+    mys.b++;
+  }
+
+  assert (mys.a == 1);
+  assert (mys.b == 1);
+
+  #pragma omp target map(mys.c)
+  {
+    mys.c++;
+  }
+
+  #pragma omp target map(mys.d)
+  {
+    mys.d++;
+  }
+
+  assert (mys.c == &c + 1);
+  assert (mys.d == &c + 1);
+
+  #pragma omp target map(mys.e)
+  {
+    mys.e[0]++;
+  }
+
+  #pragma omp target map(mys.f)
+  {
+    mys.f[0]++;
+  }
+
+  assert (mys.e[0] == 2);
+  assert (mys.f[0] == 2);
+}
+#endif
+
+#ifdef PTR_DECL_BASE
+void
+ptr_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys_orig(a, b, &c, d);
+  S *mys = &mys_orig;
+
+  #pragma omp target map(mys->a)
+  {
+    mys->a++;
+  }
+
+  #pragma omp target map(mys->b)
+  {
+    mys->b++;
+  }
+
+  assert (mys->a == 1);
+  assert (mys->b == 1);
+
+  #pragma omp target map(mys->c)
+  {
+    mys->c++;
+  }
+
+  #pragma omp target map(mys->d)
+  {
+    mys->d++;
+  }
+
+  assert (mys->c == &c + 1);
+  assert (mys->d == &c + 1);
+
+  #pragma omp target map(mys->e)
+  {
+    mys->e[0]++;
+  }
+
+  #pragma omp target map(mys->f)
+  {
+    mys->f[0]++;
+  }
+
+  assert (mys->e[0] == 2);
+  assert (mys->f[0] == 2);
+}
+#endif
+
+#ifdef REF2PTR_DECL_BASE
+void
+ref2ptr_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys_orig(a, b, &c, d);
+  S *mysp = &mys_orig;
+  S *&mys = mysp;
+
+  #pragma omp target map(mys->a)
+  {
+    mys->a++;
+  }
+
+  #pragma omp target map(mys->b)
+  {
+    mys->b++;
+  }
+
+  assert (mys->a == 1);
+  assert (mys->b == 1);
+
+  #pragma omp target map(mys->c)
+  {
+    mys->c++;
+  }
+
+  #pragma omp target map(mys->d)
+  {
+    mys->d++;
+  }
+
+  assert (mys->c == &c + 1);
+  assert (mys->d == &c + 1);
+
+  #pragma omp target map(mys->e)
+  {
+    mys->e[0]++;
+  }
+
+  #pragma omp target map(mys->f)
+  {
+    mys->f[0]++;
+  }
+
+  assert (mys->e[0] == 2);
+  assert (mys->f[0] == 2);
+}
+#endif
+
+#ifdef ARRAY_DECL_BASE
+void
+array_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys[4] =
+    {
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d)
+    };
+
+  #pragma omp target map(mys[2].a)
+  {
+    mys[2].a++;
+  }
+
+  #pragma omp target map(mys[2].b)
+  {
+    mys[2].b++;
+  }
+
+  assert (mys[2].a == 1);
+  assert (mys[2].b == 1);
+
+  #pragma omp target map(mys[2].c)
+  {
+    mys[2].c++;
+  }
+
+  #pragma omp target map(mys[2].d)
+  {
+    mys[2].d++;
+  }
+
+  assert (mys[2].c == &c + 1);
+  assert (mys[2].d == &c + 1);
+
+  #pragma omp target map(mys[2].e)
+  {
+    mys[2].e[0]++;
+  }
+
+  #pragma omp target map(mys[2].f)
+  {
+    mys[2].f[0]++;
+  }
+
+  assert (mys[2].e[0] == 2);
+  assert (mys[2].f[0] == 2);
+}
+#endif
+
+#ifdef REF2ARRAY_DECL_BASE
+void
+ref2array_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys_orig[4] =
+    {
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d)
+    };
+  S (&mys)[4] = mys_orig;
+
+  #pragma omp target map(mys[2].a)
+  {
+    mys[2].a++;
+  }
+
+  #pragma omp target map(mys[2].b)
+  {
+    mys[2].b++;
+  }
+
+  assert (mys[2].a == 1);
+  assert (mys[2].b == 1);
+
+  #pragma omp target map(mys[2].c)
+  {
+    mys[2].c++;
+  }
+
+  #pragma omp target map(mys[2].d)
+  {
+    mys[2].d++;
+  }
+
+  assert (mys[2].c == &c + 1);
+  assert (mys[2].d == &c + 1);
+
+  #pragma omp target map(mys[2].e)
+  {
+    mys[2].e[0]++;
+  }
+
+  #pragma omp target map(mys[2].f)
+  {
+    mys[2].f[0]++;
+  }
+
+  assert (mys[2].e[0] == 2);
+  assert (mys[2].f[0] == 2);
+}
+#endif
+
+#ifdef PTR_OFFSET_DECL_BASE
+void
+ptr_offset_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys_orig[4] =
+    {
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d)
+    };
+  S *mys = &mys_orig[0];
+
+  #pragma omp target map(mys[2].a)
+  {
+    mys[2].a++;
+  }
+
+  #pragma omp target map(mys[2].b)
+  {
+    mys[2].b++;
+  }
+
+  assert (mys[2].a == 1);
+  assert (mys[2].b == 1);
+
+  #pragma omp target map(mys[2].c)
+  {
+    mys[2].c++;
+  }
+
+  #pragma omp target map(mys[2].d)
+  {
+    mys[2].d++;
+  }
+
+  assert (mys[2].c == &c + 1);
+  assert (mys[2].d == &c + 1);
+
+  #pragma omp target map(mys[2].e)
+  {
+    mys[2].e[0]++;
+  }
+
+  #pragma omp target map(mys[2].f)
+  {
+    mys[2].f[0]++;
+  }
+
+  assert (mys[2].e[0] == 2);
+  assert (mys[2].f[0] == 2);
+}
+#endif
+
+#ifdef REF2PTR_OFFSET_DECL_BASE
+void
+ref2ptr_offset_decl_base (void)
+{
+  int a = 0, b = 0, c, *d = &c;
+  S mys_orig[4] =
+    {
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d),
+      S(a, b, &c, d)
+    };
+  S *mys_ptr = &mys_orig[0];
+  S *&mys = mys_ptr;
+
+  #pragma omp target map(mys[2].a)
+  {
+    mys[2].a++;
+  }
+
+  #pragma omp target map(mys[2].b)
+  {
+    mys[2].b++;
+  }
+
+  assert (mys[2].a == 1);
+  assert (mys[2].b == 1);
+
+  #pragma omp target map(mys[2].c)
+  {
+    mys[2].c++;
+  }
+
+  #pragma omp target map(mys[2].d)
+  {
+    mys[2].d++;
+  }
+
+  assert (mys[2].c == &c + 1);
+  assert (mys[2].d == &c + 1);
+
+  #pragma omp target map(mys[2].e)
+  {
+    mys[2].e[0]++;
+  }
+
+  #pragma omp target map(mys[2].f)
+  {
+    mys[2].f[0]++;
+  }
+
+  assert (mys[2].e[0] == 2);
+  assert (mys[2].f[0] == 2);
+}
+#endif
+
+#ifdef MAP_SECTIONS
+void
+map_sections (void)
+{
+  int arr[10];
+  int *ptr;
+  int (&arrref)[10] = arr;
+  int *&ptrref = ptr;
+
+  ptr = new int[10];
+  memset (ptr, 0, sizeof (int) * 10);
+  memset (arr, 0, sizeof (int) * 10);
+
+  #pragma omp target map(arr[0:10])
+  {
+    arr[2]++;
+  }
+
+  #pragma omp target map(ptr[0:10])
+  {
+    ptr[2]++;
+  }
+
+  #pragma omp target map(arrref[0:10])
+  {
+    arrref[2]++;
+  }
+
+  #pragma omp target map(ptrref[0:10])
+  {
+    ptrref[2]++;
+  }
+
+  assert (arr[2] == 2);
+  assert (ptr[2] == 2);
+
+  delete[] ptr;
+}
+#endif
+
+struct T {
+  int a[10];
+  int (&b)[10];
+  int *c;
+  int *&d;
+
+  T(int (&b1)[10], int *c1, int *&d1) : b(b1), c(c1), d(d1)
+  {
+    memset (a, 0, sizeof a);
+  }
+};
+
+#ifdef NONREF_DECL_MEMBER_SLICE
+void
+nonref_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt(c, &c[0], d);
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(myt.a[0:10])
+  {
+    myt.a[2]++;
+  }
+
+  #pragma omp target map(myt.b[0:10])
+  {
+    myt.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt.c)
+
+  #pragma omp target map(myt.c[0:10])
+  {
+    myt.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt.c)
+
+  #pragma omp target enter data map(to: myt.d)
+
+  #pragma omp target map(myt.d[0:10])
+  {
+    myt.d[2]++;
+  }
+
+  #pragma omp target exit data map(from: myt.d)
+
+  assert (myt.a[2] == 1);
+  assert (myt.b[2] == 3);
+  assert (myt.c[2] == 3);
+  assert (myt.d[2] == 3);
+}
+#endif
+
+#ifdef NONREF_DECL_MEMBER_SLICE_BASEPTR
+void
+nonref_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt(c, &c[0], d);
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt.c) map(myt.c[0:10])
+  {
+    myt.c[2]++;
+  }
+
+  #pragma omp target map(to:myt.d) map(myt.d[0:10])
+  {
+    myt.d[2]++;
+  }
+
+  assert (myt.c[2] == 2);
+  assert (myt.d[2] == 2);
+}
+#endif
+
+#ifdef REF_DECL_MEMBER_SLICE
+void
+ref_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T &myt = myt_real;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(myt.a[0:10])
+  {
+    myt.a[2]++;
+  }
+
+  #pragma omp target map(myt.b[0:10])
+  {
+    myt.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt.c)
+
+  #pragma omp target map(myt.c[0:10])
+  {
+    myt.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt.c)
+
+  #pragma omp target enter data map(to: myt.d)
+
+  #pragma omp target map(myt.d[0:10])
+  {
+    myt.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt.d)
+
+  assert (myt.a[2] == 1);
+  assert (myt.b[2] == 3);
+  assert (myt.c[2] == 3);
+  assert (myt.d[2] == 3);
+}
+#endif
+
+#ifdef REF_DECL_MEMBER_SLICE_BASEPTR
+void
+ref_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T &myt = myt_real;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt.c) map(myt.c[0:10])
+  {
+    myt.c[2]++;
+  }
+
+  #pragma omp target map(to:myt.d) map(myt.d[0:10])
+  {
+    myt.d[2]++;
+  }
+
+  assert (myt.c[2] == 2);
+  assert (myt.d[2] == 2);
+}
+#endif
+
+#ifdef PTR_DECL_MEMBER_SLICE
+void
+ptr_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt = &myt_real;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target enter data map(to: myt)
+
+  #pragma omp target map(myt->a[0:10])
+  {
+    myt->a[2]++;
+  }
+
+  #pragma omp target map(myt->b[0:10])
+  {
+    myt->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt->c)
+
+  #pragma omp target map(myt->c[0:10])
+  {
+    myt->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt->c)
+
+  #pragma omp target enter data map(to: myt->d)
+
+  #pragma omp target map(myt->d[0:10])
+  {
+    myt->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt, myt->d)
+
+  assert (myt->a[2] == 1);
+  assert (myt->b[2] == 3);
+  assert (myt->c[2] == 3);
+  assert (myt->d[2] == 3);
+}
+#endif
+
+#ifdef PTR_DECL_MEMBER_SLICE_BASEPTR
+void
+ptr_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt = &myt_real;
+
+  memset (c, 0, sizeof c);
+
+  // These ones have an implicit firstprivate for 'myt'.
+  #pragma omp target map(to:myt->c) map(myt->c[0:10])
+  {
+    myt->c[2]++;
+  }
+
+  #pragma omp target map(to:myt->d) map(myt->d[0:10])
+  {
+    myt->d[2]++;
+  }
+
+  // These ones have an explicit "TO" mapping for 'myt'.
+  #pragma omp target map(to:myt) map(to:myt->c) map(myt->c[0:10])
+  {
+    myt->c[2]++;
+  }
+
+  #pragma omp target map(to:myt) map(to:myt->d) map(myt->d[0:10])
+  {
+    myt->d[2]++;
+  }
+
+  assert (myt->c[2] == 4);
+  assert (myt->d[2] == 4);
+}
+#endif
+
+#ifdef REF2PTR_DECL_MEMBER_SLICE
+void
+ref2ptr_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptr = &myt_real;
+  T *&myt = myt_ptr;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target enter data map(to: myt)
+
+  #pragma omp target map(myt->a[0:10])
+  {
+    myt->a[2]++;
+  }
+
+  #pragma omp target map(myt->b[0:10])
+  {
+    myt->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt->c)
+
+  #pragma omp target map(myt->c[0:10])
+  {
+    myt->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt->c)
+
+  #pragma omp target enter data map(to: myt->d)
+
+  #pragma omp target map(myt->d[0:10])
+  {
+    myt->d[2]++;
+  }
+
+  #pragma omp target exit data map(from: myt, myt->d)
+
+  assert (myt->a[2] == 1);
+  assert (myt->b[2] == 3);
+  assert (myt->c[2] == 3);
+  assert (myt->d[2] == 3);
+}
+#endif
+
+#ifdef REF2PTR_DECL_MEMBER_SLICE_BASEPTR
+void
+ref2ptr_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptr = &myt_real;
+  T *&myt = myt_ptr;
+
+  memset (c, 0, sizeof c);
+
+  // These ones have an implicit firstprivate for 'myt'.
+  #pragma omp target map(to:myt->c) map(myt->c[0:10])
+  {
+    myt->c[2]++;
+  }
+
+  #pragma omp target map(to:myt->d) map(myt->d[0:10])
+  {
+    myt->d[2]++;
+  }
+
+  // These ones have an explicit "TO" mapping for 'myt'.
+  #pragma omp target map(to:myt) map(to:myt->c) map(myt->c[0:10])
+  {
+    myt->c[2]++;
+  }
+
+  #pragma omp target map(to:myt) map(to:myt->d) map(myt->d[0:10])
+  {
+    myt->d[2]++;
+  }
+
+  assert (myt->c[2] == 4);
+  assert (myt->d[2] == 4);
+}
+#endif
+
+#ifdef ARRAY_DECL_MEMBER_SLICE
+void
+array_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(myt[2].a[0:10])
+  {
+    myt[2].a[2]++;
+  }
+
+  #pragma omp target map(myt[2].b[0:10])
+  {
+    myt[2].b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2].c)
+
+  #pragma omp target map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].c)
+
+  #pragma omp target enter data map(to: myt[2].d)
+
+  #pragma omp target map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].d)
+
+  assert (myt[2].a[2] == 1);
+  assert (myt[2].b[2] == 3);
+  assert (myt[2].c[2] == 3);
+  assert (myt[2].d[2] == 3);
+}
+#endif
+
+#ifdef ARRAY_DECL_MEMBER_SLICE_BASEPTR
+void
+array_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  assert (myt[2].c[2] == 2);
+  assert (myt[2].d[2] == 2);
+}
+#endif
+
+#ifdef REF2ARRAY_DECL_MEMBER_SLICE
+void
+ref2array_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+  T (&myt)[4] = myt_real;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(myt[2].a[0:10])
+  {
+    myt[2].a[2]++;
+  }
+
+  #pragma omp target map(myt[2].b[0:10])
+  {
+    myt[2].b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2].c)
+
+  #pragma omp target map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].c)
+
+  #pragma omp target enter data map(to: myt[2].d)
+
+  #pragma omp target map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].d)
+
+  assert (myt[2].a[2] == 1);
+  assert (myt[2].b[2] == 3);
+  assert (myt[2].c[2] == 3);
+  assert (myt[2].d[2] == 3);
+}
+#endif
+
+#ifdef REF2ARRAY_DECL_MEMBER_SLICE_BASEPTR
+void
+ref2array_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+  T (&myt)[4] = myt_real;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  assert (myt[2].c[2] == 2);
+  assert (myt[2].d[2] == 2);
+}
+#endif
+
+#ifdef PTR_OFFSET_DECL_MEMBER_SLICE
+void
+ptr_offset_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+  T *myt = &myt_real[0];
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(myt[2].a[0:10])
+  {
+    myt[2].a[2]++;
+  }
+
+  #pragma omp target map(myt[2].b[0:10])
+  {
+    myt[2].b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2].c)
+
+  #pragma omp target map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].c)
+
+  #pragma omp target enter data map(to: myt[2].d)
+
+  #pragma omp target map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].d)
+
+  assert (myt[2].a[2] == 1);
+  assert (myt[2].b[2] == 3);
+  assert (myt[2].c[2] == 3);
+  assert (myt[2].d[2] == 3);
+}
+#endif
+
+#ifdef PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+void
+ptr_offset_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+  T *myt = &myt_real[0];
+
+  memset (c, 0, sizeof c);
+
+  /* Implicit 'myt'.  */
+  #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  /* Explicit 'to'-mapped 'myt'.  */
+  #pragma omp target map(to:myt) map(to:myt[2].c) map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target map(to:myt) map(to:myt[2].d) map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  assert (myt[2].c[2] == 4);
+  assert (myt[2].d[2] == 4);
+}
+#endif
+
+#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE
+void
+ref2ptr_offset_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+  T *myt_ptr = &myt_real[0];
+  T *&myt = myt_ptr;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(myt[2].a[0:10])
+  {
+    myt[2].a[2]++;
+  }
+
+  #pragma omp target map(myt[2].b[0:10])
+  {
+    myt[2].b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2].c)
+
+  #pragma omp target map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].c)
+
+  #pragma omp target enter data map(to: myt[2].d)
+
+  #pragma omp target map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2].d)
+
+  assert (myt[2].a[2] == 1);
+  assert (myt[2].b[2] == 3);
+  assert (myt[2].c[2] == 3);
+  assert (myt[2].d[2] == 3);
+}
+#endif
+
+#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+void
+ref2ptr_offset_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real[4] =
+    {
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d),
+      T (c, &c[0], d)
+    };
+  T *myt_ptr = &myt_real[0];
+  T *&myt = myt_ptr;
+
+  memset (c, 0, sizeof c);
+
+  /* Implicit 'myt'.  */
+  #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  /* Explicit 'to'-mapped 'myt'.  */
+  #pragma omp target map(to:myt) map(to:myt[2].c) map(myt[2].c[0:10])
+  {
+    myt[2].c[2]++;
+  }
+
+  #pragma omp target map(to:myt) map(to:myt[2].d) map(myt[2].d[0:10])
+  {
+    myt[2].d[2]++;
+  }
+
+  assert (myt[2].c[2] == 4);
+  assert (myt[2].d[2] == 4);
+}
+#endif
+
+#ifdef PTRARRAY_DECL_MEMBER_SLICE
+void
+ptrarray_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt[4] =
+    {
+      &myt_real,
+      &myt_real,
+      &myt_real,
+      &myt_real
+    };
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target enter data map(to: myt[2])
+
+  #pragma omp target map(myt[2]->a[0:10])
+  {
+    myt[2]->a[2]++;
+  }
+
+  #pragma omp target map(myt[2]->b[0:10])
+  {
+    myt[2]->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2]->c)
+
+  #pragma omp target map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target exit data map(from: myt[2]->c)
+
+  #pragma omp target enter data map(to: myt[2]->d)
+
+  #pragma omp target map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  #pragma omp target exit data map(from: myt[2]->d)
+
+  #pragma omp target exit data map(release: myt[2])
+
+  assert (myt[2]->a[2] == 1);
+  assert (myt[2]->b[2] == 3);
+  assert (myt[2]->c[2] == 3);
+  assert (myt[2]->d[2] == 3);
+}
+#endif
+
+#ifdef PTRARRAY_DECL_MEMBER_SLICE_BASEPTR
+void
+ptrarray_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt[4] =
+    {
+      &myt_real,
+      &myt_real,
+      &myt_real,
+      &myt_real
+    };
+
+  memset (c, 0, sizeof c);
+
+  // Implicit 'myt'
+  #pragma omp target map(to: myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to: myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  // One element of 'myt'
+  #pragma omp target map(to:myt[2], myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2], myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  // Explicit map of all of 'myt'
+  #pragma omp target map(to:myt, myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  // Explicit map of a slice of 'myt'
+  #pragma omp target map(to:myt[1:3], myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt[1:3], myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  assert (myt[2]->c[2] == 8);
+  assert (myt[2]->d[2] == 8);
+}
+#endif
+
+#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE
+void
+ref2ptrarray_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptrarr[4] =
+    {
+      &myt_real,
+      &myt_real,
+      &myt_real,
+      &myt_real
+    };
+  T *(&myt)[4] = myt_ptrarr;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target enter data map(to: myt[2])
+
+  #pragma omp target map(myt[2]->a[0:10])
+  {
+    myt[2]->a[2]++;
+  }
+
+  #pragma omp target map(myt[2]->b[0:10])
+  {
+    myt[2]->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2]->c)
+
+  #pragma omp target map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2]->c)
+
+  #pragma omp target enter data map(to: myt[2]->d)
+
+  #pragma omp target map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[2]->d)
+
+  #pragma omp target exit data map(release: myt[2])
+
+  assert (myt[2]->a[2] == 1);
+  assert (myt[2]->b[2] == 3);
+  assert (myt[2]->c[2] == 3);
+  assert (myt[2]->d[2] == 3);
+}
+#endif
+
+#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE_BASEPTR
+void
+ref2ptrarray_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptrarr[4] =
+    {
+      &myt_real,
+      &myt_real,
+      &myt_real,
+      &myt_real
+    };
+  T *(&myt)[4] = myt_ptrarr;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt[2], myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2], myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  assert (myt[2]->c[2] == 4);
+  assert (myt[2]->d[2] == 4);
+}
+#endif
+
+#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE
+void
+ptrptr_offset_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptrarr[4] =
+    {
+      &myt_real,
+      &myt_real,
+      &myt_real,
+      &myt_real
+    };
+  T **myt = &myt_ptrarr[0];
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target enter data map(to: myt[0:3])
+
+  /* NOTE: For the implicit firstprivate 'myt' to work, the zeroth element of
+     myt[] must be mapped above -- otherwise the zero-length array section
+     lookup fails.  */
+  #pragma omp target map(myt[2]->a[0:10])
+  {
+    myt[2]->a[2]++;
+  }
+
+  #pragma omp target map(myt[2]->b[0:10])
+  {
+    myt[2]->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myt[2]->c)
+
+  #pragma omp target map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target exit data map(from: myt[2]->c)
+
+  #pragma omp target enter data map(to: myt[2]->d)
+
+  #pragma omp target map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  #pragma omp target exit data map(from: myt[0:3], myt[2]->d)
+
+  assert (myt[2]->a[2] == 1);
+  assert (myt[2]->b[2] == 3);
+  assert (myt[2]->c[2] == 3);
+  assert (myt[2]->d[2] == 3);
+}
+#endif
+
+#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+void
+ptrptr_offset_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptrarr[4] =
+    {
+      0,
+      0,
+      0,
+      &myt_real
+    };
+  T **myt = &myt_ptrarr[0];
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt[3], myt[3]->c) map(myt[3]->c[0:10])
+  {
+    myt[3]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt[3], myt[3]->d) map(myt[3]->d[0:10])
+  {
+    myt[3]->d[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[3], myt[3]->c) map(myt[3]->c[0:10])
+  {
+    myt[3]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[3], myt[3]->d) map(myt[3]->d[0:10])
+  {
+    myt[3]->d[2]++;
+  }
+
+  assert (myt[3]->c[2] == 4);
+  assert (myt[3]->d[2] == 4);
+}
+#endif
+
+#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE
+void
+ref2ptrptr_offset_decl_member_slice (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptrarr[4] =
+    {
+      0,
+      0,
+      &myt_real,
+      0
+    };
+  T **myt_ptrptr = &myt_ptrarr[0];
+  T **&myt = myt_ptrptr;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target enter data map(to: myt[0:3])
+
+  #pragma omp target map(myt[2]->a[0:10])
+  {
+    myt[2]->a[2]++;
+  }
+
+  #pragma omp target map(myt[2]->b[0:10])
+  {
+    myt[2]->b[2]++;
+  }
+
+  #pragma omp target enter data map(to:myt[2]->c)
+
+  #pragma omp target map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target exit data map(release:myt[2]->c)
+
+  #pragma omp target enter data map(to:myt[2]->d)
+
+  #pragma omp target map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myt[0:3], myt[2]->d)
+
+  assert (myt[2]->a[2] == 1);
+  assert (myt[2]->b[2] == 3);
+  assert (myt[2]->c[2] == 3);
+  assert (myt[2]->d[2] == 3);
+}
+#endif
+
+#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+void
+ref2ptrptr_offset_decl_member_slice_baseptr (void)
+{
+  int c[10];
+  int *d = &c[0];
+  T myt_real(c, &c[0], d);
+  T *myt_ptrarr[4] =
+    {
+      0,
+      0,
+      &myt_real,
+      0
+    };
+  T **myt_ptrptr = &myt_ptrarr[0];
+  T **&myt = myt_ptrptr;
+
+  memset (c, 0, sizeof c);
+
+  #pragma omp target map(to:myt[2], myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt[2], myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[2], myt[2]->c) map(myt[2]->c[0:10])
+  {
+    myt[2]->c[2]++;
+  }
+
+  #pragma omp target map(to:myt, myt[2], myt[2]->d) map(myt[2]->d[0:10])
+  {
+    myt[2]->d[2]++;
+  }
+
+  assert (myt[2]->c[2] == 4);
+  assert (myt[2]->d[2] == 4);
+}
+#endif
+
+struct U
+{
+  S s1;
+  T t1;
+  S &s2;
+  T &t2;
+  S *s3;
+  T *t3;
+  S *&s4;
+  T *&t4;
+
+  U(S &sptr1, T &tptr1, S &sptr2, T &tptr2, S *sptr3, T *tptr3,
+    S *&sptr4, T *&tptr4)
+    : s1(sptr1), t1(tptr1), s2(sptr2), t2(tptr2), s3(sptr3), t3(tptr3),
+      s4(sptr4), t4(tptr4)
+  {
+  }
+};
+
+#define INIT_S(N)				\
+  int a##N = 0, b##N = 0, c##N = 0, d##N = 0;	\
+  int *d##N##ptr = &d##N;			\
+  S s##N(a##N, b##N, &c##N, d##N##ptr)
+
+#define INIT_T(N)				\
+  int arr##N[10];				\
+  int *ptr##N = &arr##N[0];			\
+  T t##N(arr##N, &arr##N[0], ptr##N);		\
+  memset (arr##N, 0, sizeof arr##N)
+
+#define INIT_ST				\
+  INIT_S(1);				\
+  INIT_T(1);				\
+  INIT_S(2);				\
+  INIT_T(2);				\
+  INIT_S(3);				\
+  INIT_T(3);				\
+  int a4 = 0, b4 = 0, c4 = 0, d4 = 0;	\
+  int *d4ptr = &d4;			\
+  S *s4 = new S(a4, b4, &c4, d4ptr);	\
+  int arr4[10];				\
+  int *ptr4 = &arr4[0];			\
+  T *t4 = new T(arr4, &arr4[0], ptr4);	\
+  memset (arr4, 0, sizeof arr4)
+
+#ifdef NONREF_COMPONENT_BASE
+void
+nonref_component_base (void)
+{
+  INIT_ST;
+  U myu(s1, t1, s2, t2, &s3, &t3, s4, t4);
+
+  #pragma omp target map(myu.s1.a, myu.s1.b, myu.s1.c, myu.s1.d)
+  {
+    myu.s1.a++;
+    myu.s1.b++;
+    myu.s1.c++;
+    myu.s1.d++;
+  }
+
+  assert (myu.s1.a == 1);
+  assert (myu.s1.b == 1);
+  assert (myu.s1.c == &c1 + 1);
+  assert (myu.s1.d == &d1 + 1);
+
+  #pragma omp target map(myu.s2.a, myu.s2.b, myu.s2.c, myu.s2.d)
+  {
+    myu.s2.a++;
+    myu.s2.b++;
+    myu.s2.c++;
+    myu.s2.d++;
+  }
+
+  assert (myu.s2.a == 1);
+  assert (myu.s2.b == 1);
+  assert (myu.s2.c == &c2 + 1);
+  assert (myu.s2.d == &d2 + 1);
+
+  #pragma omp target map(to:myu.s3) \
+		     map(myu.s3->a, myu.s3->b, myu.s3->c, myu.s3->d)
+  {
+    myu.s3->a++;
+    myu.s3->b++;
+    myu.s3->c++;
+    myu.s3->d++;
+  }
+
+  assert (myu.s3->a == 1);
+  assert (myu.s3->b == 1);
+  assert (myu.s3->c == &c3 + 1);
+  assert (myu.s3->d == &d3 + 1);
+
+  #pragma omp target map(to:myu.s4) \
+		     map(myu.s4->a, myu.s4->b, myu.s4->c, myu.s4->d)
+  {
+    myu.s4->a++;
+    myu.s4->b++;
+    myu.s4->c++;
+    myu.s4->d++;
+  }
+
+  assert (myu.s4->a == 1);
+  assert (myu.s4->b == 1);
+  assert (myu.s4->c == &c4 + 1);
+  assert (myu.s4->d == &d4 + 1);
+
+  delete s4;
+  delete t4;
+}
+#endif
+
+#ifdef NONREF_COMPONENT_MEMBER_SLICE
+void
+nonref_component_member_slice (void)
+{
+  INIT_ST;
+  U myu(s1, t1, s2, t2, &s3, &t3, s4, t4);
+
+  #pragma omp target map(myu.t1.a[2:5])
+  {
+    myu.t1.a[2]++;
+  }
+
+  #pragma omp target map(myu.t1.b[2:5])
+  {
+    myu.t1.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t1.c)
+
+  #pragma omp target map(myu.t1.c[2:5])
+  {
+    myu.t1.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t1.c)
+
+  #pragma omp target enter data map(to: myu.t1.d)
+
+  #pragma omp target map(myu.t1.d[2:5])
+  {
+    myu.t1.d[2]++;
+  }
+
+  #pragma omp target exit data map(from: myu.t1.d)
+
+  assert (myu.t1.a[2] == 1);
+  assert (myu.t1.b[2] == 3);
+  assert (myu.t1.c[2] == 3);
+  assert (myu.t1.d[2] == 3);
+
+  #pragma omp target map(myu.t2.a[2:5])
+  {
+    myu.t2.a[2]++;
+  }
+
+  #pragma omp target map(myu.t2.b[2:5])
+  {
+    myu.t2.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t2.c)
+
+  #pragma omp target map(myu.t2.c[2:5])
+  {
+    myu.t2.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t2.c)
+
+  #pragma omp target enter data map(to: myu.t2.d)
+
+  #pragma omp target map(myu.t2.d[2:5])
+  {
+    myu.t2.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t2.d)
+
+  assert (myu.t2.a[2] == 1);
+  assert (myu.t2.b[2] == 3);
+  assert (myu.t2.c[2] == 3);
+  assert (myu.t2.d[2] == 3);
+
+  #pragma omp target enter data map(to: myu.t3)
+
+  #pragma omp target map(myu.t3->a[2:5])
+  {
+    myu.t3->a[2]++;
+  }
+
+  #pragma omp target map(myu.t3->b[2:5])
+  {
+    myu.t3->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t3->c)
+
+  #pragma omp target map(myu.t3->c[2:5])
+  {
+    myu.t3->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t3->c)
+
+  #pragma omp target enter data map(to: myu.t3->d)
+
+  #pragma omp target map(myu.t3->d[2:5])
+  {
+    myu.t3->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t3, myu.t3->d)
+
+  assert (myu.t3->a[2] == 1);
+  assert (myu.t3->b[2] == 3);
+  assert (myu.t3->c[2] == 3);
+  assert (myu.t3->d[2] == 3);
+
+  #pragma omp target enter data map(to: myu.t4)
+
+  #pragma omp target map(myu.t4->a[2:5])
+  {
+    myu.t4->a[2]++;
+  }
+
+  #pragma omp target map(myu.t4->b[2:5])
+  {
+    myu.t4->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t4->c)
+
+  #pragma omp target map(myu.t4->c[2:5])
+  {
+    myu.t4->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t4->c)
+
+  #pragma omp target enter data map(to: myu.t4->d)
+
+  #pragma omp target map(myu.t4->d[2:5])
+  {
+    myu.t4->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t4, myu.t4->d)
+
+  assert (myu.t4->a[2] == 1);
+  assert (myu.t4->b[2] == 3);
+  assert (myu.t4->c[2] == 3);
+  assert (myu.t4->d[2] == 3);
+
+  delete s4;
+  delete t4;
+}
+#endif
+
+#ifdef NONREF_COMPONENT_MEMBER_SLICE_BASEPTR
+void
+nonref_component_member_slice_baseptr (void)
+{
+  INIT_ST;
+  U myu(s1, t1, s2, t2, &s3, &t3, s4, t4);
+
+  #pragma omp target map(to: myu.t1.c) map(myu.t1.c[2:5])
+  {
+    myu.t1.c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t1.d) map(myu.t1.d[2:5])
+  {
+    myu.t1.d[2]++;
+  }
+
+  assert (myu.t1.c[2] == 2);
+  assert (myu.t1.d[2] == 2);
+
+  #pragma omp target map(to: myu.t2.c) map(myu.t2.c[2:5])
+  {
+    myu.t2.c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t2.d) map(myu.t2.d[2:5])
+  {
+    myu.t2.d[2]++;
+  }
+
+  assert (myu.t2.c[2] == 2);
+  assert (myu.t2.d[2] == 2);
+
+  #pragma omp target map(to: myu.t3, myu.t3->c) map(myu.t3->c[2:5])
+  {
+    myu.t3->c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t3, myu.t3->d) map(myu.t3->d[2:5])
+  {
+    myu.t3->d[2]++;
+  }
+
+  assert (myu.t3->c[2] == 2);
+  assert (myu.t3->d[2] == 2);
+
+  #pragma omp target map(to: myu.t4, myu.t4->c) map(myu.t4->c[2:5])
+  {
+    myu.t4->c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t4, myu.t4->d) map(myu.t4->d[2:5])
+  {
+    myu.t4->d[2]++;
+  }
+
+  assert (myu.t4->c[2] == 2);
+  assert (myu.t4->d[2] == 2);
+
+  delete s4;
+  delete t4;
+}
+#endif
+
+#ifdef REF_COMPONENT_BASE
+void
+ref_component_base (void)
+{
+  INIT_ST;
+  U myu_real(s1, t1, s2, t2, &s3, &t3, s4, t4);
+  U &myu = myu_real;
+
+  #pragma omp target map(myu.s1.a, myu.s1.b, myu.s1.c, myu.s1.d)
+  {
+    myu.s1.a++;
+    myu.s1.b++;
+    myu.s1.c++;
+    myu.s1.d++;
+  }
+
+  assert (myu.s1.a == 1);
+  assert (myu.s1.b == 1);
+  assert (myu.s1.c == &c1 + 1);
+  assert (myu.s1.d == &d1 + 1);
+
+  #pragma omp target map(myu.s2.a, myu.s2.b, myu.s2.c, myu.s2.d)
+  {
+    myu.s2.a++;
+    myu.s2.b++;
+    myu.s2.c++;
+    myu.s2.d++;
+  }
+
+  assert (myu.s2.a == 1);
+  assert (myu.s2.b == 1);
+  assert (myu.s2.c == &c2 + 1);
+  assert (myu.s2.d == &d2 + 1);
+
+  #pragma omp target map(to:myu.s3) \
+		     map(myu.s3->a, myu.s3->b, myu.s3->c, myu.s3->d)
+  {
+    myu.s3->a++;
+    myu.s3->b++;
+    myu.s3->c++;
+    myu.s3->d++;
+  }
+
+  assert (myu.s3->a == 1);
+  assert (myu.s3->b == 1);
+  assert (myu.s3->c == &c3 + 1);
+  assert (myu.s3->d == &d3 + 1);
+
+  #pragma omp target map(to:myu.s4) \
+		     map(myu.s4->a, myu.s4->b, myu.s4->c, myu.s4->d)
+  {
+    myu.s4->a++;
+    myu.s4->b++;
+    myu.s4->c++;
+    myu.s4->d++;
+  }
+
+  assert (myu.s4->a == 1);
+  assert (myu.s4->b == 1);
+  assert (myu.s4->c == &c4 + 1);
+  assert (myu.s4->d == &d4 + 1);
+
+  delete s4;
+  delete t4;
+}
+#endif
+
+#ifdef REF_COMPONENT_MEMBER_SLICE
+void
+ref_component_member_slice (void)
+{
+  INIT_ST;
+  U myu_real(s1, t1, s2, t2, &s3, &t3, s4, t4);
+  U &myu = myu_real;
+
+  #pragma omp target map(myu.t1.a[2:5])
+  {
+    myu.t1.a[2]++;
+  }
+
+  #pragma omp target map(myu.t1.b[2:5])
+  {
+    myu.t1.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t1.c)
+
+  #pragma omp target map(myu.t1.c[2:5])
+  {
+    myu.t1.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t1.c)
+
+  #pragma omp target enter data map(to: myu.t1.d)
+
+  #pragma omp target map(myu.t1.d[2:5])
+  {
+    myu.t1.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t1.d)
+
+  assert (myu.t1.a[2] == 1);
+  assert (myu.t1.b[2] == 3);
+  assert (myu.t1.c[2] == 3);
+  assert (myu.t1.d[2] == 3);
+
+  #pragma omp target map(myu.t2.a[2:5])
+  {
+    myu.t2.a[2]++;
+  }
+
+  #pragma omp target map(myu.t2.b[2:5])
+  {
+    myu.t2.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t2.c)
+
+  #pragma omp target map(myu.t2.c[2:5])
+  {
+    myu.t2.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t2.c)
+
+  #pragma omp target enter data map(to: myu.t2.d)
+
+  #pragma omp target map(myu.t2.d[2:5])
+  {
+    myu.t2.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t2.d)
+
+  assert (myu.t2.a[2] == 1);
+  assert (myu.t2.b[2] == 3);
+  assert (myu.t2.c[2] == 3);
+  assert (myu.t2.d[2] == 3);
+
+  #pragma omp target enter data map(to: myu.t3)
+
+  #pragma omp target map(myu.t3->a[2:5])
+  {
+    myu.t3->a[2]++;
+  }
+
+  #pragma omp target map(myu.t3->b[2:5])
+  {
+    myu.t3->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t3->c)
+
+  #pragma omp target map(myu.t3->c[2:5])
+  {
+    myu.t3->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t3->c)
+
+  #pragma omp target enter data map(to: myu.t3->d)
+
+  #pragma omp target map(myu.t3->d[2:5])
+  {
+    myu.t3->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t3, myu.t3->d)
+
+  assert (myu.t3->a[2] == 1);
+  assert (myu.t3->b[2] == 3);
+  assert (myu.t3->c[2] == 3);
+  assert (myu.t3->d[2] == 3);
+
+  #pragma omp target enter data map(to: myu.t4)
+
+  #pragma omp target map(myu.t4->a[2:5])
+  {
+    myu.t4->a[2]++;
+  }
+
+  #pragma omp target map(myu.t4->b[2:5])
+  {
+    myu.t4->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu.t4->c)
+
+  #pragma omp target map(myu.t4->c[2:5])
+  {
+    myu.t4->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t4->c)
+
+  #pragma omp target enter data map(to: myu.t4->d)
+
+  #pragma omp target map(myu.t4->d[2:5])
+  {
+    myu.t4->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu.t4, myu.t4->d)
+
+  assert (myu.t4->a[2] == 1);
+  assert (myu.t4->b[2] == 3);
+  assert (myu.t4->c[2] == 3);
+  assert (myu.t4->d[2] == 3);
+
+  delete s4;
+  delete t4;
+}
+#endif
+
+#ifdef REF_COMPONENT_MEMBER_SLICE_BASEPTR
+void
+ref_component_member_slice_baseptr (void)
+{
+  INIT_ST;
+  U myu_real(s1, t1, s2, t2, &s3, &t3, s4, t4);
+  U &myu = myu_real;
+
+  #pragma omp target map(to: myu.t1.c) map(myu.t1.c[2:5])
+  {
+    myu.t1.c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t1.d) map(myu.t1.d[2:5])
+  {
+    myu.t1.d[2]++;
+  }
+
+  assert (myu.t1.c[2] == 2);
+  assert (myu.t1.d[2] == 2);
+
+  #pragma omp target map(to: myu.t2.c) map(myu.t2.c[2:5])
+  {
+    myu.t2.c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t2.d) map(myu.t2.d[2:5])
+  {
+    myu.t2.d[2]++;
+  }
+
+  assert (myu.t2.c[2] == 2);
+  assert (myu.t2.d[2] == 2);
+
+  #pragma omp target map(to: myu.t3, myu.t3->c) map(myu.t3->c[2:5])
+  {
+    myu.t3->c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t3, myu.t3->d) map(myu.t3->d[2:5])
+  {
+    myu.t3->d[2]++;
+  }
+
+  assert (myu.t3->c[2] == 2);
+  assert (myu.t3->d[2] == 2);
+
+  #pragma omp target map(to: myu.t4, myu.t4->c) map(myu.t4->c[2:5])
+  {
+    myu.t4->c[2]++;
+  }
+
+  #pragma omp target map(to: myu.t4, myu.t4->d) map(myu.t4->d[2:5])
+  {
+    myu.t4->d[2]++;
+  }
+
+  assert (myu.t4->c[2] == 2);
+  assert (myu.t4->d[2] == 2);
+
+  delete s4;
+  delete t4;
+}
+#endif
+
+#ifdef PTR_COMPONENT_BASE
+void
+ptr_component_base (void)
+{
+  INIT_ST;
+  U *myu = new U(s1, t1, s2, t2, &s3, &t3, s4, t4);
+
+  #pragma omp target map(myu->s1.a, myu->s1.b, myu->s1.c, myu->s1.d)
+  {
+    myu->s1.a++;
+    myu->s1.b++;
+    myu->s1.c++;
+    myu->s1.d++;
+  }
+
+  assert (myu->s1.a == 1);
+  assert (myu->s1.b == 1);
+  assert (myu->s1.c == &c1 + 1);
+  assert (myu->s1.d == &d1 + 1);
+
+  #pragma omp target map(myu->s2.a, myu->s2.b, myu->s2.c, myu->s2.d)
+  {
+    myu->s2.a++;
+    myu->s2.b++;
+    myu->s2.c++;
+    myu->s2.d++;
+  }
+
+  assert (myu->s2.a == 1);
+  assert (myu->s2.b == 1);
+  assert (myu->s2.c == &c2 + 1);
+  assert (myu->s2.d == &d2 + 1);
+
+  #pragma omp target map(to:myu->s3) \
+		     map(myu->s3->a, myu->s3->b, myu->s3->c, myu->s3->d)
+  {
+    myu->s3->a++;
+    myu->s3->b++;
+    myu->s3->c++;
+    myu->s3->d++;
+  }
+
+  assert (myu->s3->a == 1);
+  assert (myu->s3->b == 1);
+  assert (myu->s3->c == &c3 + 1);
+  assert (myu->s3->d == &d3 + 1);
+
+  #pragma omp target map(to:myu->s4) \
+		     map(myu->s4->a, myu->s4->b, myu->s4->c, myu->s4->d)
+  {
+    myu->s4->a++;
+    myu->s4->b++;
+    myu->s4->c++;
+    myu->s4->d++;
+  }
+
+  assert (myu->s4->a == 1);
+  assert (myu->s4->b == 1);
+  assert (myu->s4->c == &c4 + 1);
+  assert (myu->s4->d == &d4 + 1);
+
+  delete s4;
+  delete t4;
+  delete myu;
+}
+#endif
+
+#ifdef PTR_COMPONENT_MEMBER_SLICE
+void
+ptr_component_member_slice (void)
+{
+  INIT_ST;
+  U *myu = new U(s1, t1, s2, t2, &s3, &t3, s4, t4);
+
+  #pragma omp target map(myu->t1.a[2:5])
+  {
+    myu->t1.a[2]++;
+  }
+
+  #pragma omp target map(myu->t1.b[2:5])
+  {
+    myu->t1.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t1.c)
+
+  #pragma omp target map(myu->t1.c[2:5])
+  {
+    myu->t1.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t1.c)
+
+  #pragma omp target enter data map(to: myu->t1.d)
+
+  #pragma omp target map(myu->t1.d[2:5])
+  {
+    myu->t1.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t1.d)
+
+  assert (myu->t1.a[2] == 1);
+  assert (myu->t1.b[2] == 3);
+  assert (myu->t1.c[2] == 3);
+  assert (myu->t1.d[2] == 3);
+
+  #pragma omp target map(myu->t2.a[2:5])
+  {
+    myu->t2.a[2]++;
+  }
+
+  #pragma omp target map(myu->t2.b[2:5])
+  {
+    myu->t2.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t2.c)
+
+  #pragma omp target map(myu->t2.c[2:5])
+  {
+    myu->t2.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t2.c)
+
+  #pragma omp target enter data map(to: myu->t2.d)
+
+  #pragma omp target map(myu->t2.d[2:5])
+  {
+    myu->t2.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t2.d)
+
+  assert (myu->t2.a[2] == 1);
+  assert (myu->t2.b[2] == 3);
+  assert (myu->t2.c[2] == 3);
+  assert (myu->t2.d[2] == 3);
+
+  #pragma omp target enter data map(to: myu->t3)
+
+  #pragma omp target map(myu->t3->a[2:5])
+  {
+    myu->t3->a[2]++;
+  }
+
+  #pragma omp target map(myu->t3->b[2:5])
+  {
+    myu->t3->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t3->c)
+
+  #pragma omp target map(myu->t3->c[2:5])
+  {
+    myu->t3->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t3->c)
+
+  #pragma omp target enter data map(to: myu->t3->d)
+
+  #pragma omp target map(myu->t3->d[2:5])
+  {
+    myu->t3->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t3, myu->t3->d)
+
+  assert (myu->t3->a[2] == 1);
+  assert (myu->t3->b[2] == 3);
+  assert (myu->t3->c[2] == 3);
+  assert (myu->t3->d[2] == 3);
+
+  #pragma omp target enter data map(to: myu->t4)
+
+  #pragma omp target map(myu->t4->a[2:5])
+  {
+    myu->t4->a[2]++;
+  }
+
+  #pragma omp target map(myu->t4->b[2:5])
+  {
+    myu->t4->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t4->c)
+
+  #pragma omp target map(myu->t4->c[2:5])
+  {
+    myu->t4->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t4->c)
+
+  #pragma omp target enter data map(to: myu->t4->d)
+
+  #pragma omp target map(myu->t4->d[2:5])
+  {
+    myu->t4->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t4, myu->t4->d)
+
+  assert (myu->t4->a[2] == 1);
+  assert (myu->t4->b[2] == 3);
+  assert (myu->t4->c[2] == 3);
+  assert (myu->t4->d[2] == 3);
+
+  delete s4;
+  delete t4;
+  delete myu;
+}
+#endif
+
+#ifdef PTR_COMPONENT_MEMBER_SLICE_BASEPTR
+void
+ptr_component_member_slice_baseptr (void)
+{
+  INIT_ST;
+  U *myu = new U(s1, t1, s2, t2, &s3, &t3, s4, t4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t1.c) map(myu->t1.c[2:5])
+  {
+    myu->t1.c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t1.d) map(myu->t1.d[2:5])
+  {
+    myu->t1.d[2]++;
+  }
+
+  assert (myu->t1.c[2] == 2);
+  assert (myu->t1.d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t1.c) map(myu->t1.c[2:5])
+  {
+    myu->t1.c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t1.d) map(myu->t1.d[2:5])
+  {
+    myu->t1.d[2]++;
+  }
+
+  assert (myu->t1.c[2] == 4);
+  assert (myu->t1.d[2] == 4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t2.c) map(myu->t2.c[2:5])
+  {
+    myu->t2.c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t2.d) map(myu->t2.d[2:5])
+  {
+    myu->t2.d[2]++;
+  }
+
+  assert (myu->t2.c[2] == 2);
+  assert (myu->t2.d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t2.c) map(myu->t2.c[2:5])
+  {
+    myu->t2.c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t2.d) map(myu->t2.d[2:5])
+  {
+    myu->t2.d[2]++;
+  }
+
+  assert (myu->t2.c[2] == 4);
+  assert (myu->t2.d[2] == 4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t3, myu->t3->c) map(myu->t3->c[2:5])
+  {
+    myu->t3->c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t3, myu->t3->d) map(myu->t3->d[2:5])
+  {
+    myu->t3->d[2]++;
+  }
+
+  assert (myu->t3->c[2] == 2);
+  assert (myu->t3->d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t3, myu->t3->c) map(myu->t3->c[2:5])
+  {
+    myu->t3->c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t3, myu->t3->d) map(myu->t3->d[2:5])
+  {
+    myu->t3->d[2]++;
+  }
+
+  assert (myu->t3->c[2] == 4);
+  assert (myu->t3->d[2] == 4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t4, myu->t4->c) map(myu->t4->c[2:5])
+  {
+    myu->t4->c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t4, myu->t4->d) map(myu->t4->d[2:5])
+  {
+    myu->t4->d[2]++;
+  }
+
+  assert (myu->t4->c[2] == 2);
+  assert (myu->t4->d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t4, myu->t4->c) map(myu->t4->c[2:5])
+  {
+    myu->t4->c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t4, myu->t4->d) map(myu->t4->d[2:5])
+  {
+    myu->t4->d[2]++;
+  }
+
+  assert (myu->t4->c[2] == 4);
+  assert (myu->t4->d[2] == 4);
+
+  delete s4;
+  delete t4;
+  delete myu;
+}
+#endif
+
+#ifdef REF2PTR_COMPONENT_BASE
+void
+ref2ptr_component_base (void)
+{
+  INIT_ST;
+  U *myu_ptr = new U(s1, t1, s2, t2, &s3, &t3, s4, t4);
+  U *&myu = myu_ptr;
+
+  #pragma omp target map(myu->s1.a, myu->s1.b, myu->s1.c, myu->s1.d)
+  {
+    myu->s1.a++;
+    myu->s1.b++;
+    myu->s1.c++;
+    myu->s1.d++;
+  }
+
+  assert (myu->s1.a == 1);
+  assert (myu->s1.b == 1);
+  assert (myu->s1.c == &c1 + 1);
+  assert (myu->s1.d == &d1 + 1);
+
+  #pragma omp target map(myu->s2.a, myu->s2.b, myu->s2.c, myu->s2.d)
+  {
+    myu->s2.a++;
+    myu->s2.b++;
+    myu->s2.c++;
+    myu->s2.d++;
+  }
+
+  assert (myu->s2.a == 1);
+  assert (myu->s2.b == 1);
+  assert (myu->s2.c == &c2 + 1);
+  assert (myu->s2.d == &d2 + 1);
+
+  #pragma omp target map(to:myu->s3) \
+		     map(myu->s3->a, myu->s3->b, myu->s3->c, myu->s3->d)
+  {
+    myu->s3->a++;
+    myu->s3->b++;
+    myu->s3->c++;
+    myu->s3->d++;
+  }
+
+  assert (myu->s3->a == 1);
+  assert (myu->s3->b == 1);
+  assert (myu->s3->c == &c3 + 1);
+  assert (myu->s3->d == &d3 + 1);
+
+  #pragma omp target map(to:myu->s4) \
+		     map(myu->s4->a, myu->s4->b, myu->s4->c, myu->s4->d)
+  {
+    myu->s4->a++;
+    myu->s4->b++;
+    myu->s4->c++;
+    myu->s4->d++;
+  }
+
+  assert (myu->s4->a == 1);
+  assert (myu->s4->b == 1);
+  assert (myu->s4->c == &c4 + 1);
+  assert (myu->s4->d == &d4 + 1);
+
+  delete s4;
+  delete t4;
+  delete myu_ptr;
+}
+#endif
+
+#ifdef REF2PTR_COMPONENT_MEMBER_SLICE
+void
+ref2ptr_component_member_slice (void)
+{
+  INIT_ST;
+  U *myu_ptr = new U(s1, t1, s2, t2, &s3, &t3, s4, t4);
+  U *&myu = myu_ptr;
+
+  #pragma omp target map(myu->t1.a[2:5])
+  {
+    myu->t1.a[2]++;
+  }
+
+  #pragma omp target map(myu->t1.b[2:5])
+  {
+    myu->t1.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t1.c)
+
+  #pragma omp target map(myu->t1.c[2:5])
+  {
+    myu->t1.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t1.c)
+
+  #pragma omp target enter data map(to: myu->t1.d)
+
+  #pragma omp target map(myu->t1.d[2:5])
+  {
+    myu->t1.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t1.d)
+
+  assert (myu->t1.a[2] == 1);
+  assert (myu->t1.b[2] == 3);
+  assert (myu->t1.c[2] == 3);
+  assert (myu->t1.d[2] == 3);
+
+  #pragma omp target map(myu->t2.a[2:5])
+  {
+    myu->t2.a[2]++;
+  }
+
+  #pragma omp target map(myu->t2.b[2:5])
+  {
+    myu->t2.b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t2.c)
+
+  #pragma omp target map(myu->t2.c[2:5])
+  {
+    myu->t2.c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t2.c)
+
+  #pragma omp target enter data map(to: myu->t2.d)
+
+  #pragma omp target map(myu->t2.d[2:5])
+  {
+    myu->t2.d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t2.d)
+
+  assert (myu->t2.a[2] == 1);
+  assert (myu->t2.b[2] == 3);
+  assert (myu->t2.c[2] == 3);
+  assert (myu->t2.d[2] == 3);
+
+  #pragma omp target enter data map(to: myu->t3)
+
+  #pragma omp target map(myu->t3->a[2:5])
+  {
+    myu->t3->a[2]++;
+  }
+
+  #pragma omp target map(myu->t3->b[2:5])
+  {
+    myu->t3->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t3->c)
+
+  #pragma omp target map(myu->t3->c[2:5])
+  {
+    myu->t3->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t3->c)
+
+  #pragma omp target enter data map(to: myu->t3->d)
+
+  #pragma omp target map(myu->t3->d[2:5])
+  {
+    myu->t3->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t3, myu->t3->d)
+
+  assert (myu->t3->a[2] == 1);
+  assert (myu->t3->b[2] == 3);
+  assert (myu->t3->c[2] == 3);
+  assert (myu->t3->d[2] == 3);
+
+  #pragma omp target enter data map(to: myu->t4)
+
+  #pragma omp target map(myu->t4->a[2:5])
+  {
+    myu->t4->a[2]++;
+  }
+
+  #pragma omp target map(myu->t4->b[2:5])
+  {
+    myu->t4->b[2]++;
+  }
+
+  #pragma omp target enter data map(to: myu->t4->c)
+
+  #pragma omp target map(myu->t4->c[2:5])
+  {
+    myu->t4->c[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t4->c)
+
+  #pragma omp target enter data map(to: myu->t4->d)
+
+  #pragma omp target map(myu->t4->d[2:5])
+  {
+    myu->t4->d[2]++;
+  }
+
+  #pragma omp target exit data map(release: myu->t4, myu->t4->d)
+
+  assert (myu->t4->a[2] == 1);
+  assert (myu->t4->b[2] == 3);
+  assert (myu->t4->c[2] == 3);
+  assert (myu->t4->d[2] == 3);
+
+  delete s4;
+  delete t4;
+  delete myu_ptr;
+}
+#endif
+
+#ifdef REF2PTR_COMPONENT_MEMBER_SLICE_BASEPTR
+void
+ref2ptr_component_member_slice_baseptr (void)
+{
+  INIT_ST;
+  U *myu_ptr = new U(s1, t1, s2, t2, &s3, &t3, s4, t4);
+  U *&myu = myu_ptr;
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t1.c) map(myu->t1.c[2:5])
+  {
+    myu->t1.c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t1.d) map(myu->t1.d[2:5])
+  {
+    myu->t1.d[2]++;
+  }
+
+  assert (myu->t1.c[2] == 2);
+  assert (myu->t1.d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t1.c) map(myu->t1.c[2:5])
+  {
+    myu->t1.c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t1.d) map(myu->t1.d[2:5])
+  {
+    myu->t1.d[2]++;
+  }
+
+  assert (myu->t1.c[2] == 4);
+  assert (myu->t1.d[2] == 4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t2.c) map(myu->t2.c[2:5])
+  {
+    myu->t2.c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t2.d) map(myu->t2.d[2:5])
+  {
+    myu->t2.d[2]++;
+  }
+
+  assert (myu->t2.c[2] == 2);
+  assert (myu->t2.d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t2.c) map(myu->t2.c[2:5])
+  {
+    myu->t2.c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t2.d) map(myu->t2.d[2:5])
+  {
+    myu->t2.d[2]++;
+  }
+
+  assert (myu->t2.c[2] == 4);
+  assert (myu->t2.d[2] == 4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t3, myu->t3->c) map(myu->t3->c[2:5])
+  {
+    myu->t3->c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t3, myu->t3->d) map(myu->t3->d[2:5])
+  {
+    myu->t3->d[2]++;
+  }
+
+  assert (myu->t3->c[2] == 2);
+  assert (myu->t3->d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t3, myu->t3->c) map(myu->t3->c[2:5])
+  {
+    myu->t3->c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t3, myu->t3->d) map(myu->t3->d[2:5])
+  {
+    myu->t3->d[2]++;
+  }
+
+  assert (myu->t3->c[2] == 4);
+  assert (myu->t3->d[2] == 4);
+
+  /* Implicit firstprivate 'myu'.  */
+  #pragma omp target map(to: myu->t4, myu->t4->c) map(myu->t4->c[2:5])
+  {
+    myu->t4->c[2]++;
+  }
+
+  #pragma omp target map(to: myu->t4, myu->t4->d) map(myu->t4->d[2:5])
+  {
+    myu->t4->d[2]++;
+  }
+
+  assert (myu->t4->c[2] == 2);
+  assert (myu->t4->d[2] == 2);
+
+  /* Explicitly-mapped 'myu'.  */
+  #pragma omp target map(to: myu, myu->t4, myu->t4->c) map(myu->t4->c[2:5])
+  {
+    myu->t4->c[2]++;
+  }
+
+  #pragma omp target map(to: myu, myu->t4, myu->t4->d) map(myu->t4->d[2:5])
+  {
+    myu->t4->d[2]++;
+  }
+
+  assert (myu->t4->c[2] == 4);
+  assert (myu->t4->d[2] == 4);
+
+  delete s4;
+  delete t4;
+  delete myu_ptr;
+}
+#endif
+
+int main (int argc, char *argv[])
+{
+#ifdef MAP_DECLS
+  map_decls ();
+#endif
+
+#ifdef NONREF_DECL_BASE
+  nonref_decl_base ();
+#endif
+#ifdef REF_DECL_BASE
+  ref_decl_base ();
+#endif
+#ifdef PTR_DECL_BASE
+  ptr_decl_base ();
+#endif
+#ifdef REF2PTR_DECL_BASE
+  ref2ptr_decl_base ();
+#endif
+
+#ifdef ARRAY_DECL_BASE
+  array_decl_base ();
+#endif
+#ifdef REF2ARRAY_DECL_BASE
+  ref2array_decl_base ();
+#endif
+#ifdef PTR_OFFSET_DECL_BASE
+  ptr_offset_decl_base ();
+#endif
+#ifdef REF2PTR_OFFSET_DECL_BASE
+  ref2ptr_offset_decl_base ();
+#endif
+
+#ifdef MAP_SECTIONS
+  map_sections ();
+#endif
+
+#ifdef NONREF_DECL_MEMBER_SLICE
+  nonref_decl_member_slice ();
+#endif
+#ifdef NONREF_DECL_MEMBER_SLICE_BASEPTR
+  nonref_decl_member_slice_baseptr ();
+#endif
+#ifdef REF_DECL_MEMBER_SLICE
+  ref_decl_member_slice ();
+#endif
+#ifdef REF_DECL_MEMBER_SLICE_BASEPTR
+  ref_decl_member_slice_baseptr ();
+#endif
+#ifdef PTR_DECL_MEMBER_SLICE
+  ptr_decl_member_slice ();
+#endif
+#ifdef PTR_DECL_MEMBER_SLICE_BASEPTR
+  ptr_decl_member_slice_baseptr ();
+#endif
+#ifdef REF2PTR_DECL_MEMBER_SLICE
+  ref2ptr_decl_member_slice ();
+#endif
+#ifdef REF2PTR_DECL_MEMBER_SLICE_BASEPTR
+  ref2ptr_decl_member_slice_baseptr ();
+#endif
+
+#ifdef ARRAY_DECL_MEMBER_SLICE
+  array_decl_member_slice ();
+#endif
+#ifdef ARRAY_DECL_MEMBER_SLICE_BASEPTR
+  array_decl_member_slice_baseptr ();
+#endif
+#ifdef REF2ARRAY_DECL_MEMBER_SLICE
+  ref2array_decl_member_slice ();
+#endif
+#ifdef REF2ARRAY_DECL_MEMBER_SLICE_BASEPTR
+  ref2array_decl_member_slice_baseptr ();
+#endif
+#ifdef PTR_OFFSET_DECL_MEMBER_SLICE
+  ptr_offset_decl_member_slice ();
+#endif
+#ifdef PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+  ptr_offset_decl_member_slice_baseptr ();
+#endif
+#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE
+  ref2ptr_offset_decl_member_slice ();
+#endif
+#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+  ref2ptr_offset_decl_member_slice_baseptr ();
+#endif
+
+#ifdef PTRARRAY_DECL_MEMBER_SLICE
+  ptrarray_decl_member_slice ();
+#endif
+#ifdef PTRARRAY_DECL_MEMBER_SLICE_BASEPTR
+  ptrarray_decl_member_slice_baseptr ();
+#endif
+#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE
+  ref2ptrarray_decl_member_slice ();
+#endif
+#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE_BASEPTR
+  ref2ptrarray_decl_member_slice_baseptr ();
+#endif
+#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE
+  ptrptr_offset_decl_member_slice ();
+#endif
+#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+  ptrptr_offset_decl_member_slice_baseptr ();
+#endif
+#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE
+  ref2ptrptr_offset_decl_member_slice ();
+#endif
+#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR
+  ref2ptrptr_offset_decl_member_slice_baseptr ();
+#endif
+
+#ifdef NONREF_COMPONENT_BASE
+  nonref_component_base ();
+#endif
+#ifdef NONREF_COMPONENT_MEMBER_SLICE
+  nonref_component_member_slice ();
+#endif
+#ifdef NONREF_COMPONENT_MEMBER_SLICE_BASEPTR
+  nonref_component_member_slice_baseptr ();
+#endif
+
+#ifdef REF_COMPONENT_BASE
+  ref_component_base ();
+#endif
+#ifdef REF_COMPONENT_MEMBER_SLICE
+  ref_component_member_slice ();
+#endif
+#ifdef REF_COMPONENT_MEMBER_SLICE_BASEPTR
+  ref_component_member_slice_baseptr ();
+#endif
+
+#ifdef PTR_COMPONENT_BASE
+  ptr_component_base ();
+#endif
+#ifdef PTR_COMPONENT_MEMBER_SLICE
+  ptr_component_member_slice ();
+#endif
+#ifdef PTR_COMPONENT_MEMBER_SLICE_BASEPTR
+  ptr_component_member_slice_baseptr ();
+#endif
+
+#ifdef REF2PTR_COMPONENT_BASE
+  ref2ptr_component_base ();
+#endif
+#ifdef REF2PTR_COMPONENT_MEMBER_SLICE
+  ref2ptr_component_member_slice ();
+#endif
+#ifdef REF2PTR_COMPONENT_MEMBER_SLICE_BASEPTR
+  ref2ptr_component_member_slice_baseptr ();
+#endif
+
+  return 0;
+}
diff --git a/libgomp/testsuite/libgomp.c++/baseptrs-5.C b/libgomp/testsuite/libgomp.c++/baseptrs-5.C
new file mode 100644
index 000000000000..16bdfff3ae08
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c++/baseptrs-5.C
@@ -0,0 +1,62 @@ 
+// { dg-do run }
+
+#include <cstring>
+#include <cassert>
+
+struct sa
+{
+  int *ptr;
+  int *ptr2;
+};
+
+struct sb
+{
+  int arr[10];
+};
+
+struct scp
+{
+  sa *&a;
+  sb *&b;
+  scp (sa *&my_a, sb *&my_b) : a(my_a), b(my_b) {}
+};
+
+int
+main ()
+{
+  sa *my_a = new sa;
+  sb *my_b = new sb;
+
+  my_a->ptr = new int[10];
+  my_a->ptr2 = new int[10];
+  scp *my_c = new scp(my_a, my_b);
+
+  memset (my_c->a->ptr, 0, sizeof (int) * 10);
+  memset (my_c->a->ptr2, 0, sizeof (int) * 10);
+
+  #pragma omp target map (my_c->a, \
+			  my_c->a->ptr, my_c->a->ptr[:10], \
+			  my_c->a->ptr2, my_c->a->ptr2[:10])
+  {
+    for (int i = 0; i < 10; i++)
+      {
+	my_c->a->ptr[i] = i;
+	my_c->a->ptr2[i] = i * 2;
+      }
+  }
+
+  for (int i = 0; i < 10; i++)
+    {
+      assert (my_c->a->ptr[i] == i);
+      assert (my_c->a->ptr2[i] == i * 2);
+    }
+
+  delete[] my_a->ptr;
+  delete[] my_a->ptr2;
+  delete my_a;
+  delete my_b;
+  delete my_c;
+
+  return 0;
+}
+
diff --git a/libgomp/testsuite/libgomp.c++/class-array-1.C b/libgomp/testsuite/libgomp.c++/class-array-1.C
new file mode 100644
index 000000000000..d8d3f7f1f99a
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c++/class-array-1.C
@@ -0,0 +1,59 @@ 
+/* { dg-do run } */
+
+#include <cassert>
+
+#define N 1024
+
+class M {
+  int array[N];
+
+public:
+  M ()
+  {
+    for (int i = 0; i < N; i++)
+      array[i] = 0;
+  }
+
+  void incr_with_this (int c)
+  {
+#pragma omp target map(this->array[:N])
+    for (int i = 0; i < N; i++)
+      array[i] += c;
+  }
+
+  void incr_without_this (int c)
+  {
+#pragma omp target map(array[:N])
+    for (int i = 0; i < N; i++)
+      array[i] += c;
+  }
+
+  void incr_implicit (int c)
+  {
+#pragma omp target
+    for (int i = 0; i < N; i++)
+      array[i] += c;
+  }
+
+  void check (int c)
+  {
+    for (int i = 0; i < N; i++)
+      assert (array[i] == c);
+  }
+};
+
+int
+main (int argc, char *argv[])
+{
+  M m;
+
+  m.check (0);
+  m.incr_with_this (3);
+  m.check (3);
+  m.incr_without_this (5);
+  m.check (8);
+  m.incr_implicit (2);
+  m.check (10);
+
+  return 0;
+}
diff --git a/libgomp/testsuite/libgomp.c++/target-48.C b/libgomp/testsuite/libgomp.c++/target-48.C
new file mode 100644
index 000000000000..db171d2f5a3d
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c++/target-48.C
@@ -0,0 +1,32 @@ 
+#include <cstring>
+#include <cassert>
+
+struct s {
+  int (&a)[10];
+  s(int (&a0)[10]) : a(a0) {}
+};
+
+int
+main (int argc, char *argv[])
+{
+  int la[10];
+  s v(la);
+
+  memset (la, 0, sizeof la);
+
+  #pragma omp target enter data map(to: v)
+
+  /* This mapping must use GOMP_MAP_ATTACH_DETACH not GOMP_MAP_ALWAYS_POINTER,
+     else the host reference v.a will be corrupted on copy-out.  */
+
+  #pragma omp target map(v.a[0:10])
+  {
+    v.a[5]++;
+  }
+
+  #pragma omp target exit data map(from: v)
+
+  assert (v.a[5] == 1);
+
+  return 0;
+}
diff --git a/libgomp/testsuite/libgomp.c++/target-49.C b/libgomp/testsuite/libgomp.c++/target-49.C
new file mode 100644
index 000000000000..efae9c9580ef
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c++/target-49.C
@@ -0,0 +1,37 @@ 
+#include <cstring>
+#include <cassert>
+
+struct s {
+  int (&a)[10];
+  s(int (&a0)[10]) : a(a0) {}
+};
+
+int
+main (int argc, char *argv[])
+{
+  int la[10];
+  s v_real(la);
+  s *v = &v_real;
+
+  memset (la, 0, sizeof la);
+
+  #pragma omp target enter data map(to: v)
+
+  /* Copying the whole v[0] here DOES NOT WORK yet because the reference 'a' is
+     not copied "as if" it was mapped explicitly as a member.  FIXME.  */
+  #pragma omp target enter data map(to: v[0])
+
+  //#pragma omp target
+  {
+    v->a[5]++;
+  }
+
+  #pragma omp target exit data map(release: v[0])
+  #pragma omp target exit data map(from: v)
+
+  assert (v->a[5] == 1);
+
+  return 0;
+}
+
+// { dg-xfail-run-if "TODO" { *-*-* } { "-DACC_MEM_SHARED=0" } }
diff --git a/libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c
new file mode 100644
index 000000000000..073615625b7f
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c
@@ -0,0 +1,50 @@ 
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+#include <stdio.h>
+
+#define N 32
+
+typedef struct {
+  int x2[10][N];
+} x1type;
+
+typedef struct {
+  x1type x1[10];
+} p2type;
+
+typedef struct {
+  p2type *p2;
+} p1type;
+
+typedef struct {
+  p1type *p1;
+} x0type;
+
+typedef struct {
+  x0type x0[10];
+} p0type;
+
+int main (int argc, char *argv[])
+{
+  p0type *p0;
+  int k1 = 0, k2 = 0, k3 = 0, n = N;
+
+  p0 = (p0type *) malloc (sizeof *p0);
+  p0->x0[0].p1 = (p1type *) malloc (sizeof *p0->x0[0].p1);
+  p0->x0[0].p1->p2 = (p2type *) malloc (sizeof *p0->x0[0].p1->p2);
+  memset (p0->x0[0].p1->p2, 0, sizeof *p0->x0[0].p1->p2);
+
+#pragma omp target map(tofrom: p0->x0[k1].p1->p2[k2].x1[k3].x2[4][0:n]) \
+		   map(to: p0->x0[k1].p1, p0->x0[k1].p1->p2) \
+		   map(to: p0->x0[k1].p1[0])
+  {
+    for (int i = 0; i < n; i++)
+      p0->x0[k1].p1->p2[k2].x1[k3].x2[4][i] = i;
+  }
+
+  for (int i = 0; i < n; i++)
+    assert (i == p0->x0[k1].p1->p2[k2].x1[k3].x2[4][i]);
+
+  return 0;
+}
diff --git a/libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c
new file mode 100644
index 000000000000..e335d7da9669
--- /dev/null
+++ b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c
@@ -0,0 +1,70 @@ 
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#define N 32
+
+typedef struct {
+  int arr[N];
+  int *ptr;
+} sc;
+
+typedef struct {
+  sc *c;
+} sb;
+
+typedef struct {
+  sb *b;
+  sc *c;
+} sa;
+
+int main (int argc, char *argv[])
+{
+  sa *p;
+
+  p = (sa *) malloc (sizeof *p);
+  p->b = (sb *) malloc (sizeof *p->b);
+  p->b->c = (sc *) malloc (sizeof *p->b->c);
+  p->c = (sc *) malloc (sizeof *p->c);
+  p->b->c->ptr = (int *) malloc (N * sizeof (int));
+  p->c->ptr = (int *) malloc (N * sizeof (int));
+
+  for (int i = 0; i < N; i++)
+    {
+      p->b->c->ptr[i] = 0;
+      p->c->ptr[i] = 0;
+      p->b->c->arr[i] = 0;
+      p->c->arr[i] = 0;
+    }
+
+#pragma omp target map(to: p->b, p->b[0], p->c, p->c[0], p->b->c, p->b->c[0]) \
+		   map(to: p->b->c->ptr, p->c->ptr) \
+		   map(tofrom: p->b->c->ptr[:N], p->c->ptr[:N])
+  {
+    for (int i = 0; i < N; i++)
+      {
+	p->b->c->ptr[i] = i;
+	p->c->ptr[i] = i * 2;
+      }
+  }
+
+#pragma omp target map(to: p->b, p->b[0], p->b->c, p->c) \
+		   map(tofrom: p->c[0], p->b->c[0])
+  {
+    for (int i = 0; i < N; i++)
+      {
+	p->b->c->arr[i] = i * 3;
+	p->c->arr[i] = i * 4;
+      }
+  }
+
+  for (int i = 0; i < N; i++)
+    {
+      assert (p->b->c->ptr[i] == i);
+      assert (p->c->ptr[i] == i * 2);
+      assert (p->b->c->arr[i] == i * 3);
+      assert (p->c->arr[i] == i * 4);
+    }
+
+  return 0;
+}
diff --git a/libgomp/testsuite/libgomp.c/target-22.c b/libgomp/testsuite/libgomp.c/target-22.c
index aad8a0a09df7..492744ad0efd 100644
--- a/libgomp/testsuite/libgomp.c/target-22.c
+++ b/libgomp/testsuite/libgomp.c/target-22.c
@@ -21,7 +21,8 @@  main ()
   s.v.b = a + 16;
   s.w = c + 3;
   int err = 0;
-  #pragma omp target map (to:s.v.b[0:z + 7], s.u[z + 1:z + 4]) \
+  #pragma omp target map (to: s.w, s.v.b, s.u, s.s) \
+		     map (to:s.v.b[0:z + 7], s.u[z + 1:z + 4]) \
 		     map (tofrom:s.s[3:3]) \
 		     map (from: s.w[z:4], err) private (i)
   {
diff --git a/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90 b/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90
index 58550c79d69c..7f3d8174f97b 100644
--- a/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90
+++ b/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90
@@ -409,3 +409,6 @@  contains
   end subroutine eight
 
 end program main
+
+! Fixed by the "Fortran pointers and member mappings" patch
+! { dg-xfail-run-if TODO { offload_device_nonshared_as } }