[AArch64] Inline mempcpy again

Message ID DB5PR08MB1030A28D8A81F4A3FDDFF1D9834E0@DB5PR08MB1030.eurprd08.prod.outlook.com
State New, archived

Commit Message

Wilco Dijkstra June 29, 2018, 2:48 p.m. UTC
  ping



From: Wilco Dijkstra
Sent: 29 June 2017 17:20
To: libc-alpha@sourceware.org
Cc: nd
Subject: [PATCH][AArch64] Inline mempcpy again
  

Recent changes removed the generic mempcpy inline.  Given that GCC still
doesn't optimize mempcpy (PR70140), I am adding it again.  Since
string/string.h no longer includes an architecture-specific header, do this
inside include/string.h and for now only on AArch64.

OK for commit?

ChangeLog: 
2017-06-29  Wilco Dijkstra  <wdijkstr@arm.com>

        * include/string.h (mempcpy): Redirect to __mempcpy_inline.
        (__mempcpy): Likewise.
        (__mempcpy_inline): New inline function.
        * sysdeps/aarch64/string_private.h (_INLINE_mempcpy): Define.

--
  

Patch

diff --git a/include/string.h b/include/string.h
index 069efd0b87010e5fdb64c87ced7af1dc4f54f232..46b90b8f346149f075fad026e562dfb27b658969 100644
--- a/include/string.h
+++ b/include/string.h
@@ -197,4 +197,23 @@  extern char *__strncat_chk (char *__restrict __dest,
                             size_t __len, size_t __destlen) __THROW;
 #endif
 
+#if defined __USE_GNU && defined __OPTIMIZE__ \
+    && defined __extern_always_inline && __GNUC_PREREQ (3,2) \
+    && defined _INLINE_mempcpy
+
+#undef mempcpy
+#undef __mempcpy
+
+#define mempcpy(dest, src, n) __mempcpy_inline (dest, src, n)
+#define __mempcpy(dest, src, n) __mempcpy_inline (dest, src, n)
+
+__extern_always_inline void *
+__mempcpy_inline (void *__restrict __dest,
+                 const void *__restrict __src, size_t __n)
+{
+  return (char *) memcpy (__dest, __src, __n) + __n;
+}
+
+#endif
+
 #endif
diff --git a/sysdeps/aarch64/string_private.h b/sysdeps/aarch64/string_private.h
index 09dedbf3db40cf06077a44af992b399a6b37b48d..8b8fdddcc17a3f69455e72efe9c3616d2d33abe2 100644
--- a/sysdeps/aarch64/string_private.h
+++ b/sysdeps/aarch64/string_private.h
@@ -18,3 +18,6 @@ 
 
 /* AArch64 implementations support efficient unaligned access.  */
 #define _STRING_ARCH_unaligned 1
+
+/* Inline mempcpy since GCC doesn't optimize it (PR70140).  */
+#define _INLINE_mempcpy 1