[AArch64] Merge Falkor memcpy and memmove

Message ID AM5PR0801MB2035C323144A9B7CC8BDA0B583190@AM5PR0801MB2035.eurprd08.prod.outlook.com
State Committed

Commit Message

Wilco Dijkstra Feb. 10, 2020, 5:18 p.m. UTC
  Hi,

I'm sending this on behalf of Krzysztof Koch:

Merge Falkor memcpy and memmove implementations.

Falkor's memcpy and memmove share some implementation details;
therefore, the two routines are moved into a single source file
for code reuse.

The two routines now share code for small and medium copies
(up to and including 128 bytes). Large copies in memcpy do not
handle overlap correctly; consequently, the loops for
moving/copying more than 128 bytes stay separate for memcpy
and memmove.
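
As a rough illustration, the merged control flow corresponds to the
C sketch below. This is a hand-written summary, not the actual
implementation: memmove_falkor_sketch and the 128-byte temporary are
illustrative stand-ins for assembly paths that keep everything in
SIMD registers.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

void *
memmove_falkor_sketch (void *dstin, const void *src, size_t count)
{
  unsigned char *d = dstin;
  const unsigned char *s = src;

  if (count <= 128)
    {
      /* Small/medium path, shared by memcpy and memmove: all loads
         are issued before any store, so overlap is handled for free.  */
      unsigned char tmp[128];
      memcpy (tmp, s, count);
      memcpy (d, tmp, count);
      return dstin;
    }

  /* The unsigned test (d - s) < count is true exactly when the buffers
     overlap with dst above src, i.e. when a forward copy would clobber
     not-yet-read source bytes.  */
  if ((uintptr_t) d - (uintptr_t) s < count)
    while (count--)                    /* memmove-only backward loop */
      d[count] = s[count];
  else
    for (size_t i = 0; i < count; i++) /* memcpy-style forward loop  */
      d[i] = s[i];
  return dstin;
}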

To increase code reuse, a number of small modifications were made:
1. The old implementation of memcpy copied the first 16 bytes as
soon as the size was determined to be greater than 32 bytes.
For the memcpy code to also work when copying small/medium overlapping
data, the first load and store were moved to the large copy case.
2. The medium memcpy case no longer assumes that 16 bytes have already
been copied and uses 8 registers to copy up to 128 bytes.
3. The small case for memmove was enlarged to match that of memcpy:
sizes less than or equal to 32 bytes.
4. The medium case for memmove was enlarged to match that of memcpy:
sizes less than or equal to 128 bytes.

Other changes include:
1. Improve the alignment of existing loop bodies.
2. 'Delouse' the memmove and memcpy input arguments: make sure that
the upper 32 bits of the input registers are zeroed if unused (see
the sketch after this list).
3. Do one more iteration in the memmove loops and reduce the number of
copies made from the start/end of the buffer, depending on
the direction of the memmove loop.
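
For reference, DELOUSE is the glibc AArch64 macro used for this.
From memory (this is not the exact sysdep.h text), it expands to a
'mov wN, wN' that zero-extends register N under ILP32, and to nothing
under LP64:

#ifdef __LP64__
/* LP64: the argument registers are already 64-bit clean.  */
# define DELOUSE(n)
#else
/* ILP32: pointers and size_t are 32-bit, but they arrive in 64-bit
   registers whose upper halves are not guaranteed to be zero;
   mov wN, wN zero-extends the low 32 bits.  */
# define DELOUSE(n) mov w##n, w##n
#endif

With this, the DELOUSE (0), DELOUSE (1), DELOUSE (2) at each entry
point zero-extend dstin, src and count before any address arithmetic.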

Benchmarking:

The results in bench-memcpy-random.out show that memmove_falkor is
now about 5% faster than memcpy_falkor_old, whereas memmove_falkor_old
was more than 15% slower. The memcpy implementation remains largely
unmodified, so there is no significant performance change.

The memmove performance gain comes from raising the upper bound of
the small copy case to 32 bytes and that of the medium copy case to
128 bytes.

---

 sysdeps/aarch64/multiarch/Makefile         |   4 +-
 sysdeps/aarch64/multiarch/memcpy_falkor.S  | 168 ++++++++++++++++++---
 sysdeps/aarch64/multiarch/memmove_falkor.S | 225 -----------------------------
 3 files changed, 149 insertions(+), 248 deletions(-)
 delete mode 100644 sysdeps/aarch64/multiarch/memmove_falkor.S
  

Comments

Adhemerval Zanella May 14, 2020, 2:30 p.m. UTC | #1
On 10/02/2020 14:18, Wilco Dijkstra wrote:
> Hi,
> 
> I'm sending this on behalf of Krzysztof Koch:
> 
> Merge Falkor memcpy and memmove implementations.
> 
> Falkor's memcpy and memmove share some implementation details;
> therefore, the two routines are moved into a single source file
> for code reuse.
> 
> The two routines now share code for small and medium copies
> (up to and including 128 bytes). Large copies in memcpy do not
> handle overlap correctly; consequently, the loops for
> moving/copying more than 128 bytes stay separate for memcpy
> and memmove.
> 
> To increase code reuse, a number of small modifications were made:
> 1. The old implementation of memcpy copied the first 16 bytes as
> soon as the size was determined to be greater than 32 bytes.
> For the memcpy code to also work when copying small/medium overlapping
> data, the first load and store were moved to the large copy case.
> 2. The medium memcpy case no longer assumes that 16 bytes have already
> been copied and uses 8 registers to copy up to 128 bytes.
> 3. The small case for memmove was enlarged to match that of memcpy:
> sizes less than or equal to 32 bytes.
> 4. The medium case for memmove was enlarged to match that of memcpy:
> sizes less than or equal to 128 bytes.
> 
> Other changes include:
> 1. Improve the alignment of existing loop bodies.
> 2. 'Delouse' the memmove and memcpy input arguments: make sure that
> the upper 32 bits of the input registers are zeroed if unused.
> 3. Do one more iteration in the memmove loops and reduce the number of
> copies made from the start/end of the buffer, depending on
> the direction of the memmove loop.
> 
> Benchmarking:
> 
> The results in bench-memcpy-random.out show that memmove_falkor is
> now about 5% faster than memcpy_falkor_old, whereas memmove_falkor_old
> was more than 15% slower. The memcpy implementation remains largely
> unmodified, so there is no significant performance change.
> 
> The memmove performance gain comes from raising the upper bound of
> the small copy case to 32 bytes and that of the medium copy case to
> 128 bytes.

I don't have access to a Falkor machine to verify performance does not
degrade, but the patch looks ok.  There are some nits below.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>

> 
> ---
> 
>  sysdeps/aarch64/multiarch/Makefile         |   4 +-
>  sysdeps/aarch64/multiarch/memcpy_falkor.S  | 168 ++++++++++++++++++---
>  sysdeps/aarch64/multiarch/memmove_falkor.S | 225 -----------------------------
>  3 files changed, 149 insertions(+), 248 deletions(-)
>  delete mode 100644 sysdeps/aarch64/multiarch/memmove_falkor.S
> 
> diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
> index 4150b89..7f7f987 100644
> --- a/sysdeps/aarch64/multiarch/Makefile
> +++ b/sysdeps/aarch64/multiarch/Makefile
> @@ -1,6 +1,6 @@
>  ifeq ($(subdir),string)
> -sysdep_routines += memcpy_generic memcpy_thunderx memcpy_thunderx2 \
> -		   memcpy_falkor memmove_falkor \
> +sysdep_routines += memcpy_generic memcpy_thunderx \
> +		   memcpy_thunderx2 memcpy_falkor \
>  		   memset_generic memset_falkor memset_emag \
>  		   memchr_generic memchr_nosimd \
>  		   strlen_generic strlen_asimd

Just remove memmove_falkor from the second line; there is no need to
move memcpy_thunderx2 below.

> diff --git a/sysdeps/aarch64/multiarch/memcpy_falkor.S b/sysdeps/aarch64/multiarch/memcpy_falkor.S
> index 1845c86..d873efc 100644
> --- a/sysdeps/aarch64/multiarch/memcpy_falkor.S
> +++ b/sysdeps/aarch64/multiarch/memcpy_falkor.S
> @@ -29,11 +29,11 @@
>  #define dst	x3
>  #define srcend	x4
>  #define dstend	x5
> -#define tmp1	x14
>  #define A_x	x6
>  #define B_x	x7
>  #define A_w	w6
>  #define B_w	w7
> +#define tmp1	x14
>  

I see no need to move this definition below.

>  #define A_q	q0
>  #define B_q	q1
> @@ -42,6 +42,11 @@
>  #define E_q	q4
>  #define F_q	q5
>  #define G_q	q6
> +#define H_q	q7
> +#define Q_q	q6
> +#define S_q	q22
> +
> +#if IS_IN (libc)

Another unnecessary change; the check is already in place
before ENTRY_ALIGN.

>  
>  /* Copies are split into 3 main cases:
>  
> @@ -49,7 +54,7 @@
>     2. Medium copies of 33..128 bytes which are fully unrolled
>     3. Large copies of more than 128 bytes.
>  
> -   Large copies align the sourceto a quad word and use an unrolled loop
> +   Large copies align the source to a quad word and use an unrolled loop
>     processing 64 bytes per iteration.
>  
>     FALKOR-SPECIFIC DESIGN:

Ok.

> @@ -67,36 +72,40 @@
>  
>     The non-temporal stores help optimize cache utilization.  */
>  
> -#if IS_IN (libc)
>  ENTRY_ALIGN (__memcpy_falkor, 6)
>  
> +	DELOUSE (0)
> +	DELOUSE (1)
> +	DELOUSE (2)
> +
>  	cmp	count, 32
>  	add	srcend, src, count
>  	add	dstend, dstin, count
>  	b.ls	L(copy32)
> -	ldr	A_q, [src]
>  	cmp	count, 128
> -	str	A_q, [dstin]
>  	b.hi	L(copy_long)
>  
>  	/* Medium copies: 33..128 bytes.  */
> +L(copy128):

Rename it to memcpy_copy128 to make it explicit in the memmove code
that it is branching into memcpy, or at least add a comment stating it.

>  	sub	tmp1, count, 1
> -	ldr	A_q, [src, 16]
> -	ldr	B_q, [srcend, -32]
> -	ldr	C_q, [srcend, -16]
> +	ldr	A_q, [src]
> +	ldr	B_q, [src, 16]
> +	ldr	C_q, [srcend, -32]
> +	ldr	D_q, [srcend, -16]
>  	tbz	tmp1, 6, 1f
> -	ldr	D_q, [src, 32]
> -	ldr	E_q, [src, 48]
> -	str	D_q, [dstin, 32]
> -	str	E_q, [dstin, 48]
> -	ldr	F_q, [srcend, -64]
> -	ldr	G_q, [srcend, -48]
> -	str	F_q, [dstend, -64]
> -	str	G_q, [dstend, -48]
> +	ldr	E_q, [src, 32]
> +	ldr	F_q, [src, 48]
> +	ldr	G_q, [srcend, -64]
> +	ldr	H_q, [srcend, -48]
> +	str	G_q, [dstend, -64]
> +	str	H_q, [dstend, -48]
> +	str	E_q, [dstin, 32]
> +	str	F_q, [dstin, 48]
>  1:
> -	str	A_q, [dstin, 16]
> -	str	B_q, [dstend, -32]
> -	str	C_q, [dstend, -16]
> +	str	A_q, [dstin]
> +	str	B_q, [dstin, 16]
> +	str	C_q, [dstend, -32]
> +	str	D_q, [dstend, -16]
>  	ret
>  
>  	.p2align 4

Ok.

> @@ -146,16 +155,19 @@ L(copy32):
>  1:
>  	ret
>  
> -	/* Align SRC to 16 bytes and copy; that way at least one of the
> +	/* Align src to 16 bytes and copy; that way at least one of the
>  	   accesses is aligned throughout the copy sequence.

I see no point in lowercasing SRC here.

>  
>  	   The count is off by 0 to 15 bytes, but this is OK because we trim
>  	   off the last 64 bytes to copy off from the end.  Due to this the
>  	   loop never runs out of bounds.  */
> -	.p2align 6
> +	.p2align 4
> +	nop
>  L(copy_long):

If the idea is to align L(last64), I think it is better to add
.p2align 4 before it instead of trying to align here.

> +	ldr	A_q, [src]
>  	sub	count, count, 64 + 16
>  	and	tmp1, src, 15
> +	str	A_q, [dstin]
>  	bic	src, src, 15
>  	sub	dst, dstin, tmp1
>  	add	count, count, tmp1
> @@ -188,4 +200,118 @@ L(last64):
>  
>  END (__memcpy_falkor)
>  libc_hidden_builtin_def (__memcpy_falkor)
> +
> +
> +/* RATIONALE:
> +
> +   The move has 4 distinct parts:
> +   * Small moves of 32 bytes and under.
> +   * Medium sized moves of 33-128 bytes (fully unrolled).
> +   * Large moves where the source address is higher than the destination
> +     (forward copies)
> +   * Large moves where the destination address is higher than the source
> +     (copy backward, or move).
> +
> +   We use only two registers q6 and q22 for the moves and move 32 bytes at a
> +   time to correctly train the hardware prefetcher for better throughput.
> +
> +   For small and medium cases memcpy is used.  */
> +
> +ENTRY_ALIGN (__memmove_falkor, 6)
> +
> +	DELOUSE (0)
> +	DELOUSE (1)
> +	DELOUSE (2)
> +
> +	cmp	count, 32
> +	add	srcend, src, count
> +	add	dstend, dstin, count
> +	b.ls	L(copy32)
> +	cmp	count, 128
> +	b.ls	L(copy128)
> +	sub	tmp1, dstin, src
> +	ccmp	tmp1, count, 2, hi
> +	b.lo	L(move_long)
> +
> +	/* CASE: Copy Forwards
> +
> +	   Align src to 16 byte alignment so that we don't cross cache line
> +	   boundaries on both loads and stores.  There are at least 128 bytes
> +	   to copy, so copy 16 bytes unaligned and then align.  The loop
> +	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
> +
> +	ldr	S_q, [src]
> +	and	tmp1, src, 15
> +	bic	src, src, 15
> +	sub	dst, dstin, tmp1
> +	add	count, count, tmp1	/* Count is now 16 too large.  */
> +	ldr	Q_q, [src, 16]!
> +	str	S_q, [dstin]
> +	ldr	S_q, [src, 16]!
> +	sub	count, count, 32 + 32 + 16	/* Test and readjust count.  */
> +	nop
> +	nop
> +
> +1:
> +	subs	count, count, 32
> +	str	Q_q, [dst, 16]
> +	ldr	Q_q, [src, 16]!
> +	str	S_q, [dst, 32]!
> +	ldr	S_q, [src, 16]!
> +	b.hi	1b
> +
> +	/* Copy 32 bytes from the end before writing the data prefetched in the
> +	   last loop iteration.  */
> +2:
> +	ldr	B_q, [srcend, -32]
> +	ldr	C_q, [srcend, -16]
> +	str	Q_q, [dst, 16]
> +	str	S_q, [dst, 32]
> +	str	B_q, [dstend, -32]
> +	str	C_q, [dstend, -16]
> +	ret
> +
> +	/* CASE: Copy Backwards
> +
> +	   Align srcend to 16 byte alignment so that we don't cross cache line
> +	   boundaries on both loads and stores.  There are at least 128 bytes
> +	   to copy, so copy 16 bytes unaligned and then align.  The loop
> +	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
> +
> +	.p2align 4
> +	nop
> +	nop
> +L(move_long):
> +	cbz	tmp1, 3f  /* Return early if src == dstin */
> +	ldr	S_q, [srcend, -16]
> +	and	tmp1, srcend, 15
> +	sub	srcend, srcend, tmp1
> +	ldr	Q_q, [srcend, -16]!
> +	str	S_q, [dstend, -16]
> +	sub	count, count, tmp1
> +	ldr	S_q, [srcend, -16]!
> +	sub	dstend, dstend, tmp1
> +	sub	count, count, 32 + 32
> +
> +1:
> +	subs	count, count, 32
> +	str	Q_q, [dstend, -16]
> +	ldr	Q_q, [srcend, -16]!
> +	str	S_q, [dstend, -32]!
> +	ldr	S_q, [srcend, -16]!
> +	b.hi	1b
> +
> +	/* Copy 32 bytes from the start before writing the data prefetched in the
> +	   last loop iteration.  */
> +
> +	ldr	B_q, [src, 16]
> +	ldr	C_q, [src]
> +	str	Q_q, [dstend, -16]
> +	str	S_q, [dstend, -32]
> +	str	B_q, [dstin, 16]
> +	str	C_q, [dstin]
> +3:	ret
> +
> +END (__memmove_falkor)
> +libc_hidden_builtin_def (__memmove_falkor)
>  #endif

Ok.

> diff --git a/sysdeps/aarch64/multiarch/memmove_falkor.S b/sysdeps/aarch64/multiarch/memmove_falkor.S
> deleted file mode 100644
> index 3da4056..0000000
> --- a/sysdeps/aarch64/multiarch/memmove_falkor.S
> +++ /dev/null
> @@ -1,225 +0,0 @@
> -/* Copyright (C) 2017-2019 Free Software Foundation, Inc.
> -
> -   This file is part of the GNU C Library.
> -
> -   The GNU C Library is free software; you can redistribute it and/or
> -   modify it under the terms of the GNU Lesser General Public
> -   License as published by the Free Software Foundation; either
> -   version 2.1 of the License, or (at your option) any later version.
> -
> -   The GNU C Library is distributed in the hope that it will be useful,
> -   but WITHOUT ANY WARRANTY; without even the implied warranty of
> -   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> -   Lesser General Public License for more details.
> -
> -   You should have received a copy of the GNU Lesser General Public
> -   License along with the GNU C Library.  If not, see
> -   <https://www.gnu.org/licenses/>.  */
> -
> -#include <sysdep.h>
> -
> -/* Assumptions: ARMv8-a, AArch64, falkor, unaligned accesses.  */
> -
> -#define dstin	x0
> -#define src	x1
> -#define count	x2
> -#define dst	x3
> -#define srcend	x4
> -#define dstend	x5
> -#define A_x	x6
> -#define B_x	x7
> -#define A_w	w6
> -#define B_w	w7
> -#define tmp1	x14
> -
> -#define Q_q	q6
> -#define A_q	q22
> -#define B_q	q18
> -#define C_q	q19
> -#define D_q	q20
> -#define E_q	q21
> -#define F_q	q17
> -#define G_q	q23
> -
> -/* RATIONALE:
> -
> -   The move has 4 distinct parts:
> -   * Small moves of 16 bytes and under
> -   * Medium sized moves of 17-96 bytes
> -   * Large moves where the source address is higher than the destination
> -     (forward copies)
> -   * Large moves where the destination address is higher than the source
> -     (copy backward, or move).
> -
> -   We use only two registers q6 and q22 for the moves and move 32 bytes at a
> -   time to correctly train the hardware prefetcher for better throughput.  */
> -ENTRY_ALIGN (__memmove_falkor, 6)
> -
> -	sub	tmp1, dstin, src
> -	add	srcend, src, count
> -	add	dstend, dstin, count
> -	cmp	count, 96
> -	ccmp	tmp1, count, 2, hi
> -	b.lo	L(move_long)
> -
> -	cmp	count, 16
> -	b.ls	L(copy16)
> -	cmp	count, 96
> -	b.hi	L(copy_long)
> -
> -	/* Medium copies: 17..96 bytes.  */
> -	sub	tmp1, count, 1
> -	ldr	A_q, [src]
> -	tbnz	tmp1, 6, L(copy96)
> -	ldr	D_q, [srcend, -16]
> -	tbz	tmp1, 5, 1f
> -	ldr	B_q, [src, 16]
> -	ldr	C_q, [srcend, -32]
> -	str	B_q, [dstin, 16]
> -	str	C_q, [dstend, -32]
> -1:
> -	str	A_q, [dstin]
> -	str	D_q, [dstend, -16]
> -	ret
> -
> -	.p2align 4
> -	/* Small copies: 0..16 bytes.  */
> -L(copy16):
> -	cmp	count, 8
> -	b.lo	1f
> -	ldr	A_x, [src]
> -	ldr	B_x, [srcend, -8]
> -	str	A_x, [dstin]
> -	str	B_x, [dstend, -8]
> -	ret
> -	.p2align 4
> -1:
> -	/* 4-7 */
> -	tbz	count, 2, 1f
> -	ldr	A_w, [src]
> -	ldr	B_w, [srcend, -4]
> -	str	A_w, [dstin]
> -	str	B_w, [dstend, -4]
> -	ret
> -	.p2align 4
> -1:
> -	/* 2-3 */
> -	tbz	count, 1, 1f
> -	ldrh	A_w, [src]
> -	ldrh	B_w, [srcend, -2]
> -	strh	A_w, [dstin]
> -	strh	B_w, [dstend, -2]
> -	ret
> -	.p2align 4
> -1:
> -	/* 0-1 */
> -	tbz	count, 0, 1f
> -	ldrb	A_w, [src]
> -	strb	A_w, [dstin]
> -1:	ret
> -
> -	.p2align 4
> -	/* Copy 64..96 bytes.  Copy 64 bytes from the start and
> -	   32 bytes from the end.  */
> -L(copy96):
> -	ldr	B_q, [src, 16]
> -	ldr	C_q, [src, 32]
> -	ldr	D_q, [src, 48]
> -	ldr	E_q, [srcend, -32]
> -	ldr	F_q, [srcend, -16]
> -	str	A_q, [dstin]
> -	str	B_q, [dstin, 16]
> -	str	C_q, [dstin, 32]
> -	str	D_q, [dstin, 48]
> -	str	E_q, [dstend, -32]
> -	str	F_q, [dstend, -16]
> -	ret
> -
> -	/* Align SRC to 16 byte alignment so that we don't cross cache line
> -	   boundaries on both loads and stores.  There are at least 96 bytes
> -	   to copy, so copy 16 bytes unaligned and then align.  The loop
> -	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
> -
> -	.p2align 4
> -L(copy_long):
> -	ldr	A_q, [src]
> -	and	tmp1, src, 15
> -	bic	src, src, 15
> -	sub	dst, dstin, tmp1
> -	add	count, count, tmp1	/* Count is now 16 too large.  */
> -	ldr	Q_q, [src, 16]!
> -	str	A_q, [dstin]
> -	ldr	A_q, [src, 16]!
> -	subs	count, count, 32 + 64 + 16	/* Test and readjust count.  */
> -	b.ls	L(last64)
> -
> -L(loop64):
> -	subs	count, count, 32
> -	str	Q_q, [dst, 16]
> -	ldr	Q_q, [src, 16]!
> -	str	A_q, [dst, 32]!
> -	ldr	A_q, [src, 16]!
> -	b.hi	L(loop64)
> -
> -	/* Write the last full set of 64 bytes.  The remainder is at most 64
> -	   bytes and at least 33 bytes, so it is safe to always copy 64 bytes
> -	   from the end.  */
> -L(last64):
> -	ldr	C_q, [srcend, -64]
> -	str	Q_q, [dst, 16]
> -	ldr	B_q, [srcend, -48]
> -	str	A_q, [dst, 32]
> -	ldr	A_q, [srcend, -32]
> -	ldr	D_q, [srcend, -16]
> -	str	C_q, [dstend, -64]
> -	str	B_q, [dstend, -48]
> -	str	A_q, [dstend, -32]
> -	str	D_q, [dstend, -16]
> -	ret
> -
> -	.p2align 4
> -L(move_long):
> -	cbz	tmp1, 3f
> -
> -	/* Align SRCEND to 16 byte alignment so that we don't cross cache line
> -	   boundaries on both loads and stores.  There are at least 96 bytes
> -	   to copy, so copy 16 bytes unaligned and then align.  The loop
> -	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
> -
> -	ldr	A_q, [srcend, -16]
> -	and	tmp1, srcend, 15
> -	sub	srcend, srcend, tmp1
> -	ldr	Q_q, [srcend, -16]!
> -	str	A_q, [dstend, -16]
> -	sub	count, count, tmp1
> -	ldr	A_q, [srcend, -16]!
> -	sub	dstend, dstend, tmp1
> -	subs	count, count, 32 + 64
> -	b.ls	2f
> -
> -1:
> -	subs	count, count, 32
> -	str	Q_q, [dstend, -16]
> -	ldr	Q_q, [srcend, -16]!
> -	str	A_q, [dstend, -32]!
> -	ldr	A_q, [srcend, -16]!
> -	b.hi	1b
> -
> -	/* Write the last full set of 64 bytes.  The remainder is at most 64
> -	   bytes and at least 33 bytes, so it is safe to always copy 64 bytes
> -	   from the start.  */
> -2:
> -	ldr	C_q, [src, 48]
> -	str	Q_q, [dstend, -16]
> -	ldr	B_q, [src, 32]
> -	str	A_q, [dstend, -32]
> -	ldr	A_q, [src, 16]
> -	ldr	D_q, [src]
> -	str	C_q, [dstin, 48]
> -	str	B_q, [dstin, 32]
> -	str	A_q, [dstin, 16]
> -	str	D_q, [dstin]
> -3:	ret
> -
> -END (__memmove_falkor)
> -libc_hidden_builtin_def (__memmove_falkor)
> 

Ok.
  
Wilco Dijkstra June 8, 2020, 1:25 p.m. UTC | #2
Hi Adhemerval,

Thanks for the review.

> -sysdep_routines += memcpy_generic memcpy_thunderx memcpy_thunderx2 \
> -                memcpy_falkor memmove_falkor \

> Just remove memmove_falkor from second line, there is no need to
> move memcpy_thunderx2 below.

Fixed.

> -#define tmp1 x14
> +#define tmp1 x14

> I see no need to move this definition below.

Fixed.

> +#if IS_IN (libc)

> Another non required change, the check is already in place
> before ENTRY_ALIGN.

Fixed.

> +L(copy128):

> Rename it to memcpy_copy128 to make it explicity on memmove code it
> is branching to memcpy or at least add comment stating it.

There is already a comment just above the code that jumps back to memcpy:

+
+   For small and medium cases memcpy is used.  */
+

I'm not sure how much more would need to be said given this is exactly what
we do in the other memcpy implementations, and it is the main goal of this
patch.

> -     /* Align SRC to 16 bytes and copy; that way at least one of the
> +     /* Align src to 16 bytes and copy; that way at least one of the
>           accesses is aligned throughout the copy sequence.

> I see no point in lowercase the SRC here.

Fixed.

> +     .p2align 4
> +     nop
>  L(copy_long):

> If the idea is to align the L(last64), I think is better to add
> .p2align 4 before instead of trying to align here.

Last64 is an unused label, so I removed it. I added a comment to
explain that it aligns loop64.

Committed as d1f75e964484504e4f30f4623569d5889a97ac18

Cheers,
Wilco
  

Patch

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 4150b89..7f7f987 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -1,6 +1,6 @@ 
 ifeq ($(subdir),string)
-sysdep_routines += memcpy_generic memcpy_thunderx memcpy_thunderx2 \
-		   memcpy_falkor memmove_falkor \
+sysdep_routines += memcpy_generic memcpy_thunderx \
+		   memcpy_thunderx2 memcpy_falkor \
 		   memset_generic memset_falkor memset_emag \
 		   memchr_generic memchr_nosimd \
 		   strlen_generic strlen_asimd
diff --git a/sysdeps/aarch64/multiarch/memcpy_falkor.S b/sysdeps/aarch64/multiarch/memcpy_falkor.S
index 1845c86..d873efc 100644
--- a/sysdeps/aarch64/multiarch/memcpy_falkor.S
+++ b/sysdeps/aarch64/multiarch/memcpy_falkor.S
@@ -29,11 +29,11 @@ 
 #define dst	x3
 #define srcend	x4
 #define dstend	x5
-#define tmp1	x14
 #define A_x	x6
 #define B_x	x7
 #define A_w	w6
 #define B_w	w7
+#define tmp1	x14
 
 #define A_q	q0
 #define B_q	q1
@@ -42,6 +42,11 @@ 
 #define E_q	q4
 #define F_q	q5
 #define G_q	q6
+#define H_q	q7
+#define Q_q	q6
+#define S_q	q22
+
+#if IS_IN (libc)
 
 /* Copies are split into 3 main cases:
 
@@ -49,7 +54,7 @@ 
    2. Medium copies of 33..128 bytes which are fully unrolled
    3. Large copies of more than 128 bytes.
 
-   Large copies align the sourceto a quad word and use an unrolled loop
+   Large copies align the source to a quad word and use an unrolled loop
    processing 64 bytes per iteration.
 
    FALKOR-SPECIFIC DESIGN:
@@ -67,36 +72,40 @@ 
 
    The non-temporal stores help optimize cache utilization.  */
 
-#if IS_IN (libc)
 ENTRY_ALIGN (__memcpy_falkor, 6)
 
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
+
 	cmp	count, 32
 	add	srcend, src, count
 	add	dstend, dstin, count
 	b.ls	L(copy32)
-	ldr	A_q, [src]
 	cmp	count, 128
-	str	A_q, [dstin]
 	b.hi	L(copy_long)
 
 	/* Medium copies: 33..128 bytes.  */
+L(copy128):
 	sub	tmp1, count, 1
-	ldr	A_q, [src, 16]
-	ldr	B_q, [srcend, -32]
-	ldr	C_q, [srcend, -16]
+	ldr	A_q, [src]
+	ldr	B_q, [src, 16]
+	ldr	C_q, [srcend, -32]
+	ldr	D_q, [srcend, -16]
 	tbz	tmp1, 6, 1f
-	ldr	D_q, [src, 32]
-	ldr	E_q, [src, 48]
-	str	D_q, [dstin, 32]
-	str	E_q, [dstin, 48]
-	ldr	F_q, [srcend, -64]
-	ldr	G_q, [srcend, -48]
-	str	F_q, [dstend, -64]
-	str	G_q, [dstend, -48]
+	ldr	E_q, [src, 32]
+	ldr	F_q, [src, 48]
+	ldr	G_q, [srcend, -64]
+	ldr	H_q, [srcend, -48]
+	str	G_q, [dstend, -64]
+	str	H_q, [dstend, -48]
+	str	E_q, [dstin, 32]
+	str	F_q, [dstin, 48]
 1:
-	str	A_q, [dstin, 16]
-	str	B_q, [dstend, -32]
-	str	C_q, [dstend, -16]
+	str	A_q, [dstin]
+	str	B_q, [dstin, 16]
+	str	C_q, [dstend, -32]
+	str	D_q, [dstend, -16]
 	ret
 
 	.p2align 4
@@ -146,16 +155,19 @@  L(copy32):
 1:
 	ret
 
-	/* Align SRC to 16 bytes and copy; that way at least one of the
+	/* Align src to 16 bytes and copy; that way at least one of the
 	   accesses is aligned throughout the copy sequence.
 
 	   The count is off by 0 to 15 bytes, but this is OK because we trim
 	   off the last 64 bytes to copy off from the end.  Due to this the
 	   loop never runs out of bounds.  */
-	.p2align 6
+	.p2align 4
+	nop
 L(copy_long):
+	ldr	A_q, [src]
 	sub	count, count, 64 + 16
 	and	tmp1, src, 15
+	str	A_q, [dstin]
 	bic	src, src, 15
 	sub	dst, dstin, tmp1
 	add	count, count, tmp1
@@ -188,4 +200,118 @@  L(last64):
 
 END (__memcpy_falkor)
 libc_hidden_builtin_def (__memcpy_falkor)
+
+
+/* RATIONALE:
+
+   The move has 4 distinct parts:
+   * Small moves of 32 bytes and under.
+   * Medium sized moves of 33-128 bytes (fully unrolled).
+   * Large moves where the source address is higher than the destination
+     (forward copies)
+   * Large moves where the destination address is higher than the source
+     (copy backward, or move).
+
+   We use only two registers q6 and q22 for the moves and move 32 bytes at a
+   time to correctly train the hardware prefetcher for better throughput.
+
+   For small and medium cases memcpy is used.  */
+
+ENTRY_ALIGN (__memmove_falkor, 6)
+
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
+
+	cmp	count, 32
+	add	srcend, src, count
+	add	dstend, dstin, count
+	b.ls	L(copy32)
+	cmp	count, 128
+	b.ls	L(copy128)
+	sub	tmp1, dstin, src
+	ccmp	tmp1, count, 2, hi
+	b.lo	L(move_long)
+
+	/* CASE: Copy Forwards
+
+	   Align src to 16 byte alignment so that we don't cross cache line
+	   boundaries on both loads and stores.  There are at least 128 bytes
+	   to copy, so copy 16 bytes unaligned and then align.  The loop
+	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
+
+	ldr	S_q, [src]
+	and	tmp1, src, 15
+	bic	src, src, 15
+	sub	dst, dstin, tmp1
+	add	count, count, tmp1	/* Count is now 16 too large.  */
+	ldr	Q_q, [src, 16]!
+	str	S_q, [dstin]
+	ldr	S_q, [src, 16]!
+	sub	count, count, 32 + 32 + 16	/* Test and readjust count.  */
+	nop
+	nop
+
+1:
+	subs	count, count, 32
+	str	Q_q, [dst, 16]
+	ldr	Q_q, [src, 16]!
+	str	S_q, [dst, 32]!
+	ldr	S_q, [src, 16]!
+	b.hi	1b
+
+	/* Copy 32 bytes from the end before writing the data prefetched in the
+	   last loop iteration.  */
+2:
+	ldr	B_q, [srcend, -32]
+	ldr	C_q, [srcend, -16]
+	str	Q_q, [dst, 16]
+	str	S_q, [dst, 32]
+	str	B_q, [dstend, -32]
+	str	C_q, [dstend, -16]
+	ret
+
+	/* CASE: Copy Backwards
+
+	   Align srcend to 16 byte alignment so that we don't cross cache line
+	   boundaries on both loads and stores.  There are at least 128 bytes
+	   to copy, so copy 16 bytes unaligned and then align.  The loop
+	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
+
+	.p2align 4
+	nop
+	nop
+L(move_long):
+	cbz	tmp1, 3f  /* Return early if src == dstin */
+	ldr	S_q, [srcend, -16]
+	and	tmp1, srcend, 15
+	sub	srcend, srcend, tmp1
+	ldr	Q_q, [srcend, -16]!
+	str	S_q, [dstend, -16]
+	sub	count, count, tmp1
+	ldr	S_q, [srcend, -16]!
+	sub	dstend, dstend, tmp1
+	sub	count, count, 32 + 32
+
+1:
+	subs	count, count, 32
+	str	Q_q, [dstend, -16]
+	ldr	Q_q, [srcend, -16]!
+	str	S_q, [dstend, -32]!
+	ldr	S_q, [srcend, -16]!
+	b.hi	1b
+
+	/* Copy 32 bytes from the start before writing the data prefetched in the
+	   last loop iteration.  */
+
+	ldr	B_q, [src, 16]
+	ldr	C_q, [src]
+	str	Q_q, [dstend, -16]
+	str	S_q, [dstend, -32]
+	str	B_q, [dstin, 16]
+	str	C_q, [dstin]
+3:	ret
+
+END (__memmove_falkor)
+libc_hidden_builtin_def (__memmove_falkor)
 #endif
diff --git a/sysdeps/aarch64/multiarch/memmove_falkor.S b/sysdeps/aarch64/multiarch/memmove_falkor.S
deleted file mode 100644
index 3da4056..0000000
--- a/sysdeps/aarch64/multiarch/memmove_falkor.S
+++ /dev/null
@@ -1,225 +0,0 @@ 
-/* Copyright (C) 2017-2019 Free Software Foundation, Inc.
-
-   This file is part of the GNU C Library.
-
-   The GNU C Library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2.1 of the License, or (at your option) any later version.
-
-   The GNU C Library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with the GNU C Library.  If not, see
-   <https://www.gnu.org/licenses/>.  */
-
-#include <sysdep.h>
-
-/* Assumptions: ARMv8-a, AArch64, falkor, unaligned accesses.  */
-
-#define dstin	x0
-#define src	x1
-#define count	x2
-#define dst	x3
-#define srcend	x4
-#define dstend	x5
-#define A_x	x6
-#define B_x	x7
-#define A_w	w6
-#define B_w	w7
-#define tmp1	x14
-
-#define Q_q	q6
-#define A_q	q22
-#define B_q	q18
-#define C_q	q19
-#define D_q	q20
-#define E_q	q21
-#define F_q	q17
-#define G_q	q23
-
-/* RATIONALE:
-
-   The move has 4 distinct parts:
-   * Small moves of 16 bytes and under
-   * Medium sized moves of 17-96 bytes
-   * Large moves where the source address is higher than the destination
-     (forward copies)
-   * Large moves where the destination address is higher than the source
-     (copy backward, or move).
-
-   We use only two registers q6 and q22 for the moves and move 32 bytes at a
-   time to correctly train the hardware prefetcher for better throughput.  */
-ENTRY_ALIGN (__memmove_falkor, 6)
-
-	sub	tmp1, dstin, src
-	add	srcend, src, count
-	add	dstend, dstin, count
-	cmp	count, 96
-	ccmp	tmp1, count, 2, hi
-	b.lo	L(move_long)
-
-	cmp	count, 16
-	b.ls	L(copy16)
-	cmp	count, 96
-	b.hi	L(copy_long)
-
-	/* Medium copies: 17..96 bytes.  */
-	sub	tmp1, count, 1
-	ldr	A_q, [src]
-	tbnz	tmp1, 6, L(copy96)
-	ldr	D_q, [srcend, -16]
-	tbz	tmp1, 5, 1f
-	ldr	B_q, [src, 16]
-	ldr	C_q, [srcend, -32]
-	str	B_q, [dstin, 16]
-	str	C_q, [dstend, -32]
-1:
-	str	A_q, [dstin]
-	str	D_q, [dstend, -16]
-	ret
-
-	.p2align 4
-	/* Small copies: 0..16 bytes.  */
-L(copy16):
-	cmp	count, 8
-	b.lo	1f
-	ldr	A_x, [src]
-	ldr	B_x, [srcend, -8]
-	str	A_x, [dstin]
-	str	B_x, [dstend, -8]
-	ret
-	.p2align 4
-1:
-	/* 4-7 */
-	tbz	count, 2, 1f
-	ldr	A_w, [src]
-	ldr	B_w, [srcend, -4]
-	str	A_w, [dstin]
-	str	B_w, [dstend, -4]
-	ret
-	.p2align 4
-1:
-	/* 2-3 */
-	tbz	count, 1, 1f
-	ldrh	A_w, [src]
-	ldrh	B_w, [srcend, -2]
-	strh	A_w, [dstin]
-	strh	B_w, [dstend, -2]
-	ret
-	.p2align 4
-1:
-	/* 0-1 */
-	tbz	count, 0, 1f
-	ldrb	A_w, [src]
-	strb	A_w, [dstin]
-1:	ret
-
-	.p2align 4
-	/* Copy 64..96 bytes.  Copy 64 bytes from the start and
-	   32 bytes from the end.  */
-L(copy96):
-	ldr	B_q, [src, 16]
-	ldr	C_q, [src, 32]
-	ldr	D_q, [src, 48]
-	ldr	E_q, [srcend, -32]
-	ldr	F_q, [srcend, -16]
-	str	A_q, [dstin]
-	str	B_q, [dstin, 16]
-	str	C_q, [dstin, 32]
-	str	D_q, [dstin, 48]
-	str	E_q, [dstend, -32]
-	str	F_q, [dstend, -16]
-	ret
-
-	/* Align SRC to 16 byte alignment so that we don't cross cache line
-	   boundaries on both loads and stores.  There are at least 96 bytes
-	   to copy, so copy 16 bytes unaligned and then align.  The loop
-	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
-
-	.p2align 4
-L(copy_long):
-	ldr	A_q, [src]
-	and	tmp1, src, 15
-	bic	src, src, 15
-	sub	dst, dstin, tmp1
-	add	count, count, tmp1	/* Count is now 16 too large.  */
-	ldr	Q_q, [src, 16]!
-	str	A_q, [dstin]
-	ldr	A_q, [src, 16]!
-	subs	count, count, 32 + 64 + 16	/* Test and readjust count.  */
-	b.ls	L(last64)
-
-L(loop64):
-	subs	count, count, 32
-	str	Q_q, [dst, 16]
-	ldr	Q_q, [src, 16]!
-	str	A_q, [dst, 32]!
-	ldr	A_q, [src, 16]!
-	b.hi	L(loop64)
-
-	/* Write the last full set of 64 bytes.  The remainder is at most 64
-	   bytes and at least 33 bytes, so it is safe to always copy 64 bytes
-	   from the end.  */
-L(last64):
-	ldr	C_q, [srcend, -64]
-	str	Q_q, [dst, 16]
-	ldr	B_q, [srcend, -48]
-	str	A_q, [dst, 32]
-	ldr	A_q, [srcend, -32]
-	ldr	D_q, [srcend, -16]
-	str	C_q, [dstend, -64]
-	str	B_q, [dstend, -48]
-	str	A_q, [dstend, -32]
-	str	D_q, [dstend, -16]
-	ret
-
-	.p2align 4
-L(move_long):
-	cbz	tmp1, 3f
-
-	/* Align SRCEND to 16 byte alignment so that we don't cross cache line
-	   boundaries on both loads and stores.  There are at least 96 bytes
-	   to copy, so copy 16 bytes unaligned and then align.  The loop
-	   copies 32 bytes per iteration and prefetches one iteration ahead.  */
-
-	ldr	A_q, [srcend, -16]
-	and	tmp1, srcend, 15
-	sub	srcend, srcend, tmp1
-	ldr	Q_q, [srcend, -16]!
-	str	A_q, [dstend, -16]
-	sub	count, count, tmp1
-	ldr	A_q, [srcend, -16]!
-	sub	dstend, dstend, tmp1
-	subs	count, count, 32 + 64
-	b.ls	2f
-
-1:
-	subs	count, count, 32
-	str	Q_q, [dstend, -16]
-	ldr	Q_q, [srcend, -16]!
-	str	A_q, [dstend, -32]!
-	ldr	A_q, [srcend, -16]!
-	b.hi	1b
-
-	/* Write the last full set of 64 bytes.  The remainder is at most 64
-	   bytes and at least 33 bytes, so it is safe to always copy 64 bytes
-	   from the start.  */
-2:
-	ldr	C_q, [src, 48]
-	str	Q_q, [dstend, -16]
-	ldr	B_q, [src, 32]
-	str	A_q, [dstend, -32]
-	ldr	A_q, [src, 16]
-	ldr	D_q, [src]
-	str	C_q, [dstin, 48]
-	str	B_q, [dstin, 32]
-	str	A_q, [dstin, 16]
-	str	D_q, [dstin]
-3:	ret
-
-END (__memmove_falkor)
-libc_hidden_builtin_def (__memmove_falkor)