V2 [PATCH] x86-64: Avoid rep movsb with short distance [BZ #27130]

Message ID 20210104151706.2129490-1-hjl.tools@gmail.com
State Committed
Commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
Delegated to: Adhemerval Zanella Netto
Series V2 [PATCH] x86-64: Avoid rep movsb with short distance [BZ #27130]

Commit Message

H.J. Lu Jan. 4, 2021, 3:17 p.m. UTC
  When copying with "rep movsb", if the distance between source and
destination is N*4GB + [1..63] with N >= 0, performance may be very
slow.  This patch updates memmove-vec-unaligned-erms.S for AVX and
AVX512 versions with the distance in RCX:

	cmpl	$63, %ecx
	// Don't use "rep movsb" if ECX <= 63
	jbe	L(Don't use "rep movsb")
	Use "rep movsb"

Benchtest data from bench-memcpy, bench-memcpy-large, bench-memcpy-random
and bench-memcpy-walk on Skylake, Ice Lake and Tiger Lake show that the
performance impact is within the noise range, since "rep movsb" is only
used for data sizes >= 4KB.
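
For illustration, a rough C model of the condition the new check targets;
the helper is hypothetical and does not appear in the patch:

#include <stdint.h>

/* Illustrative only, not part of the patch: "cmpl $63, %ecx" looks at
   just the low 32 bits of the distance in RCX, so any distance of the
   form N*4GB + [1..63] (and, harmlessly, exact multiples of 4GB) takes
   the jbe branch and avoids "rep movsb".  */
static int
avoid_rep_movsb (uint64_t distance)
{
  return (uint32_t) distance <= 63;   /* jbe L(more_2x_vec) */
}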

Changes from V1:

1. Check distance of N*4GB + [1..63] with N >= 0 instead of [1..63].

---
 .../multiarch/memmove-vec-unaligned-erms.S    | 21 +++++++++++++++++++
 1 file changed, 21 insertions(+)
  

Comments

Florian Weimer Jan. 4, 2021, 3:22 p.m. UTC | #1
* H. J. Lu via Libc-alpha:

>  1:
> +# if AVOID_SHORT_DISTANCE_REP_MOVSB
> +	movq	%rsi, %rcx
> +	subq	%rdi, %rcx
> +2:
> +/* Avoid "rep movsb" if RCX, the distance between source and destination,
> +   is N*4GB + [1..63] with N >= 0.  */
> +	cmpl	$63, %ecx
> +	jbe	L(more_2x_vec)	/* Avoid "rep movsb" if ECX <= 63.  */
> +# endif
>  	mov	%RDX_LP, %RCX_LP
>  	rep movsb
>  L(nop):

Why not use _LP names here?  I think the %ecx comparison at least can
give false results on x86-64 (64-bit).

Thanks,
Florian
  
H.J. Lu Jan. 4, 2021, 3:27 p.m. UTC | #2
On Mon, Jan 4, 2021 at 7:22 AM Florian Weimer <fweimer@redhat.com> wrote:
>
> * H. J. Lu via Libc-alpha:
>
> >  1:
> > +# if AVOID_SHORT_DISTANCE_REP_MOVSB
> > +     movq    %rsi, %rcx
> > +     subq    %rdi, %rcx
> > +2:
> > +/* Avoid "rep movsb" if RCX, the distance between source and destination,
> > +   is N*4GB + [1..63] with N >= 0.  */
> > +     cmpl    $63, %ecx
> > +     jbe     L(more_2x_vec)  /* Avoid "rep movsb" if ECX <= 63.  */
> > +# endif
> >       mov     %RDX_LP, %RCX_LP
> >       rep movsb
> >  L(nop):
>
> Why not use _LP names here?  I think the %ecx comparison at least can
> give false results on x86-64 (64-bit).
>

This is done on purpose, since we want to avoid "rep movsb" for distances of
N*4GB + [1..63] with N >= 0, which include 0x100000003.
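
A quick check of that point (illustrative code, not from this thread):

#include <assert.h>
#include <stdint.h>

int
main (void)
{
  /* 0x100000003 is 1*4GB + 3; the 32-bit compare sees only the low
     32 bits, so this distance is treated like a distance of 3 and
     "rep movsb" is avoided.  */
  uint64_t distance = 0x100000003ULL;
  assert ((uint32_t) distance == 3);
  assert ((uint32_t) distance <= 63);
  return 0;
}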
  
Florian Weimer Jan. 4, 2021, 3:47 p.m. UTC | #3
* H. J. Lu:

> On Mon, Jan 4, 2021 at 7:22 AM Florian Weimer <fweimer@redhat.com> wrote:
>>
>> * H. J. Lu via Libc-alpha:
>>
>> >  1:
>> > +# if AVOID_SHORT_DISTANCE_REP_MOVSB
>> > +     movq    %rsi, %rcx
>> > +     subq    %rdi, %rcx
>> > +2:
>> > +/* Avoid "rep movsb" if RCX, the distance between source and destination,
>> > +   is N*4GB + [1..63] with N >= 0.  */
>> > +     cmpl    $63, %ecx
>> > +     jbe     L(more_2x_vec)  /* Avoid "rep movsb" if ECX <= 63.  */
>> > +# endif
>> >       mov     %RDX_LP, %RCX_LP
>> >       rep movsb
>> >  L(nop):
>>
>> Why not use _LP names here?  I think the %ecx comparison at least can
>> give false results on x86-64 (64-bit).
>>
>
> This is done on purpose since we want to avoid "rep movsb" for distances of
> N*4GB + [1..63] with N >= 0 which include 0x100000003.

Ah, and the comment is quite clear (the commit subject less so).

I tried to make sense of the assembler code, and I think the change is
okay because L(movsb) is only reached when there is more to copy than
twice the vector size.

Thanks,
Florian
  
H.J. Lu Jan. 4, 2021, 3:54 p.m. UTC | #4
On Mon, Jan 4, 2021 at 7:47 AM Florian Weimer <fweimer@redhat.com> wrote:
>
> * H. J. Lu:
>
> > On Mon, Jan 4, 2021 at 7:22 AM Florian Weimer <fweimer@redhat.com> wrote:
> >>
> >> * H. J. Lu via Libc-alpha:
> >>
> >> >  1:
> >> > +# if AVOID_SHORT_DISTANCE_REP_MOVSB
> >> > +     movq    %rsi, %rcx
> >> > +     subq    %rdi, %rcx
> >> > +2:
> >> > +/* Avoid "rep movsb" if RCX, the distance between source and destination,
> >> > +   is N*4GB + [1..63] with N >= 0.  */
> >> > +     cmpl    $63, %ecx
> >> > +     jbe     L(more_2x_vec)  /* Avoid "rep movsb" if ECX <= 63.  */
> >> > +# endif
> >> >       mov     %RDX_LP, %RCX_LP
> >> >       rep movsb
> >> >  L(nop):
> >>
> >> Why not use _LP names here?  I think the %ecx comparison at least can
> >> give false results on x86-64 (64-bit).
> >>
> >
> > This is done on purpose since we want to avoid "rep movsb" for distances of
> > N*4GB + [1..63] with N >= 0 which include 0x100000003.
>
> Ah, and the comment is quite clear (the commit subject less so).

It isn't easy to describe it in so few words.

> I tried to make sense of the assembler code, and I think the change is
> okay because L(movsb) is only reached when there is more to copy than
> twice the vector size.
>

That is correct.  I am checking it in.  I will backport it to release branches
next week.

Thanks.
  

Patch

diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 7d54095f04..0980c95378 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -56,6 +56,13 @@ 
 # endif
 #endif
 
+/* Avoid short distance rep movsb only with non-SSE vector.  */
+#ifndef AVOID_SHORT_DISTANCE_REP_MOVSB
+# define AVOID_SHORT_DISTANCE_REP_MOVSB (VEC_SIZE > 16)
+#else
+# define AVOID_SHORT_DISTANCE_REP_MOVSB 0
+#endif
+
 #ifndef PREFETCH
 # define PREFETCH(addr) prefetcht0 addr
 #endif
@@ -243,7 +250,21 @@  L(movsb):
 	cmpq	%r9, %rdi
 	/* Avoid slow backward REP MOVSB.  */
 	jb	L(more_8x_vec_backward)
+# if AVOID_SHORT_DISTANCE_REP_MOVSB
+	movq	%rdi, %rcx
+	subq	%rsi, %rcx
+	jmp	2f
+# endif
 1:
+# if AVOID_SHORT_DISTANCE_REP_MOVSB
+	movq	%rsi, %rcx
+	subq	%rdi, %rcx
+2:
+/* Avoid "rep movsb" if RCX, the distance between source and destination,
+   is N*4GB + [1..63] with N >= 0.  */
+	cmpl	$63, %ecx
+	jbe	L(more_2x_vec)	/* Avoid "rep movsb" if ECX <= 63.  */
+# endif
 	mov	%RDX_LP, %RCX_LP
 	rep movsb
 L(nop):
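
For completeness, a rough reproducer sketch of the slow case the patch
addresses; the mapping and copy sizes are illustrative, it assumes a 64-bit
Linux system with room for a >4GB anonymous mapping, and it is not the test
case from the bug report:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int
main (void)
{
  size_t copy_size = 1 << 20;                  /* 1 MB, typically in the
                                                  "rep movsb" size range */
  size_t span = (1ULL << 32) + 1 + copy_size;  /* hold both src and dst */
  char *buf = mmap (NULL, span, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (buf == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }
  char *src = buf;
  char *dst = buf + (1ULL << 32) + 1;          /* distance = 4GB + 1 */
  memcpy (dst, src, copy_size);                /* slow path before the fix */
  munmap (buf, span);
  return 0;
}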