V2 [PATCH] x86-64: Avoid rep movsb with short distance [BZ #27130]
Commit Message
When copying with "rep movsb", if the distance between source and
destination is N*4GB + [1..63] with N >= 0, performance may be very
slow. This patch updates memmove-vec-unaligned-erms.S for AVX and
AVX512 versions with the distance in RCX:
cmpl $63, %ecx
// Don't use "rep movsb" if ECX <= 63
jbe L(Don't use rep movsb)
Use "rep movsb"
Benchtest data from bench-memcpy, bench-memcpy-large, bench-memcpy-random
and bench-memcpy-walk on Skylake, Ice Lake and Tiger Lake show that the
performance impact is within the noise range, as "rep movsb" is only used
for data sizes >= 4KB.
Changes from V1:
1. Check distance of N*4GB + [1..63] with N >= 0 instead of [1..63].
---
.../multiarch/memmove-vec-unaligned-erms.S | 21 +++++++++++++++++++
1 file changed, 21 insertions(+)
Comments
* H. J. Lu via Libc-alpha:
> 1:
> +# if AVOID_SHORT_DISTANCE_REP_MOVSB
> + movq %rsi, %rcx
> + subq %rdi, %rcx
> +2:
> +/* Avoid "rep movsb" if RCX, the distance between source and destination,
> + is N*4GB + [1..63] with N >= 0. */
> + cmpl $63, %ecx
> + jbe L(more_2x_vec) /* Avoid "rep movsb" if ECX <= 63. */
> +# endif
> mov %RDX_LP, %RCX_LP
> rep movsb
> L(nop):
Why not use _LP names here? I think the %ecx comparison at least can
give false results on x86-64 (64-bit).
Thanks,
Florian
On Mon, Jan 4, 2021 at 7:22 AM Florian Weimer <fweimer@redhat.com> wrote:
>
> * H. J. Lu via Libc-alpha:
>
> > 1:
> > +# if AVOID_SHORT_DISTANCE_REP_MOVSB
> > + movq %rsi, %rcx
> > + subq %rdi, %rcx
> > +2:
> > +/* Avoid "rep movsb" if RCX, the distance between source and destination,
> > + is N*4GB + [1..63] with N >= 0. */
> > + cmpl $63, %ecx
> > + jbe L(more_2x_vec) /* Avoid "rep movsb" if ECX <= 63. */
> > +# endif
> > mov %RDX_LP, %RCX_LP
> > rep movsb
> > L(nop):
>
> Why not use _LP names here? I think the %ecx comparison at least can
> give false results on x86-64 (64-bit).
>
This is done on purpose since we want to avoid "rep movsb" for distances of
N*4GB + [1..63] with N >= 0, which include 0x100000003.
* H. J. Lu:
> On Mon, Jan 4, 2021 at 7:22 AM Florian Weimer <fweimer@redhat.com> wrote:
>>
>> * H. J. Lu via Libc-alpha:
>>
>> > 1:
>> > +# if AVOID_SHORT_DISTANCE_REP_MOVSB
>> > + movq %rsi, %rcx
>> > + subq %rdi, %rcx
>> > +2:
>> > +/* Avoid "rep movsb" if RCX, the distance between source and destination,
>> > + is N*4GB + [1..63] with N >= 0. */
>> > + cmpl $63, %ecx
>> > + jbe L(more_2x_vec) /* Avoid "rep movsb" if ECX <= 63. */
>> > +# endif
>> > mov %RDX_LP, %RCX_LP
>> > rep movsb
>> > L(nop):
>>
>> Why not use _LP names here? I think the %ecx comparison at least can
>> give false results on x86-64 (64-bit).
>>
>
> This is done on purpose since we want to avoid "rep movsb" for distances of
> N*4GB + [1..63] with N >= 0, which include 0x100000003.
Ah, and the comment is quite clear (the commit subject less so).
I tried to make sense of the assembler code, and I think the change is
okay because L(movsb) is only reached when there is more to copy than
twice the vector size.
Thanks,
Florian
On Mon, Jan 4, 2021 at 7:47 AM Florian Weimer <fweimer@redhat.com> wrote:
>
> * H. J. Lu:
>
> > On Mon, Jan 4, 2021 at 7:22 AM Florian Weimer <fweimer@redhat.com> wrote:
> >>
> >> * H. J. Lu via Libc-alpha:
> >>
> >> > 1:
> >> > +# if AVOID_SHORT_DISTANCE_REP_MOVSB
> >> > + movq %rsi, %rcx
> >> > + subq %rdi, %rcx
> >> > +2:
> >> > +/* Avoid "rep movsb" if RCX, the distance between source and destination,
> >> > + is N*4GB + [1..63] with N >= 0. */
> >> > + cmpl $63, %ecx
> >> > + jbe L(more_2x_vec) /* Avoid "rep movsb" if ECX <= 63. */
> >> > +# endif
> >> > mov %RDX_LP, %RCX_LP
> >> > rep movsb
> >> > L(nop):
> >>
> >> Why not use _LP names here? I think the %ecx comparison at least can
> >> give false results on x86-64 (64-bit).
> >>
> >
> > This is done on purpose since we want to avoid "rep movsb" for distances of
> > N*4GB + [1..63] with N >= 0, which include 0x100000003.
>
> Ah, and the comment is quite clear (the commit subject less so).
It isn't easy to describe in so few words.
> I tried to make sense of the assembler code, and I think the change is
> okay because L(movsb) is only reached when there is more to copy than
> twice the vector size.
>
That is correct. I am checking it in. I will backport it to release branches
next week.
Thanks.
@@ -56,6 +56,13 @@
# endif
#endif
+/* Avoid short distance rep movsb only with non-SSE vector. */
+#ifndef AVOID_SHORT_DISTANCE_REP_MOVSB
+# define AVOID_SHORT_DISTANCE_REP_MOVSB (VEC_SIZE > 16)
+#else
+# define AVOID_SHORT_DISTANCE_REP_MOVSB 0
+#endif
+
#ifndef PREFETCH
# define PREFETCH(addr) prefetcht0 addr
#endif
@@ -243,7 +250,21 @@ L(movsb):
cmpq %r9, %rdi
/* Avoid slow backward REP MOVSB. */
jb L(more_8x_vec_backward)
+# if AVOID_SHORT_DISTANCE_REP_MOVSB
+ movq %rdi, %rcx
+ subq %rsi, %rcx
+ jmp 2f
+# endif
1:
+# if AVOID_SHORT_DISTANCE_REP_MOVSB
+ movq %rsi, %rcx
+ subq %rdi, %rcx
+2:
+/* Avoid "rep movsb" if RCX, the distance between source and destination,
+ is N*4GB + [1..63] with N >= 0. */
+ cmpl $63, %ecx
+ jbe L(more_2x_vec) /* Avoid "rep movsb" if ECX <= 63. */
+# endif
mov %RDX_LP, %RCX_LP
rep movsb
L(nop):