From patchwork Tue Jul 27 16:05:50 2021
From: "H.J. Lu"
Date: Tue, 27 Jul 2021 09:05:50 -0700
Subject: [PATCH v2] x86-64: Add Avoid_Short_Distance_REP_MOVSB
To: Noah Goldstein
Cc: GNU C Library
References: <20210726120055.1089971-1-hjl.tools@gmail.com>

On Mon, Jul 26, 2021 at 9:06 PM Noah Goldstein wrote:
>
> On Mon, Jul 26, 2021 at 11:11 PM H.J. Lu via Libc-alpha wrote:
>>
>> On Mon, Jul 26, 2021 at 7:15 PM Carlos O'Donell wrote:
>> >
>> > On 7/26/21 8:00 AM, H.J. Lu via Libc-alpha wrote:
>> > > commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
>> > > Author: H.J. Lu
>> > > Date:   Sat Jan 25 14:19:40 2020 -0800
>> > >
>> > >     x86-64: Avoid rep movsb with short distance [BZ #27130]
>> > >
>> > > introduced some regressions on Intel processors without Fast Short REP
>> > > MOV (FSRM).  Add Avoid_Short_Distance_REP_MOVSB to avoid rep movsb with
>> > > short distance only on Intel processors with FSRM.  bench-memmove-large
>> > > on Skylake server shows that cycles of __memmove_evex_unaligned_erms are
>> > > improved for the following data sizes:
>> > >
>> > >                                      before     after    Improvement
>> > > length=4127, align1=3, align2=0:     479.38    343.00        28%
>> > > length=4223, align1=9, align2=5:     405.62    335.50        17%
>> > > length=8223, align1=3, align2=0:     786.12    495.00        37%
>> > > length=8319, align1=9, align2=5:     256.69    170.38        33%
>> > > length=16415, align1=3, align2=0:   1436.88    839.50        41%
>> > > length=16511, align1=9, align2=5:   1375.50    840.62        39%
>> > > length=32799, align1=3, align2=0:   2890.00   1850.62        36%
>> > > length=32895, align1=9, align2=5:   2891.38   1948.62        32%
>> > >
>> > > There is no regression on Ice Lake server.
>> >
>> > At this point we're waiting on Noah to provide feedback on the performance
>> > results given the alignment nop insertion you provided as a follow-up patch
>
> The results with the padding look good!
>
>> We are testing 25-byte NOP padding now:
>>
>> https://gitlab.com/x86-glibc/glibc/-/commit/de8985640a568786a59576716db54e0749d420e8
>
> How did you come to the exact padding choice used?

I first replaced the 9-byte instruction sequence:

	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
	jz	3f

with a 9-byte NOP and reproduced the regression on Tiger Lake.  That
confirmed that the code layout caused the regression.  I first tried
adding ".p2align 4" to branch targets, but it made no difference.  Then
I started adding different sizes of NOPs after

	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
	jz	3f
	movq	%rdi, %rcx
	subq	%rsi, %rcx
	jmp	2f

with ".nops N".  I started with N == 1 and doubled N in each step.  I
noticed that the improvement started at N == 32, and then bisected
between 16 and 32:

1. 24 and 32 are good.
2. 24 and 28 are good.
3. 25 is the best overall.
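Schematically, each experiment looked like this (an illustration only; the
final v2 patch below encodes the chosen padding as explicit multi-byte NOPs
rather than the ".nops" directive):

# if AVOID_SHORT_DISTANCE_REP_MOVSB
	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
	jz	3f
	movq	%rdi, %rcx
	subq	%rsi, %rcx
	jmp	2f
	/* Experimental padding: N = 1, 2, 4, ..., 32, then bisected to 25.  */
	.nops	25
# endif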
>>
>> > (unless you can confirm this yourself).
>> >
>> > Looking forward to a v2 that incorporates the alignment fix (pending Noah's
>> > comments), and my suggestions below.
>> >
>> > > ---
>> > >  sysdeps/x86/cacheinfo.h                                     | 7 +++++++
>> > >  sysdeps/x86/cpu-features.c                                  | 5 +++++
>> > >  .../x86/include/cpu-features-preferred_feature_index_1.def  | 1 +
>> > >  sysdeps/x86/sysdep.h                                        | 3 +++
>> > >  sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S       | 5 +++++
>> > >  5 files changed, 21 insertions(+)
>> > >
>> > > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
>> > > index eba8dbc4a6..174ea38f5b 100644
>> > > --- a/sysdeps/x86/cacheinfo.h
>> > > +++ b/sysdeps/x86/cacheinfo.h
>> > > @@ -49,6 +49,9 @@ long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>> > >  /* Threshold to stop using Enhanced REP MOVSB.  */
>> > >  long int __x86_rep_movsb_stop_threshold attribute_hidden;
>> > >
>> > > +/* String/memory function control.  */
>> > > +int __x86_string_control attribute_hidden;
>> >
>> > Please expand comment.
>> >
>> > Suggest:
>> >
>> > /* A bit-wise OR of string/memory requirements for optimal performance
>> >    e.g. X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB.  These bits
>> >    are used at runtime to tune implementation behavior.  */
>> > int __x86_string_control attribute_hidden;
>>
>> I will fix it in the v2 patch.
>>
>> Thanks.
>>
>> > > +
>> > >  static void
>> > >  init_cacheinfo (void)
>> > >  {
>> > > @@ -71,5 +74,9 @@ init_cacheinfo (void)
>> > >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
>> > >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>> > >    __x86_rep_movsb_stop_threshold = cpu_features->rep_movsb_stop_threshold;
>> > > +
>> > > +  if (CPU_FEATURES_ARCH_P (cpu_features, Avoid_Short_Distance_REP_MOVSB))
>> > > +    __x86_string_control
>> > > +      |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
>> >
>> > OK.
>> >
>> > >  }
>> > >  #endif
>> > > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
>> > > index 706a172ba9..645bba6314 100644
>> > > --- a/sysdeps/x86/cpu-features.c
>> > > +++ b/sysdeps/x86/cpu-features.c
>> > > @@ -555,6 +555,11 @@ init_cpu_features (struct cpu_features *cpu_features)
>> > >  	  cpu_features->preferred[index_arch_Prefer_AVX2_STRCMP]
>> > >  	    |= bit_arch_Prefer_AVX2_STRCMP;
>> > >  	}
>> > > +
>> > > +      /* Avoid short distance REP MOVSB on processors with FSRM.  */
>> > > +      if (CPU_FEATURES_CPU_P (cpu_features, FSRM))
>> > > +	cpu_features->preferred[index_arch_Avoid_Short_Distance_REP_MOVSB]
>> > > +	  |= bit_arch_Avoid_Short_Distance_REP_MOVSB;
>> >
>> > OK.
>> >
>> > >      }
>> > >    /* This spells out "AuthenticAMD" or "HygonGenuine".  */
>> > >    else if ((ebx == 0x68747541 && ecx == 0x444d4163 && edx == 0x69746e65)
>> > > diff --git a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
>> > > index 133aab19f1..d7c93f00c5 100644
>> > > --- a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
>> > > +++ b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
>> > > @@ -33,3 +33,4 @@ BIT (Prefer_No_AVX512)
>> > >  BIT (MathVec_Prefer_No_AVX512)
>> > >  BIT (Prefer_FSRM)
>> > >  BIT (Prefer_AVX2_STRCMP)
>> > > +BIT (Avoid_Short_Distance_REP_MOVSB)
>> >
>> > OK.
>> >
>> > > diff --git a/sysdeps/x86/sysdep.h b/sysdeps/x86/sysdep.h
>> > > index 51c069bfe1..35cb90d507 100644
>> > > --- a/sysdeps/x86/sysdep.h
>> > > +++ b/sysdeps/x86/sysdep.h
>> > > @@ -57,6 +57,9 @@ enum cf_protection_level
>> > >  #define STATE_SAVE_MASK \
>> > >    ((1 << 1) | (1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 7))
>> > >
>> >
>> > Suggest adding:
>> >
>> > /* Constants for bits in __x86_string_control:  */
>> >
>> > > +/* Avoid short distance REP MOVSB.  */
>> > > +#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)
>> >
>> > OK.
>> >
>> > > +
>> > >  #ifdef __ASSEMBLER__
>> > >
>> > >  /* Syntactic details of assembler.  */
>> > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
>> > > index a783da5de2..9f02624375 100644
>> > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
>> > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
>> > > @@ -325,12 +325,16 @@ L(movsb):
>> > >  	/* Avoid slow backward REP MOVSB.  */
>> > >  	jb	L(more_8x_vec_backward)
>> > >  # if AVOID_SHORT_DISTANCE_REP_MOVSB
>> > > +	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
>> > > +	jz	3f
>> >
>> > OK.
>> >
>> > >  	movq	%rdi, %rcx
>> > >  	subq	%rsi, %rcx
>> > >  	jmp	2f
>> > >  # endif
>> > > 1:
>> > >  # if AVOID_SHORT_DISTANCE_REP_MOVSB
>> > > +	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
>> > > +	jz	3f
>> >
>> > OK.
>> >
>> > >  	movq	%rsi, %rcx
>> > >  	subq	%rdi, %rcx
>> > > 2:
>> > > @@ -338,6 +342,7 @@ L(movsb):
>> > >  	   is N*4GB + [1..63] with N >= 0.  */
>> > >  	cmpl	$63, %ecx
>> > >  	jbe	L(more_2x_vec)	/* Avoid "rep movsb" if ECX <= 63.  */
>> > > +3:
>> >
>> > OK.
>> >
>> > >  # endif
>> > >  	mov	%RDX_LP, %RCX_LP
>> > >  	rep movsb
>> > >
>> >
>> > --
>> > Cheers,
>> > Carlos.
>>
>> --
>> H.J.
Here is the v2 patch:

1. Add a 25-byte NOP padding after JMP for Avoid_Short_Distance_REP_MOVSB,
   which improves bench-memcpy-random performance on Tiger Lake by ~30%.
2. Update comments for __x86_string_control.
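For context, the runtime mechanism itself is small: init_cacheinfo sets a
bit in __x86_string_control when the CPU reports FSRM, and the memmove code
tests that bit before comparing the low 32 bits of the copy distance against
63.  A C-level sketch of that flow (illustration only; the helper names are
hypothetical, and the real test is the "andl ...(%rip); jz" sequence in the
assembly below):

#include <stdbool.h>

/* Bit in __x86_string_control, as defined in sysdeps/x86/sysdep.h.  */
#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)

static int __x86_string_control;	/* Set once at startup.  */

/* Hypothetical helper: what init_cacheinfo does for this feature.  */
static void
init_string_control (bool cpu_has_fsrm)
{
  if (cpu_has_fsrm)
    __x86_string_control
      |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
}

/* Hypothetical helper: mirrors the "cmpl $63, %ecx; jbe" check, i.e.
   avoid "rep movsb" when the low 32 bits of the distance are <= 63.  */
static bool
may_use_rep_movsb (const char *dst, const char *src)
{
  if (__x86_string_control
      & X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB)
    {
      unsigned int low = (unsigned int) (dst - src);
      if (low <= 63)
	return false;
    }
  return true;
}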
From 7fbe4770ecd7b9d86b733af02f1182d214e52c45 Mon Sep 17 00:00:00 2001
From: "H.J. Lu"
Date: Thu, 22 Jul 2021 20:26:25 -0700
Subject: [PATCH v2] x86-64: Add Avoid_Short_Distance_REP_MOVSB

commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
Author: H.J. Lu
Date:   Sat Jan 25 14:19:40 2020 -0800

    x86-64: Avoid rep movsb with short distance [BZ #27130]

introduced some regressions on Intel processors without Fast Short REP
MOV (FSRM).  Add Avoid_Short_Distance_REP_MOVSB to avoid rep movsb with
short distance only on Intel processors with FSRM.  bench-memmove-large
on Skylake server shows that cycles of __memmove_evex_unaligned_erms
improve for the following data sizes:

                                     before     after    Improvement
length=4127, align1=3, align2=0:     479.38    343.00        28%
length=4223, align1=9, align2=5:     405.62    335.50        17%
length=8223, align1=3, align2=0:     786.12    495.00        37%
length=8319, align1=9, align2=5:     256.69    170.38        33%
length=16415, align1=3, align2=0:   1436.88    839.50        41%
length=16511, align1=9, align2=5:   1375.50    840.62        39%
length=32799, align1=3, align2=0:   2890.00   1850.62        36%
length=32895, align1=9, align2=5:   2891.38   1948.62        32%

Add a 25-byte NOP padding after JMP for Avoid_Short_Distance_REP_MOVSB,
which improves bench-memcpy-random performance on Tiger Lake by ~30%.
---
 sysdeps/x86/cacheinfo.h                               |  9 +++++++++
 sysdeps/x86/cpu-features.c                            |  5 +++++
 .../cpu-features-preferred_feature_index_1.def        |  1 +
 sysdeps/x86/sysdep.h                                  |  5 +++++
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S     | 13 +++++++++++++
 5 files changed, 33 insertions(+)

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index eba8dbc4a6..41d2c81369 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -49,6 +49,11 @@ long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 /* Threshold to stop using Enhanced REP MOVSB.  */
 long int __x86_rep_movsb_stop_threshold attribute_hidden;
 
+/* A bit-wise OR of string/memory requirements for optimal performance
+   e.g. X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB.  These bits
+   are used at runtime to tune implementation behavior.  */
+int __x86_string_control attribute_hidden;
+
 static void
 init_cacheinfo (void)
 {
@@ -71,5 +76,9 @@ init_cacheinfo (void)
   __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
   __x86_rep_movsb_stop_threshold = cpu_features->rep_movsb_stop_threshold;
+
+  if (CPU_FEATURES_ARCH_P (cpu_features, Avoid_Short_Distance_REP_MOVSB))
+    __x86_string_control
+      |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
 }
 #endif
diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 706a172ba9..645bba6314 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -555,6 +555,11 @@ init_cpu_features (struct cpu_features *cpu_features)
 	  cpu_features->preferred[index_arch_Prefer_AVX2_STRCMP]
 	    |= bit_arch_Prefer_AVX2_STRCMP;
 	}
+
+      /* Avoid short distance REP MOVSB on processors with FSRM.  */
+      if (CPU_FEATURES_CPU_P (cpu_features, FSRM))
+	cpu_features->preferred[index_arch_Avoid_Short_Distance_REP_MOVSB]
+	  |= bit_arch_Avoid_Short_Distance_REP_MOVSB;
     }
   /* This spells out "AuthenticAMD" or "HygonGenuine".  */
   else if ((ebx == 0x68747541 && ecx == 0x444d4163 && edx == 0x69746e65)
diff --git a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
index 133aab19f1..d7c93f00c5 100644
--- a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
+++ b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
@@ -33,3 +33,4 @@ BIT (Prefer_No_AVX512)
 BIT (MathVec_Prefer_No_AVX512)
 BIT (Prefer_FSRM)
 BIT (Prefer_AVX2_STRCMP)
+BIT (Avoid_Short_Distance_REP_MOVSB)
diff --git a/sysdeps/x86/sysdep.h b/sysdeps/x86/sysdep.h
index 51c069bfe1..cac1d762fb 100644
--- a/sysdeps/x86/sysdep.h
+++ b/sysdeps/x86/sysdep.h
@@ -57,6 +57,11 @@ enum cf_protection_level
 #define STATE_SAVE_MASK \
   ((1 << 1) | (1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 7))
 
+/* Constants for bits in __x86_string_control:  */
+
+/* Avoid short distance REP MOVSB.  */
+#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)
+
 #ifdef __ASSEMBLER__
 
 /* Syntactic details of assembler.  */
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index a783da5de2..8d42fe517b 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -325,12 +325,24 @@ L(movsb):
 	/* Avoid slow backward REP MOVSB.  */
 	jb	L(more_8x_vec_backward)
 # if AVOID_SHORT_DISTANCE_REP_MOVSB
+	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
+	jz	3f
 	movq	%rdi, %rcx
 	subq	%rsi, %rcx
 	jmp	2f
+	/* Add a 25-byte NOP padding here to improve bench-memcpy-random
+	   performance on Skylake and Tiger Lake.  */
+	/* data16 cs nopw 0x0(%rax,%rax,1)  */
+	.byte 0x66, 0x66, 0x2e, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00
+	/* data16 cs nopw 0x0(%rax,%rax,1)  */
+	.byte 0x66, 0x66, 0x2e, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00
+	/* nopl (%rax)  */
+	.byte 0x0f, 0x1f, 0x00
 # endif
 1:
 # if AVOID_SHORT_DISTANCE_REP_MOVSB
+	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
+	jz	3f
 	movq	%rsi, %rcx
 	subq	%rdi, %rcx
 2:
@@ -338,6 +350,7 @@ L(movsb):
 	   is N*4GB + [1..63] with N >= 0.  */
 	cmpl	$63, %ecx
 	jbe	L(more_2x_vec)	/* Avoid "rep movsb" if ECX <= 63.  */
+3:
 # endif
 	mov	%RDX_LP, %RCX_LP
 	rep movsb
-- 
2.31.1