From patchwork Mon Jul 26 12:00:55 2021
X-Patchwork-Id: 44478
From: "H.J. Lu" <hjl.tools@gmail.com>
To: libc-alpha@sourceware.org
Subject: [PATCH] x86-64: Add Avoid_Short_Distance_REP_MOVSB
Date: Mon, 26 Jul 2021 05:00:55 -0700
Message-Id: <20210726120055.1089971-1-hjl.tools@gmail.com>
List-Id: Libc-alpha mailing list
commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Sat Jan 25 14:19:40 2020 -0800

    x86-64: Avoid rep movsb with short distance [BZ #27130]

introduced some regressions on Intel processors without Fast Short REP
MOV (FSRM).  Add Avoid_Short_Distance_REP_MOVSB to avoid rep movsb with
short distance only on Intel processors with FSRM.  bench-memmove-large
on Skylake server shows that cycles of __memmove_evex_unaligned_erms
are improved for the following data sizes:

                                     before     after    Improvement
length=4127, align1=3, align2=0:     479.38    343.00        28%
length=4223, align1=9, align2=5:     405.62    335.50        17%
length=8223, align1=3, align2=0:     786.12    495.00        37%
length=8319, align1=9, align2=5:     256.69    170.38        33%
length=16415, align1=3, align2=0:   1436.88    839.50        41%
length=16511, align1=9, align2=5:   1375.50    840.62        39%
length=32799, align1=3, align2=0:   2890.00   1850.62        36%
length=32895, align1=9, align2=5:   2891.38   1948.62        32%

There are no regressions on Ice Lake server.
---
 sysdeps/x86/cacheinfo.h                                     | 7 +++++++
 sysdeps/x86/cpu-features.c                                  | 5 +++++
 .../x86/include/cpu-features-preferred_feature_index_1.def  | 1 +
 sysdeps/x86/sysdep.h                                        | 3 +++
 sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S       | 5 +++++
 5 files changed, 21 insertions(+)

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index eba8dbc4a6..174ea38f5b 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -49,6 +49,9 @@ long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 /* Threshold to stop using Enhanced REP MOVSB.  */
 long int __x86_rep_movsb_stop_threshold attribute_hidden;
 
+/* String/memory function control.  */
+int __x86_string_control attribute_hidden;
+
 static void
 init_cacheinfo (void)
 {
@@ -71,5 +74,9 @@ init_cacheinfo (void)
   __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
   __x86_rep_movsb_stop_threshold = cpu_features->rep_movsb_stop_threshold;
+
+  if (CPU_FEATURES_ARCH_P (cpu_features, Avoid_Short_Distance_REP_MOVSB))
+    __x86_string_control
+      |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
 }
 #endif
diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 706a172ba9..645bba6314 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -555,6 +555,11 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    cpu_features->preferred[index_arch_Prefer_AVX2_STRCMP]
 	      |= bit_arch_Prefer_AVX2_STRCMP;
 	}
+
+      /* Avoid short distance REP MOVSB on processors with FSRM.  */
+      if (CPU_FEATURES_CPU_P (cpu_features, FSRM))
+	cpu_features->preferred[index_arch_Avoid_Short_Distance_REP_MOVSB]
+	  |= bit_arch_Avoid_Short_Distance_REP_MOVSB;
     }
   /* This spells out "AuthenticAMD" or "HygonGenuine".  */
   else if ((ebx == 0x68747541 && ecx == 0x444d4163 && edx == 0x69746e65)
diff --git a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
index 133aab19f1..d7c93f00c5 100644
--- a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
+++ b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
@@ -33,3 +33,4 @@ BIT (Prefer_No_AVX512)
 BIT (MathVec_Prefer_No_AVX512)
 BIT (Prefer_FSRM)
 BIT (Prefer_AVX2_STRCMP)
+BIT (Avoid_Short_Distance_REP_MOVSB)
diff --git a/sysdeps/x86/sysdep.h b/sysdeps/x86/sysdep.h
index 51c069bfe1..35cb90d507 100644
--- a/sysdeps/x86/sysdep.h
+++ b/sysdeps/x86/sysdep.h
@@ -57,6 +57,9 @@ enum cf_protection_level
 #define STATE_SAVE_MASK \
   ((1 << 1) | (1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 7))
 
+/* Avoid short distance REP MOVSB.  */
+#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)
+
 #ifdef __ASSEMBLER__
 
 /* Syntactic details of assembler.  */
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index a783da5de2..9f02624375 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -325,12 +325,16 @@ L(movsb):
 	/* Avoid slow backward REP MOVSB.  */
 	jb	L(more_8x_vec_backward)
 # if AVOID_SHORT_DISTANCE_REP_MOVSB
+	testl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
+	jz	3f
 	movq	%rdi, %rcx
 	subq	%rsi, %rcx
 	jmp	2f
 # endif
 1:
 # if AVOID_SHORT_DISTANCE_REP_MOVSB
+	testl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
+	jz	3f
 	movq	%rsi, %rcx
 	subq	%rdi, %rcx
 2:
@@ -338,6 +342,7 @@ L(movsb):
 	   is N*4GB + [1..63] with N >= 0.  */
 	cmpl	$63, %ecx
 	jbe	L(more_2x_vec)	/* Avoid "rep movsb" if ECX <= 63.  */
+3:
 # endif
 	mov	%RDX_LP, %RCX_LP
 	rep movsb

(Editorial note: the added instruction reads the control word with testl rather
than andl; an andl with a memory destination would write the masked result back
and destructively clear any other bits of __x86_string_control.)
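
The two assembly hunks are easier to follow restated in C.  The sketch
below is purely illustrative and not part of the patch: the function
name and the user-space variable mirroring __x86_string_control are
made up for this example.  It shows the condition under which the new
code steers a copy away from "rep movsb":

/* Illustrative sketch of the gate added in L(movsb): with the control
   bit set, skip "rep movsb" when the copy distance, taken in the
   direction of the copy, is N*4GB + [1..63] with N >= 0.  Names here
   are hypothetical.  */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for __x86_string_control and the bit from
   sysdeps/x86/sysdep.h.  */
#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)
static int x86_string_control;

static bool
avoid_short_distance_rep_movsb (uintptr_t dst, uintptr_t src,
				bool forward)
{
  /* With the bit clear (non-FSRM processors), the assembly jumps
     straight to the "rep movsb" path (label 3).  */
  if (!(x86_string_control
	& X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB))
    return false;

  /* Forward copies compute dst - src, backward copies src - dst.  */
  uintptr_t dist = forward ? dst - src : src - dst;

  /* "cmpl $63, %ecx" examines only the low 32 bits of the distance,
     so N*4GB + [1..63] trips the check just like [1..63] does.  */
  return (uint32_t) dist <= 63;
}

int
main (void)
{
  char buf[256];
  /* With the bit set, a copy whose destination is 16 bytes past the
     source counts as short distance and avoids "rep movsb".  */
  x86_string_control |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
  printf ("%d\n", avoid_short_distance_rep_movsb ((uintptr_t) (buf + 16),
						  (uintptr_t) buf, true));
  return 0;
}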
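
For readers reproducing the benchmark split (Skylake server lacks FSRM,
Ice Lake server has it), the new preferred bit is derived purely from
the FSRM CPUID flag, which is reported in CPUID.(EAX=7,ECX=0):EDX
bit 4.  A minimal stand-alone probe, assuming GCC/Clang's <cpuid.h>
helper __get_cpuid_count and again not part of the patch:

/* Illustrative sketch: report whether this CPU advertises FSRM,
   CPUID.(EAX=7,ECX=0):EDX bit 4.  Build with GCC or Clang on x86.  */
#include <cpuid.h>
#include <stdio.h>

int
main (void)
{
  unsigned int eax, ebx, ecx, edx;

  /* Leaf 7, subleaf 0 holds the structured extended feature flags;
     __get_cpuid_count returns 0 if the leaf is unsupported.  */
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    {
      puts ("CPUID leaf 7 not supported");
      return 1;
    }
  printf ("FSRM: %s\n", (edx & (1u << 4)) ? "yes" : "no");
  return 0;
}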