From patchwork Sat Jul 4 12:03:07 2020
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 39910
Date: Sat, 4 Jul 2020 05:03:07 -0700
From: "H.J. Lu"
To: Carlos O'Donell
Cc: libc-alpha@sourceware.org
Subject: V2 [PATCH] x86: Add thresholds for "rep movsb/stosb" to tunables
Message-ID: <20200704120307.GA1117522@gmail.com>
References: <20200703175220.1178840-1-hjl.tools@gmail.com>
 <20200703175220.1178840-3-hjl.tools@gmail.com>
 <8cef5b4a-cdda-6eaa-a859-5a410560a4ce@redhat.com>
In-Reply-To: <8cef5b4a-cdda-6eaa-a859-5a410560a4ce@redhat.com>

On Fri, Jul 03, 2020 at 03:49:21PM -0400, Carlos O'Donell wrote:
> On 7/3/20 1:52 PM, H.J. Lu wrote:
> > Add x86_rep_movsb_threshold and x86_rep_stosb_threshold to tunables
> > to update thresholds for "rep movsb" and "rep stosb" at run-time.
> >
> > Note that the user specified threshold for "rep movsb" smaller than the
> > minimum threshold will be ignored.
>
> Post v2 please.  Almost there.
>
> > ---
> >  manual/tunables.texi                          | 14 +++++++
> >  sysdeps/x86/cacheinfo.c                       | 20 ++++++++++
> >  sysdeps/x86/cpu-features.h                    |  4 ++
> >  sysdeps/x86/dl-cacheinfo.c                    | 38 +++++++++++++++++++
> >  sysdeps/x86/dl-tunables.list                  |  6 +++
> >  .../multiarch/memmove-vec-unaligned-erms.S    | 16 +-------
> >  .../multiarch/memset-vec-unaligned-erms.S     | 12 +-----
> >  7 files changed, 84 insertions(+), 26 deletions(-)
> >
> > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > index ec18b10834..61edd62425 100644
> > --- a/manual/tunables.texi
> > +++ b/manual/tunables.texi
> > @@ -396,6 +396,20 @@ to set threshold in bytes for non temporal store.
> >  This tunable is specific to i386 and x86-64.
> >  @end deftp
> >
> > +@deftp Tunable glibc.cpu.x86_rep_movsb_threshold
> > +The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user
> > +to set threshold in bytes to start using "rep movsb".
> > +
> > +This tunable is specific to i386 and x86-64.
> > +@end deftp
> > +
> > +@deftp Tunable glibc.cpu.x86_rep_stosb_threshold
> > +The @code{glibc.cpu.x86_rep_stosb_threshold} tunable allows the user
> > +to set threshold in bytes to start using "rep stosb".
> > +
> > +This tunable is specific to i386 and x86-64.
> > +@end deftp
> > +
> >  @deftp Tunable glibc.cpu.x86_ibt
> >  The @code{glibc.cpu.x86_ibt} tunable allows the user to control how
> >  indirect branch tracking (IBT) should be enabled.  Accepted values are
> > diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
> > index 8c4c7f9972..bb536d96ef 100644
> > --- a/sysdeps/x86/cacheinfo.c
> > +++ b/sysdeps/x86/cacheinfo.c
> > @@ -41,6 +41,23 @@ long int __x86_raw_shared_cache_size attribute_hidden = 1024 * 1024;
> >  /* Threshold to use non temporal store.  */
> >  long int __x86_shared_non_temporal_threshold attribute_hidden;
> >
> > +/* Threshold to use Enhanced REP MOVSB.  Since there is overhead to set
> > +   up REP MOVSB operation, REP MOVSB isn't faster on short data.  The
> > +   memcpy micro benchmark in glibc shows that 2KB is the approximate
> > +   value above which REP MOVSB becomes faster than SSE2 optimization
> > +   on processors with Enhanced REP MOVSB.  Since larger register size
> > +   can move more data with a single load and store, the threshold is
> > +   higher with larger register size.  */
> > +long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> > +
> > +/* Threshold to use Enhanced REP STOSB.  Since there is overhead to set
> > +   up REP STOSB operation, REP STOSB isn't faster on short data.  The
> > +   memset micro benchmark in glibc shows that 2KB is the approximate
> > +   value above which REP STOSB becomes faster on processors with
> > +   Enhanced REP STOSB.  Since the stored value is fixed, larger register
> > +   size has minimal impact on threshold.  */
> > +long int __x86_rep_stosb_threshold attribute_hidden = 2048;
> > +
> >  #ifndef __x86_64__
> >  /* PREFETCHW support flag for use in memory and string routines.  */
> >  int __x86_prefetchw attribute_hidden;
> > @@ -117,6 +134,9 @@ init_cacheinfo (void)
> >        __x86_shared_non_temporal_threshold
> > 	 = cpu_features->non_temporal_threshold;
> >
> > +      __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> > +      __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> > +
>
> OK. Update global from cpu_features with values.
>
> I would really like to see some kind of "assert (cpu_features->initialized);"
> that way we know we didn't break the startup sequence unintentionally.
>
> >  #ifndef __x86_64__
> >    __x86_prefetchw = cpu_features->prefetchw;
> >  #endif
> > diff --git a/sysdeps/x86/cpu-features.h b/sysdeps/x86/cpu-features.h
> > index 3aaed33cbc..002e12e11f 100644
> > --- a/sysdeps/x86/cpu-features.h
> > +++ b/sysdeps/x86/cpu-features.h
> > @@ -128,6 +128,10 @@ struct cpu_features
> >    /* PREFETCHW support flag for use in memory and string routines.  */
> >    unsigned long int prefetchw;
> >  #endif
> > +  /* Threshold to use "rep movsb".  */
> > +  unsigned long int rep_movsb_threshold;
> > +  /* Threshold to use "rep stosb".  */
> > +  unsigned long int rep_stosb_threshold;
>
> OK.
>
> >  };
> >
> >  /* Used from outside of glibc to get access to the CPU features
> > diff --git a/sysdeps/x86/dl-cacheinfo.c b/sysdeps/x86/dl-cacheinfo.c
> > index 8e2a6f552c..aff9bd1067 100644
> > --- a/sysdeps/x86/dl-cacheinfo.c
> > +++ b/sysdeps/x86/dl-cacheinfo.c
> > @@ -860,6 +860,31 @@ __init_cacheinfo (void)
> >       total shared cache size.  */
> >    unsigned long int non_temporal_threshold = (shared * threads * 3 / 4);
> >
> > +  /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
> > +  unsigned long int minimum_rep_movsb_threshold;
> > +  /* NB: The default REP MOVSB threshold is 2048 * (VEC_SIZE / 16).  See
> > +     comments for __x86_rep_movsb_threshold in cacheinfo.c.  */
> > +  unsigned long int rep_movsb_threshold;
> > +  if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable)
> > +      && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512))
> > +    {
> > +      rep_movsb_threshold = 2048 * (64 / 16);
> > +      minimum_rep_movsb_threshold = 64 * 8;
> > +    }
> > +  else if (CPU_FEATURES_ARCH_P (cpu_features,
> > +				AVX_Fast_Unaligned_Load))
> > +    {
> > +      rep_movsb_threshold = 2048 * (32 / 16);
> > +      minimum_rep_movsb_threshold = 32 * 8;
> > +    }
> > +  else
> > +    {
> > +      rep_movsb_threshold = 2048 * (16 / 16);
> > +      minimum_rep_movsb_threshold = 16 * 8;
> > +    }
> > +  /* NB: See comments for __x86_rep_stosb_threshold in cacheinfo.c.  */
> > +  unsigned long int rep_stosb_threshold = 2048;
> > +
> >  #if HAVE_TUNABLES
> >    long int tunable_size;
> >    tunable_size = TUNABLE_GET (x86_data_cache_size, long int, NULL);
> > @@ -871,11 +896,19 @@ __init_cacheinfo (void)
> >    tunable_size = TUNABLE_GET (x86_non_temporal_threshold, long int, NULL);
> >    if (tunable_size != 0)
> >      non_temporal_threshold = tunable_size;
> > +
> > +  tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
> > +  if (tunable_size > minimum_rep_movsb_threshold)
> > +    rep_movsb_threshold = tunable_size;
>
> OK. Good, we only set rep_movsb_threshold if it's greater than min.
>
> > +  tunable_size = TUNABLE_GET (x86_rep_stosb_threshold, long int, NULL);
> > +  if (tunable_size != 0)
> > +    rep_stosb_threshold = tunable_size;
>
> This should be min=1, default=2048 in dl-tunables.list, and would remove
> this code since the range is not dynamic.
>
> The point of the tunables framework is to remove such boilerplate for
> range and default processing and clearing parameters for security settings.
> >  #endif
> >
> >    cpu_features->data_cache_size = data;
> >    cpu_features->shared_cache_size = shared;
> >    cpu_features->non_temporal_threshold = non_temporal_threshold;
> > +  cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> > +  cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> >
> >  #if HAVE_TUNABLES
> >    TUNABLE_UPDATE (x86_data_cache_size, long int,
> > @@ -884,5 +917,10 @@ __init_cacheinfo (void)
> > 		  shared, 0, (long int) -1);
> >    TUNABLE_UPDATE (x86_non_temporal_threshold, long int,
> > 		  non_temporal_threshold, 0, (long int) -1);
> > +  TUNABLE_UPDATE (x86_rep_movsb_threshold, long int,
> > +		  rep_movsb_threshold, minimum_rep_movsb_threshold,
> > +		  (long int) -1);
>
> OK. Store the new value and the computed minimum.
>
> > +  TUNABLE_UPDATE (x86_rep_stosb_threshold, long int,
> > +		  rep_stosb_threshold, 0, (long int) -1);
>
> This one can be deleted.

We should go with this simple one for 2.32.

H.J.
---
Add x86_rep_movsb_threshold and x86_rep_stosb_threshold to tunables
to update thresholds for "rep movsb" and "rep stosb" at run-time.

Note that the user specified threshold for "rep movsb" smaller than the
minimum threshold will be ignored.
---
 manual/tunables.texi                          | 14 ++++++
 sysdeps/x86/cacheinfo.c                       | 46 +++++++++++++++++++
 sysdeps/x86/cpu-features.c                    |  4 ++
 sysdeps/x86/cpu-features.h                    |  4 ++
 sysdeps/x86/dl-tunables.list                  |  6 +++
 .../multiarch/memmove-vec-unaligned-erms.S    | 16 +------
 .../multiarch/memset-vec-unaligned-erms.S     | 12 +----
 7 files changed, 76 insertions(+), 26 deletions(-)

diff --git a/manual/tunables.texi b/manual/tunables.texi
index ec18b10834..61edd62425 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -396,6 +396,20 @@ to set threshold in bytes for non temporal store.
 This tunable is specific to i386 and x86-64.
 @end deftp
 
+@deftp Tunable glibc.cpu.x86_rep_movsb_threshold
+The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user
+to set threshold in bytes to start using "rep movsb".
+
+This tunable is specific to i386 and x86-64.
+@end deftp
+
+@deftp Tunable glibc.cpu.x86_rep_stosb_threshold
+The @code{glibc.cpu.x86_rep_stosb_threshold} tunable allows the user
+to set threshold in bytes to start using "rep stosb".
+
+This tunable is specific to i386 and x86-64.
+@end deftp
+
 @deftp Tunable glibc.cpu.x86_ibt
 The @code{glibc.cpu.x86_ibt} tunable allows the user to control how
 indirect branch tracking (IBT) should be enabled.  Accepted values are
diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 311502dee3..4322328a1b 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -530,6 +530,23 @@ long int __x86_raw_shared_cache_size attribute_hidden = 1024 * 1024;
 /* Threshold to use non temporal store.  */
 long int __x86_shared_non_temporal_threshold attribute_hidden;
 
+/* Threshold to use Enhanced REP MOVSB.  Since there is overhead to set
+   up REP MOVSB operation, REP MOVSB isn't faster on short data.  The
+   memcpy micro benchmark in glibc shows that 2KB is the approximate
+   value above which REP MOVSB becomes faster than SSE2 optimization
+   on processors with Enhanced REP MOVSB.  Since larger register size
+   can move more data with a single load and store, the threshold is
+   higher with larger register size.  */
+long int __x86_rep_movsb_threshold attribute_hidden = 2048;
+
+/* Threshold to use Enhanced REP STOSB.  Since there is overhead to set
+   up REP STOSB operation, REP STOSB isn't faster on short data.  The
+   memset micro benchmark in glibc shows that 2KB is the approximate
+   value above which REP STOSB becomes faster on processors with
+   Enhanced REP STOSB.  Since the stored value is fixed, larger register
+   size has minimal impact on threshold.  */
+long int __x86_rep_stosb_threshold attribute_hidden = 2048;
+
 #ifndef DISABLE_PREFETCHW
 /* PREFETCHW support flag for use in memory and string routines.  */
 int __x86_prefetchw attribute_hidden;
@@ -872,6 +889,35 @@ init_cacheinfo (void)
 	= (cpu_features->non_temporal_threshold != 0
 	   ? cpu_features->non_temporal_threshold
 	   : __x86_shared_cache_size * threads * 3 / 4);
+
+      /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
+      unsigned int minimum_rep_movsb_threshold;
+      /* NB: The default REP MOVSB threshold is 2048 * (VEC_SIZE / 16).  */
+      unsigned int rep_movsb_threshold;
+      if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable)
+	  && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512))
+	{
+	  rep_movsb_threshold = 2048 * (64 / 16);
+	  minimum_rep_movsb_threshold = 64 * 8;
+	}
+      else if (CPU_FEATURES_ARCH_P (cpu_features,
+				    AVX_Fast_Unaligned_Load))
+	{
+	  rep_movsb_threshold = 2048 * (32 / 16);
+	  minimum_rep_movsb_threshold = 32 * 8;
+	}
+      else
+	{
+	  rep_movsb_threshold = 2048 * (16 / 16);
+	  minimum_rep_movsb_threshold = 16 * 8;
+	}
+      if (cpu_features->rep_movsb_threshold > minimum_rep_movsb_threshold)
+	__x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
+      else
+	__x86_rep_movsb_threshold = rep_movsb_threshold;
+
+      if (cpu_features->rep_stosb_threshold)
+	__x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
 }
 #endif
diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index c351bdd54a..c7673a2eb9 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -606,6 +606,10 @@ no_cpuid:
   TUNABLE_GET (hwcaps, tunable_val_t *, TUNABLE_CALLBACK (set_hwcaps));
   cpu_features->non_temporal_threshold
     = TUNABLE_GET (x86_non_temporal_threshold, long int, NULL);
+  cpu_features->rep_movsb_threshold
+    = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
+  cpu_features->rep_stosb_threshold
+    = TUNABLE_GET (x86_rep_stosb_threshold, long int, NULL);
   cpu_features->data_cache_size
     = TUNABLE_GET (x86_data_cache_size, long int, NULL);
   cpu_features->shared_cache_size
diff --git a/sysdeps/x86/cpu-features.h b/sysdeps/x86/cpu-features.h
index d66dc206f7..39d2b59d63 100644
--- a/sysdeps/x86/cpu-features.h
+++ b/sysdeps/x86/cpu-features.h
@@ -102,6 +102,10 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* Threshold to use "rep movsb".  */
+  unsigned long int rep_movsb_threshold;
+  /* Threshold to use "rep stosb".  */
+  unsigned long int rep_stosb_threshold;
 };
 
 /* Used from outside of glibc to get access to the CPU features
diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
index 251b926ce4..43bf6c2389 100644
--- a/sysdeps/x86/dl-tunables.list
+++ b/sysdeps/x86/dl-tunables.list
@@ -30,6 +30,12 @@ glibc {
     x86_non_temporal_threshold {
       type: SIZE_T
     }
+    x86_rep_movsb_threshold {
+      type: SIZE_T
+    }
+    x86_rep_stosb_threshold {
+      type: SIZE_T
+    }
     x86_data_cache_size {
       type: SIZE_T
     }
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 74953245aa..bd5dc1a3f3 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -56,17 +56,6 @@
 # endif
 #endif
 
-/* Threshold to use Enhanced REP MOVSB.  Since there is overhead to set
-   up REP MOVSB operation, REP MOVSB isn't faster on short data.  The
-   memcpy micro benchmark in glibc shows that 2KB is the approximate
-   value above which REP MOVSB becomes faster than SSE2 optimization
-   on processors with Enhanced REP MOVSB.  Since larger register size
-   can move more data with a single load and store, the threshold is
-   higher with larger register size.  */
-#ifndef REP_MOVSB_THRESHOLD
-# define REP_MOVSB_THRESHOLD	(2048 * (VEC_SIZE / 16))
-#endif
-
 #ifndef PREFETCH
 # define PREFETCH(addr) prefetcht0 addr
 #endif
@@ -253,9 +242,6 @@ L(movsb):
 	leaq	(%rsi,%rdx), %r9
 	cmpq	%r9, %rdi
 	/* Avoid slow backward REP MOVSB.  */
-# if REP_MOVSB_THRESHOLD <= (VEC_SIZE * 8)
-#  error Unsupported REP_MOVSB_THRESHOLD and VEC_SIZE!
-# endif
 	jb	L(more_8x_vec_backward)
 1:
 	mov	%RDX_LP, %RCX_LP
@@ -331,7 +317,7 @@ L(between_2_3):
 
 #if defined USE_MULTIARCH && IS_IN (libc)
 L(movsb_more_2x_vec):
-	cmpq	$REP_MOVSB_THRESHOLD, %rdx
+	cmp	__x86_rep_movsb_threshold(%rip), %RDX_LP
 	ja	L(movsb)
 #endif
 L(more_2x_vec):
diff --git a/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
index af2299709c..2bfc95de05 100644
--- a/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
@@ -58,16 +58,6 @@
 # endif
 #endif
 
-/* Threshold to use Enhanced REP STOSB.  Since there is overhead to set
-   up REP STOSB operation, REP STOSB isn't faster on short data.  The
-   memset micro benchmark in glibc shows that 2KB is the approximate
-   value above which REP STOSB becomes faster on processors with
-   Enhanced REP STOSB.  Since the stored value is fixed, larger register
-   size has minimal impact on threshold.  */
-#ifndef REP_STOSB_THRESHOLD
-# define REP_STOSB_THRESHOLD	2048
-#endif
-
 #ifndef SECTION
 # error SECTION is not defined!
 #endif
@@ -181,7 +171,7 @@ ENTRY (MEMSET_SYMBOL (__memset, unaligned_erms))
 	ret
 L(stosb_more_2x_vec):
-	cmpq	$REP_STOSB_THRESHOLD, %rdx
+	cmp	__x86_rep_stosb_threshold(%rip), %RDX_LP
 	ja	L(stosb)
 #endif
 L(more_2x_vec):