From patchwork Wed May 12 09:26:40 2021
X-Patchwork-Submitter: Naohiro Tamura
X-Patchwork-Id: 43386
From: Naohiro Tamura
To: libc-alpha@sourceware.org
Subject: [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64
Date: Wed, 12 May 2021 09:26:40 +0000
Message-Id: <20210512092640.901076-1-naohirot@fujitsu.com>
In-Reply-To: <20210512092308.900998-1-naohirot@fujitsu.com>
References: <20210512092308.900998-1-naohirot@fujitsu.com>

From: Naohiro Tamura

This patch checks whether the assembler supports '-march=armv8.2-a+sve'
to generate SVE code, and defines the HAVE_AARCH64_SVE_ASM macro
accordingly.
---
 config.h.in                  |  5 +++++
 sysdeps/aarch64/configure    | 28 ++++++++++++++++++++++++++++
 sysdeps/aarch64/configure.ac | 15 +++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/config.h.in b/config.h.in
index 99036b887f..13fba9bb8d 100644
--- a/config.h.in
+++ b/config.h.in
@@ -121,6 +121,11 @@
 /* AArch64 PAC-RET code generation is enabled.  */
 #define HAVE_AARCH64_PAC_RET 0
 
+/* Assembler supports ARMv8.2-A SVE.
+   This macro becomes obsolete once glibc increases the minimum
+   required version of GNU 'binutils' to 2.28 or later.  */
+#define HAVE_AARCH64_SVE_ASM 0
+
 /* ARC big endian ABI */
 #undef HAVE_ARC_BE
 
diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
index 83c3a23e44..4c1fac49f3 100644
--- a/sysdeps/aarch64/configure
+++ b/sysdeps/aarch64/configure
@@ -304,3 +304,31 @@ fi
 $as_echo "$libc_cv_aarch64_variant_pcs" >&6; }
 config_vars="$config_vars
 aarch64-variant-pcs = $libc_cv_aarch64_variant_pcs"
+
+# Check if asm supports armv8.2-a+sve
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for SVE support in assembler" >&5
+$as_echo_n "checking for SVE support in assembler... " >&6; }
+if ${libc_cv_aarch64_sve_asm+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if { ac_try='${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&5'
+  { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
+  (eval $ac_try) 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; }; then
+  libc_cv_aarch64_sve_asm=yes
+else
+  libc_cv_aarch64_sve_asm=no
+fi
+rm -f conftest*
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libc_cv_aarch64_sve_asm" >&5
+$as_echo "$libc_cv_aarch64_sve_asm" >&6; }
+if test $libc_cv_aarch64_sve_asm = yes; then
+  $as_echo "#define HAVE_AARCH64_SVE_ASM 1" >>confdefs.h
+
+fi

diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
index 66f755078a..3347c13fa1 100644
--- a/sysdeps/aarch64/configure.ac
+++ b/sysdeps/aarch64/configure.ac
@@ -90,3 +90,18 @@ EOF
   fi
   rm -rf conftest.*])
 LIBC_CONFIG_VAR([aarch64-variant-pcs], [$libc_cv_aarch64_variant_pcs])
+
+# Check if asm supports armv8.2-a+sve
+AC_CACHE_CHECK(for SVE support in assembler, libc_cv_aarch64_sve_asm, [dnl
+cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&AS_MESSAGE_LOG_FD); then
+  libc_cv_aarch64_sve_asm=yes
+else
+  libc_cv_aarch64_sve_asm=no
+fi
+rm -f conftest*])
+if test $libc_cv_aarch64_sve_asm = yes; then
+  AC_DEFINE(HAVE_AARCH64_SVE_ASM)
+fi
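The probe can be reproduced by hand outside the build system.  The file
below is a stand-alone illustration (not part of the patch): if it
compiles with '-march=armv8.2-a+sve', the assembler accepts SVE
instructions and HAVE_AARCH64_SVE_ASM would end up defined to 1.

  /* probe.c -- hypothetical stand-alone version of the conftest.s check.
     Build with: ${CC-cc} -c -march=armv8.2-a+sve probe.c  */
  void
  probe (void)
  {
    /* The same single instruction the configure check assembles.  */
    __asm__ volatile ("ptrue p0.b");
  }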
From patchwork Wed May 12 09:27:20 2021
X-Patchwork-Submitter: Naohiro Tamura
X-Patchwork-Id: 43387
From: Naohiro Tamura
To: libc-alpha@sourceware.org
Subject: [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI
Date: Wed, 12 May 2021 09:27:20 +0000
Message-Id: <20210512092720.901129-1-naohirot@fujitsu.com>
In-Reply-To: <20210512092308.900998-1-naohirot@fujitsu.com>
References: <20210512092308.900998-1-naohirot@fujitsu.com>

From: Naohiro Tamura

This patch defines the BTI_C and BTI_J macros conditionally, for
performance.  If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are defined
as the HINT instructions for ARMv8.5 BTI (Branch Target Identification).
If HAVE_AARCH64_BTI is false, both are defined as NOP.
---
 sysdeps/aarch64/sysdep.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/sysdeps/aarch64/sysdep.h b/sysdeps/aarch64/sysdep.h
index 90acca4e42..b936e29cbd 100644
--- a/sysdeps/aarch64/sysdep.h
+++ b/sysdeps/aarch64/sysdep.h
@@ -62,8 +62,13 @@ strip_pac (void *p)
 #define ASM_SIZE_DIRECTIVE(name) .size name,.-name
 
 /* Branch Target Identification support.  */
-#define BTI_C		hint	34
-#define BTI_J		hint	36
+#if HAVE_AARCH64_BTI
+# define BTI_C		hint	34
+# define BTI_J		hint	36
+#else
+# define BTI_C		nop
+# define BTI_J		nop
+#endif
 
 /* Return address signing support (pac-ret).  */
 #define PACIASP		hint	25
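Background for the encodings (a note, not from the patch): BTI
instructions are allocated in the NOP-compatible HINT space, so "hint 34"
("bti c") already executes as a NOP on cores without BTI; substituting a
plain "nop" when HAVE_AARCH64_BTI is off keeps the unneeded hint out of
hot entry points.  A hypothetical use site, sketched in C with inline
assembly:

  /* Hypothetical sketch (not glibc code) of consuming such a macro pair.
     "hint 34" encodes "bti c", the landing pad for indirect calls.  */
  #if HAVE_AARCH64_BTI
  # define BTI_C_ASM "hint 34"   /* bti c */
  #else
  # define BTI_C_ASM "nop"
  #endif

  void
  entry_point (void)
  {
    __asm__ volatile (BTI_C_ASM);
  }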
From patchwork Wed May 12 09:28:09 2021
X-Patchwork-Submitter: Naohiro Tamura
X-Patchwork-Id: 43388
From: Naohiro Tamura
To: libc-alpha@sourceware.org
Subject: [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX
Date: Wed, 12 May 2021 09:28:09 +0000
Message-Id: <20210512092809.901182-1-naohirot@fujitsu.com>
In-Reply-To: <20210512092308.900998-1-naohirot@fujitsu.com>
References: <20210512092308.900998-1-naohirot@fujitsu.com>

From: Naohiro Tamura

This patch optimizes the performance of memcpy/memmove for A64FX [1],
which implements ARMv8-A SVE and has a 64KB L1 cache per core and an
8MB L2 cache per NUMA node.

The optimization makes use of the Scalable Vector Registers with
several techniques such as loop unrolling, memory access alignment,
cache zero fill, and software pipelining.

The SVE assembler code for memcpy/memmove is implemented as Vector
Length Agnostic code, so in principle it can run on any SoC that
supports the ARMv8-A SVE standard.

We confirmed that all test cases pass by running 'make check' and
'make xcheck' not only on A64FX but also on ThunderX2.  We also
confirmed by running 'make bench' that the SVE 512-bit vector register
performance is roughly 4 times better than Advanced SIMD 128-bit
registers and 8 times better than scalar 64-bit registers.

[1] https://github.com/fujitsu/A64FX
---
 manual/tunables.texi                        |   3 +-
 sysdeps/aarch64/multiarch/Makefile          |   2 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   8 +-
 sysdeps/aarch64/multiarch/init-arch.h       |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c          |  12 +-
 sysdeps/aarch64/multiarch/memcpy_a64fx.S    | 405 ++++++++++++++++++
 sysdeps/aarch64/multiarch/memmove.c         |  12 +-
 .../unix/sysv/linux/aarch64/cpu-features.c  |   4 +
 .../unix/sysv/linux/aarch64/cpu-features.h  |   4 +
 9 files changed, 446 insertions(+), 8 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S
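To make the Vector Length Agnostic idea concrete before the assembly,
here is a minimal C sketch using the ACLE SVE intrinsics (an
illustration only, assuming a compiler invoked with
-march=armv8.2-a+sve; the patch implements this core loop in
hand-scheduled assembly with the techniques listed above):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Minimal vector-length-agnostic copy loop: the predicate produced by
     svwhilelt masks off the tail, and svcntb () is the runtime vector
     length in bytes (64 on A64FX), so the same binary runs at any VL.  */
  void
  sve_memcpy_sketch (uint8_t *dst, const uint8_t *src, size_t n)
  {
    for (uint64_t i = 0; i < n; i += svcntb ())
      {
        svbool_t pg = svwhilelt_b8_u64 (i, n);
        svst1_u8 (pg, dst + i, svld1_u8 (pg, src + i));
      }
  }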
diff --git a/manual/tunables.texi b/manual/tunables.texi
index 6de647b426..fe7c1313cc 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -454,7 +454,8 @@ This tunable is specific to powerpc, powerpc64 and powerpc64le.
 The @code{glibc.cpu.name=xxx} tunable allows the user to tell @theglibc{} to
 assume that the CPU is @code{xxx} where xxx may have one of these values:
 @code{generic}, @code{falkor}, @code{thunderxt88}, @code{thunderx2t99},
-@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}.
+@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng},
+@code{a64fx}.
 
 This tunable is specific to aarch64.
 @end deftp

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index dc3efffb36..04c3f17121 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
-		   memcpy_falkor \
+		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif

diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 99a8c68aac..911393565c 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -25,7 +25,7 @@
 #include <stdio.h>
 
 /* Maximum number of IFUNC implementations.  */
-#define MAX_IFUNC	4
+#define MAX_IFUNC	7
 
 size_t
 __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
@@ -43,12 +43,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_simd)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memcpy, sve, __memcpy_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
   IFUNC_IMPL (i, name, memmove,
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memmove, !bti, __memmove_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_simd)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memmove, sve, __memmove_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
   IFUNC_IMPL (i, name, memset,
 	      /* Enable this on non-falkor processors too so that other cores

diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h
index a167699e74..6d92c1bcff 100644
--- a/sysdeps/aarch64/multiarch/init-arch.h
+++ b/sysdeps/aarch64/multiarch/init-arch.h
@@ -33,4 +33,6 @@
   bool __attribute__((unused)) bti =					\
     HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti;		\
   bool __attribute__((unused)) mte =					\
-    MTE_ENABLED ();
+    MTE_ENABLED ();							\
+  bool __attribute__((unused)) sve =					\
+    GLRO(dl_aarch64_cpu_features).sve;

diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
index 0e0a5cbcfb..d90ee51ffc 100644
--- a/sysdeps/aarch64/multiarch/memcpy.c
+++ b/sysdeps/aarch64/multiarch/memcpy.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
+#if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memcpy,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memcpy,
 		   : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 		      || IS_NEOVERSE_V1 (midr)
 		      ? __memcpy_simd
-		      : __memcpy_generic)))));
-
+#if HAVE_AARCH64_SVE_ASM
+		      : (IS_A64FX (midr)
+			 ? __memcpy_a64fx
+			 : __memcpy_generic))))));
+#else
+		      : __memcpy_generic)))));
+#endif
 # undef memcpy
 strong_alias (__libc_memcpy, memcpy);
 #endif
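The 'sve' flag tested above is populated from the kernel's HWCAP bits
(see the cpu-features.c hunk later in this patch).  Outside glibc the
same runtime check can be written as follows (a sketch; HWCAP_SVE is
bit 22, the same value hard-coded in the vltest.py helper of patch 5):

  #include <stdio.h>
  #include <sys/auxv.h>      /* getauxval, AT_HWCAP */

  #ifndef HWCAP_SVE
  # define HWCAP_SVE (1 << 22)  /* AArch64 bit, normally from <asm/hwcap.h> */
  #endif

  int
  main (void)
  {
    /* Mirrors cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE.  */
    int have_sve = (getauxval (AT_HWCAP) & HWCAP_SVE) != 0;
    printf ("SVE %s\n", have_sve ? "supported" : "not supported");
    return 0;
  }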
diff --git a/sysdeps/aarch64/multiarch/memcpy_a64fx.S b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
new file mode 100644
index 0000000000..e28afd708f
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
@@ -0,0 +1,405 @@
+/* Optimized memcpy for Fujitsu A64FX processor.
+   Copyright (C) 2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+#if HAVE_AARCH64_SVE_ASM
+#if IS_IN (libc)
+# define MEMCPY __memcpy_a64fx
+# define MEMMOVE __memmove_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L2_SIZE		(8*1024*1024)/2	// L2 8MB/2
+#define CACHE_LINE_SIZE	256
+#define ZF_DIST		(CACHE_LINE_SIZE * 21)	// Zerofill distance
+#define dest		x0
+#define src		x1
+#define n		x2	// size
+#define tmp1		x3
+#define tmp2		x4
+#define tmp3		x5
+#define rest		x6
+#define dest_ptr	x7
+#define src_ptr		x8
+#define vector_length	x9
+#define cl_remainder	x10	// CACHE_LINE_SIZE remainder
+
+	.arch armv8.2-a+sve
+
+	.macro dc_zva times
+	dc	zva, tmp1
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	.if \times-1
+	dc_zva "(\times-1)"
+	.endif
+	.endm
+
+	.macro ld1b_unroll8
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p0/z, [src_ptr, #1, mul vl]
+	ld1b	z2.b, p0/z, [src_ptr, #2, mul vl]
+	ld1b	z3.b, p0/z, [src_ptr, #3, mul vl]
+	ld1b	z4.b, p0/z, [src_ptr, #4, mul vl]
+	ld1b	z5.b, p0/z, [src_ptr, #5, mul vl]
+	ld1b	z6.b, p0/z, [src_ptr, #6, mul vl]
+	ld1b	z7.b, p0/z, [src_ptr, #7, mul vl]
+	.endm
+
+	.macro stld1b_unroll4a
+	st1b	z0.b, p0,   [dest_ptr, #0, mul vl]
+	st1b	z1.b, p0,   [dest_ptr, #1, mul vl]
+	ld1b	z0.b, p0/z, [src_ptr,  #0, mul vl]
+	ld1b	z1.b, p0/z, [src_ptr,  #1, mul vl]
+	st1b	z2.b, p0,   [dest_ptr, #2, mul vl]
+	st1b	z3.b, p0,   [dest_ptr, #3, mul vl]
+	ld1b	z2.b, p0/z, [src_ptr,  #2, mul vl]
+	ld1b	z3.b, p0/z, [src_ptr,  #3, mul vl]
+	.endm
+
+	.macro stld1b_unroll4b
+	st1b	z4.b, p0,   [dest_ptr, #4, mul vl]
+	st1b	z5.b, p0,   [dest_ptr, #5, mul vl]
+	ld1b	z4.b, p0/z, [src_ptr,  #4, mul vl]
+	ld1b	z5.b, p0/z, [src_ptr,  #5, mul vl]
+	st1b	z6.b, p0,   [dest_ptr, #6, mul vl]
+	st1b	z7.b, p0,   [dest_ptr, #7, mul vl]
+	ld1b	z6.b, p0/z, [src_ptr,  #6, mul vl]
+	ld1b	z7.b, p0/z, [src_ptr,  #7, mul vl]
+	.endm
+
+	.macro stld1b_unroll8
+	stld1b_unroll4a
+	stld1b_unroll4b
+	.endm
+
+	.macro st1b_unroll8
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p0, [dest_ptr, #1, mul vl]
+	st1b	z2.b, p0, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p0, [dest_ptr, #3, mul vl]
+	st1b	z4.b, p0, [dest_ptr, #4, mul vl]
+	st1b	z5.b, p0, [dest_ptr, #5, mul vl]
+	st1b	z6.b, p0, [dest_ptr, #6, mul vl]
+	st1b	z7.b, p0, [dest_ptr, #7, mul vl]
+	.endm
+
+	.macro shortcut_for_small_size exit
+	// if rest <= vector_length * 2
+	whilelo	p0.b, xzr, n
+	whilelo	p1.b, vector_length, n
+	b.last	1f
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	ret
+1:	// if rest > vector_length * 8
+	cmp	n, vector_length, lsl 3	// vector_length * 8
+	b.hi	\exit
+	// if rest <= vector_length * 4
+	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, n
+	incb	tmp1
+	whilelo	p3.b, tmp1, n
+	b.last	1f
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	ld1b	z2.b, p2/z, [src, #2, mul vl]
+	ld1b	z3.b, p3/z, [src, #3, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z2.b, p2, [dest, #2, mul vl]
+	st1b	z3.b, p3, [dest, #3, mul vl]
+	ret
+1:	// if rest <= vector_length * 8
+	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, n
+	incb	tmp1
+	whilelo	p5.b, tmp1, n
+	b.last	1f
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	ld1b	z2.b, p2/z, [src, #2, mul vl]
+	ld1b	z3.b, p3/z, [src, #3, mul vl]
+	ld1b	z4.b, p4/z, [src, #4, mul vl]
+	ld1b	z5.b, p5/z, [src, #5, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z2.b, p2, [dest, #2, mul vl]
+	st1b	z3.b, p3, [dest, #3, mul vl]
+	st1b	z4.b, p4, [dest, #4, mul vl]
+	st1b	z5.b, p5, [dest, #5, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	incb	tmp1			// vector_length * 5
+	incb	tmp1			// vector_length * 6
+	whilelo	p6.b, tmp1, n
+	incb	tmp1
+	whilelo	p7.b, tmp1, n
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	ld1b	z2.b, p2/z, [src, #2, mul vl]
+	ld1b	z3.b, p3/z, [src, #3, mul vl]
+	ld1b	z4.b, p4/z, [src, #4, mul vl]
+	ld1b	z5.b, p5/z, [src, #5, mul vl]
+	ld1b	z6.b, p6/z, [src, #6, mul vl]
+	ld1b	z7.b, p7/z, [src, #7, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z2.b, p2, [dest, #2, mul vl]
+	st1b	z3.b, p3, [dest, #3, mul vl]
+	st1b	z4.b, p4, [dest, #4, mul vl]
+	st1b	z5.b, p5, [dest, #5, mul vl]
+	st1b	z6.b, p6, [dest, #6, mul vl]
+	st1b	z7.b, p7, [dest, #7, mul vl]
+	ret
+	.endm
+
+ENTRY (MEMCPY)
+
+	PTR_ARG (0)
+	PTR_ARG (1)
+	SIZE_ARG (2)
+
+L(memcpy):
+	cntb	vector_length
+	// shortcut for less than vector_length * 8
+	// gives a free ptrue to p0.b for n >= vector_length
+	shortcut_for_small_size L(vl_agnostic)
+	// end of shortcut
+
+L(vl_agnostic): // VL Agnostic
+	mov	rest, n
+	mov	dest_ptr, dest
+	mov	src_ptr, src
+	// if rest >= L2_SIZE && vector_length == 64 then L(L2)
+	mov	tmp1, 64
+	cmp	rest, L2_SIZE
+	ccmp	vector_length, tmp1, 0, cs
+	b.eq	L(L2)
+
+L(unroll8): // unrolling and software pipeline
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	.p2align 3
+	cmp	rest, tmp1
+	b.cc	L(last)
+	ld1b_unroll8
+	add	src_ptr, src_ptr, tmp1
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.cc	2f
+	.p2align 3
+1:	stld1b_unroll8
+	add	dest_ptr, dest_ptr, tmp1
+	add	src_ptr, src_ptr, tmp1
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.ge	1b
+2:	st1b_unroll8
+	add	dest_ptr, dest_ptr, tmp1
+
+	.p2align 3
+L(last):
+	whilelo	p0.b, xzr, rest
+	whilelo	p1.b, vector_length, rest
+	b.last	1f
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, rest
+	incb	tmp1
+	whilelo	p3.b, tmp1, rest
+	b.last	1f
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #2, mul vl]
+	ld1b	z3.b, p3/z, [src_ptr, #3, mul vl]
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, rest
+	incb	tmp1
+	whilelo	p5.b, tmp1, rest
+	incb	tmp1
+	whilelo	p6.b, tmp1, rest
+	incb	tmp1
+	whilelo	p7.b, tmp1, rest
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #2, mul vl]
+	ld1b	z3.b, p3/z, [src_ptr, #3, mul vl]
+	ld1b	z4.b, p4/z, [src_ptr, #4, mul vl]
+	ld1b	z5.b, p5/z, [src_ptr, #5, mul vl]
+	ld1b	z6.b, p6/z, [src_ptr, #6, mul vl]
+	ld1b	z7.b, p7/z, [src_ptr, #7, mul vl]
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
+	st1b	z4.b, p4, [dest_ptr, #4, mul vl]
+	st1b	z5.b, p5, [dest_ptr, #5, mul vl]
+	st1b	z6.b, p6, [dest_ptr, #6, mul vl]
+	st1b	z7.b, p7, [dest_ptr, #7, mul vl]
+	ret
+
+L(L2):
+	// align dest address at CACHE_LINE_SIZE byte boundary
+	mov	tmp1, CACHE_LINE_SIZE
+	ands	tmp2, dest_ptr, CACHE_LINE_SIZE - 1
+	// if cl_remainder == 0
+	b.eq	L(L2_dc_zva)
+	sub	cl_remainder, tmp1, tmp2
+	// process remainder until the first CACHE_LINE_SIZE boundary
+	whilelo	p1.b, xzr, cl_remainder	// keep p0.b all true
+	whilelo	p2.b, vector_length, cl_remainder
+	b.last	1f
+	ld1b	z1.b, p1/z, [src_ptr, #0, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #1, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #0, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #1, mul vl]
+	b	2f
+1:	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p3.b, tmp1, cl_remainder
+	incb	tmp1
+	whilelo	p4.b, tmp1, cl_remainder
+	ld1b	z1.b, p1/z, [src_ptr, #0, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #1, mul vl]
+	ld1b	z3.b, p3/z, [src_ptr, #2, mul vl]
+	ld1b	z4.b, p4/z, [src_ptr, #3, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #0, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #1, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #2, mul vl]
+	st1b	z4.b, p4, [dest_ptr, #3, mul vl]
+2:	add	dest_ptr, dest_ptr, cl_remainder
+	add	src_ptr, src_ptr, cl_remainder
+	sub	rest, rest, cl_remainder
+
+L(L2_dc_zva):
+	// zero fill
+	and	tmp1, dest, 0xffffffffffffff
+	and	tmp2, src, 0xffffffffffffff
+	subs	tmp1, tmp1, tmp2	// diff
+	b.ge	1f
+	neg	tmp1, tmp1
+1:	mov	tmp3, ZF_DIST + CACHE_LINE_SIZE * 2
+	cmp	tmp1, tmp3
+	b.lo	L(unroll8)
+	mov	tmp1, dest_ptr
+	dc_zva	(ZF_DIST / CACHE_LINE_SIZE) - 1
+	// unroll
+	ld1b_unroll8	// this line has to be after "b.lo L(unroll8)"
+	add	src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	mov	tmp1, ZF_DIST
+	.p2align 3
+1:	stld1b_unroll4a
+	add	tmp2, dest_ptr, tmp1	// dest_ptr + ZF_DIST
+	dc	zva, tmp2
+	stld1b_unroll4b
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2
+	add	dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+	add	src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	cmp	rest, tmp3	// ZF_DIST + CACHE_LINE_SIZE * 2
+	b.ge	1b
+	st1b_unroll8
+	add	dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+	b	L(unroll8)
+
+END (MEMCPY)
+libc_hidden_builtin_def (MEMCPY)
+
+
+ENTRY (MEMMOVE)
+
+	PTR_ARG (0)
+	PTR_ARG (1)
+	SIZE_ARG (2)
+
+	// remove tag address
+	// dest has to be immutable because it is the return value
+	// src has to be immutable because it is used in L(bwd_last)
+	and	tmp2, dest, 0xffffffffffffff	// save dest_notag into tmp2
+	and	tmp3, src, 0xffffffffffffff	// save src_notag into tmp3
+	cmp	n, 0
+	ccmp	tmp2, tmp3, 4, ne
+	b.ne	1f
+	ret
+1:	cntb	vector_length
+	// shortcut for less than vector_length * 8
+	// gives a free ptrue to p0.b for n >= vector_length
+	// tmp2 and tmp3 should not be used in this macro to keep
+	// notag addresses
+	shortcut_for_small_size L(dispatch)
+	// end of shortcut
+
+L(dispatch):
+	// tmp2 = dest_notag, tmp3 = src_notag
+	// diff = dest_notag - src_notag
+	sub	tmp1, tmp2, tmp3
+	// if diff <= 0 || diff >= n then memcpy
+	cmp	tmp1, 0
+	ccmp	tmp1, n, 2, gt
+	b.cs	L(vl_agnostic)
+
+L(bwd_start):
+	mov	rest, n
+	add	dest_ptr, dest, n	// dest_end
+	add	src_ptr, src, n		// src_end
+
+L(bwd_unroll8): // unrolling and software pipeline
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	.p2align 3
+	cmp	rest, tmp1
+	b.cc	L(bwd_last)
+	sub	src_ptr, src_ptr, tmp1
+	ld1b_unroll8
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.cc	2f
+	.p2align 3
+1:	sub	src_ptr, src_ptr, tmp1
+	sub	dest_ptr, dest_ptr, tmp1
+	stld1b_unroll8
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.ge	1b
+2:	sub	dest_ptr, dest_ptr, tmp1
+	st1b_unroll8
+
+L(bwd_last):
+	mov	dest_ptr, dest
+	mov	src_ptr, src
+	b	L(last)
+
+END (MEMMOVE)
+libc_hidden_builtin_def (MEMMOVE)
+#endif /* IS_IN (libc) */
+#endif /* HAVE_AARCH64_SVE_ASM */
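A note on the zero-fill trick in the L(L2) path (a sketch, not part of
the patch): "dc zva" zeroes a whole block without reading it first, so
the store stream avoids fetching destination lines into the cache.  The
block size is published in DCZID_EL0; the code above assumes the A64FX
value, where it matches the 256-byte cache line.  In C:

  #include <stdint.h>

  /* Illustration only: the "dc zva" primitive behind the L(L2) path.  */
  static inline uint64_t
  dc_zva_block_size (void)
  {
    uint64_t dczid;
    __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
    /* BS field [3:0] is the log2 of the block size in 4-byte words.  */
    return 4ULL << (dczid & 0xf);
  }

  static inline void
  dc_zva (void *p)
  {
    /* Zero one whole block containing p without reading it.  */
    __asm__ volatile ("dc zva, %0" : : "r" (p) : "memory");
  }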
diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c
index 12d77818a9..be2d35a251 100644
--- a/sysdeps/aarch64/multiarch/memmove.c
+++ b/sysdeps/aarch64/multiarch/memmove.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
+#if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memmove,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memmove,
 		   : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 		      || IS_NEOVERSE_V1 (midr)
 		      ? __memmove_simd
-		      : __memmove_generic)))));
-
+#if HAVE_AARCH64_SVE_ASM
+		      : (IS_A64FX (midr)
+			 ? __memmove_a64fx
+			 : __memmove_generic))))));
+#else
+		      : __memmove_generic)))));
+#endif
 # undef memmove
 strong_alias (__libc_memmove, memmove);
 #endif

diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index db6aa3516c..6206a2f618 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -46,6 +46,7 @@ static struct cpu_list cpu_list[] = {
       {"ares",		 0x411FD0C0},
       {"emag",		 0x503F0001},
       {"kunpeng920",	 0x481FD010},
+      {"a64fx",		 0x460F0010},
       {"generic",	 0x0}
 };
 
@@ -116,4 +117,7 @@ init_cpu_features (struct cpu_features *cpu_features)
 	     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC | MTE_ALLOWED_TAGS),
 	     0, 0, 0);
 #endif
+
+  /* Check if SVE is supported.  */
+  cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;
 }

diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 3b9bfed134..2b322e5414 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -65,6 +65,9 @@
 #define IS_KUNPENG920(midr) (MIDR_IMPLEMENTOR(midr) == 'H'	\
                         && MIDR_PARTNUM(midr) == 0xd01)
 
+#define IS_A64FX(midr) (MIDR_IMPLEMENTOR(midr) == 'F'		\
+			&& MIDR_PARTNUM(midr) == 0x001)
+
 struct cpu_features
 {
   uint64_t midr_el1;
@@ -72,6 +75,7 @@ struct cpu_features
   bool bti;
   /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */
   uint8_t mte_state;
+  bool sve;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H */

From patchwork Wed May 12 09:28:42 2021
X-Patchwork-Submitter: Naohiro Tamura
X-Patchwork-Id: 43389
From: Naohiro Tamura
To: libc-alpha@sourceware.org
Subject: [PATCH v2 4/6] aarch64: Added optimized memset for A64FX
Date: Wed, 12 May 2021 09:28:42 +0000
Message-Id: <20210512092842.901235-1-naohirot@fujitsu.com>
In-Reply-To: <20210512092308.900998-1-naohirot@fujitsu.com>
References: <20210512092308.900998-1-naohirot@fujitsu.com>

From: Naohiro Tamura

This patch optimizes the performance of memset for A64FX [1], which
implements ARMv8-A SVE and has a 64KB L1 cache per core and an 8MB L2
cache per NUMA node.

The optimization makes use of the Scalable Vector Registers with
several techniques such as loop unrolling, memory access alignment,
cache zero fill and prefetch.

The SVE assembler code for memset is implemented as Vector Length
Agnostic code, so in principle it can run on any SoC that supports the
ARMv8-A SVE standard.

We confirmed that all test cases pass by running 'make check' and
'make xcheck' not only on A64FX but also on ThunderX2.  We also
confirmed by running 'make bench' that the SVE 512-bit vector register
performance is roughly 4 times better than Advanced SIMD 128-bit
registers and 8 times better than scalar 64-bit registers.

[1] https://github.com/fujitsu/A64FX
---
 sysdeps/aarch64/multiarch/Makefile          |   1 +
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   5 +-
 sysdeps/aarch64/multiarch/memset.c          |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S    | 268 ++++++++++++++++++++
 4 files changed, 283 insertions(+), 2 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S
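As with memcpy, the core idea can be sketched with ACLE SVE intrinsics
(an illustration only, assuming a -march=armv8.2-a+sve toolchain; the
patch itself uses hand-scheduled assembly with the unrolling,
alignment, zero-fill and prefetch paths described above):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Vector-length-agnostic core of an SVE memset: svdup broadcasts the
     fill byte and the predicate from svwhilelt masks off the tail, so
     no scalar cleanup loop is needed.  */
  void
  sve_memset_sketch (uint8_t *dst, uint8_t c, size_t n)
  {
    svuint8_t v = svdup_n_u8 (c);
    for (uint64_t i = 0; i < n; i += svcntb ())
      svst1_u8 (svwhilelt_b8_u64 (i, n), dst + i, v);
  }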
diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 04c3f17121..7500cf1e93 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -2,6 +2,7 @@ ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
 		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
+		   memset_a64fx \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif

diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 911393565c..4e1a641d9f 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -37,7 +37,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 
   INIT_ARCH ();
 
-  /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c.  */
+  /* Support sysdeps/aarch64/multiarch/memcpy.c, memmove.c and memset.c.  */
   IFUNC_IMPL (i, name, memcpy,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
@@ -62,6 +62,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_falkor)
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_emag)
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_kunpeng)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memset, sve, __memset_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_generic))
   IFUNC_IMPL (i, name, memchr,
 	      IFUNC_IMPL_ADD (array, i, memchr, !mte, __memchr_nosimd)

diff --git a/sysdeps/aarch64/multiarch/memset.c b/sysdeps/aarch64/multiarch/memset.c
index 28d3926bc2..48a59574dd 100644
--- a/sysdeps/aarch64/multiarch/memset.c
+++ b/sysdeps/aarch64/multiarch/memset.c
@@ -31,6 +31,9 @@ extern __typeof (__redirect_memset) __libc_memset;
 extern __typeof (__redirect_memset) __memset_falkor attribute_hidden;
 extern __typeof (__redirect_memset) __memset_emag attribute_hidden;
 extern __typeof (__redirect_memset) __memset_kunpeng attribute_hidden;
+#if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memset) __memset_a64fx attribute_hidden;
+#endif
 extern __typeof (__redirect_memset) __memset_generic attribute_hidden;
 
 libc_ifunc (__libc_memset,
@@ -40,7 +43,13 @@ libc_ifunc (__libc_memset,
 	     ? __memset_falkor
 	     : (IS_EMAG (midr) && zva_size == 64
 	       ? __memset_emag
-	       : __memset_generic)));
+#if HAVE_AARCH64_SVE_ASM
+	       : (IS_A64FX (midr)
+		  ? __memset_a64fx
+		  : __memset_generic))));
+#else
+	       : __memset_generic)));
+#endif
 
 # undef memset
 strong_alias (__libc_memset, memset);

diff --git a/sysdeps/aarch64/multiarch/memset_a64fx.S b/sysdeps/aarch64/multiarch/memset_a64fx.S
new file mode 100644
index 0000000000..9bd58cab6d
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memset_a64fx.S
@@ -0,0 +1,268 @@
+/* Optimized memset for Fujitsu A64FX processor.
+   Copyright (C) 2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include "memset-reg.h"
+
+#if HAVE_AARCH64_SVE_ASM
+#if IS_IN (libc)
+# define MEMSET __memset_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L1_SIZE		(64*1024)	// L1 64KB
+#define L2_SIZE		(8*1024*1024)	// L2 8MB
+#define CACHE_LINE_SIZE	256
+#define PF_DIST_L1	(CACHE_LINE_SIZE * 16)	// Prefetch distance L1
+#define ZF_DIST		(CACHE_LINE_SIZE * 21)	// Zerofill distance
+#define rest		x8
+#define vector_length	x9
+#define vl_remainder	x10	// vector_length remainder
+#define cl_remainder	x11	// CACHE_LINE_SIZE remainder
+
+	.arch armv8.2-a+sve
+
+	.macro dc_zva times
+	dc	zva, tmp1
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	.if \times-1
+	dc_zva "(\times-1)"
+	.endif
+	.endm
+
+	.macro st1b_unroll first=0, last=7
+	st1b	z0.b, p0, [dst, #\first, mul vl]
+	.if \last-\first
+	st1b_unroll "(\first+1)", \last
+	.endif
+	.endm
+
+	.macro shortcut_for_small_size exit
+	// if rest <= vector_length * 2
+	whilelo	p0.b, xzr, count
+	whilelo	p1.b, vector_length, count
+	b.last	1f
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	ret
+1:	// if rest > vector_length * 8
+	cmp	count, vector_length, lsl 3	// vector_length * 8
+	b.hi	\exit
+	// if rest <= vector_length * 4
+	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, count
+	incb	tmp1
+	whilelo	p3.b, tmp1, count
+	b.last	1f
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	st1b	z0.b, p2, [dstin, #2, mul vl]
+	st1b	z0.b, p3, [dstin, #3, mul vl]
+	ret
+1:	// if rest <= vector_length * 8
+	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, count
+	incb	tmp1
+	whilelo	p5.b, tmp1, count
+	b.last	1f
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	st1b	z0.b, p2, [dstin, #2, mul vl]
+	st1b	z0.b, p3, [dstin, #3, mul vl]
+	st1b	z0.b, p4, [dstin, #4, mul vl]
+	st1b	z0.b, p5, [dstin, #5, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	incb	tmp1			// vector_length * 5
+	incb	tmp1			// vector_length * 6
+	whilelo	p6.b, tmp1, count
+	incb	tmp1
+	whilelo	p7.b, tmp1, count
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	st1b	z0.b, p2, [dstin, #2, mul vl]
+	st1b	z0.b, p3, [dstin, #3, mul vl]
+	st1b	z0.b, p4, [dstin, #4, mul vl]
+	st1b	z0.b, p5, [dstin, #5, mul vl]
+	st1b	z0.b, p6, [dstin, #6, mul vl]
+	st1b	z0.b, p7, [dstin, #7, mul vl]
+	ret
+	.endm
+
+ENTRY (MEMSET)
+
+	PTR_ARG (0)
+	SIZE_ARG (2)
+
+	cbnz	count, 1f
+	ret
+1:	dup	z0.b, valw
+	cntb	vector_length
+	// shortcut for less than vector_length * 8
+	// gives a free ptrue to p0.b for n >= vector_length
+	shortcut_for_small_size L(vl_agnostic)
+	// end of shortcut
+
+L(vl_agnostic): // VL Agnostic
+	mov	rest, count
+	mov	dst, dstin
+	add	dstend, dstin, count
+	// if rest >= L2_SIZE && vector_length == 64 then L(L2)
+	mov	tmp1, 64
+	cmp	rest, L2_SIZE
+	ccmp	vector_length, tmp1, 0, cs
+	b.eq	L(L2)
+	// if rest >= L1_SIZE && vector_length == 64 then L(L1_prefetch)
+	cmp	rest, L1_SIZE
+	ccmp	vector_length, tmp1, 0, cs
+	b.eq	L(L1_prefetch)
+
+L(unroll32):
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	lsl	tmp2, vector_length, 5	// vector_length * 32
+	.p2align 3
+1:	cmp	rest, tmp2
+	b.cc	L(unroll8)
+	st1b_unroll
+	add	dst, dst, tmp1
+	st1b_unroll
+	add	dst, dst, tmp1
+	st1b_unroll
+	add	dst, dst, tmp1
+	st1b_unroll
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp2
+	b	1b
+
+L(unroll8):
+	lsl	tmp1, vector_length, 3
+	.p2align 3
+1:	cmp	rest, tmp1
+	b.cc	L(last)
+	st1b_unroll
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp1
+	b	1b
+
+L(last):
+	whilelo	p0.b, xzr, rest
+	whilelo	p1.b, vector_length, rest
+	b.last	1f
+	st1b	z0.b, p0, [dst, #0, mul vl]
+	st1b	z0.b, p1, [dst, #1, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, rest
+	incb	tmp1
+	whilelo	p3.b, tmp1, rest
+	b.last	1f
+	st1b	z0.b, p0, [dst, #0, mul vl]
+	st1b	z0.b, p1, [dst, #1, mul vl]
+	st1b	z0.b, p2, [dst, #2, mul vl]
+	st1b	z0.b, p3, [dst, #3, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, rest
+	incb	tmp1
+	whilelo	p5.b, tmp1, rest
+	incb	tmp1
+	whilelo	p6.b, tmp1, rest
+	incb	tmp1
+	whilelo	p7.b, tmp1, rest
+	st1b	z0.b, p0, [dst, #0, mul vl]
+	st1b	z0.b, p1, [dst, #1, mul vl]
+	st1b	z0.b, p2, [dst, #2, mul vl]
+	st1b	z0.b, p3, [dst, #3, mul vl]
+	st1b	z0.b, p4, [dst, #4, mul vl]
+	st1b	z0.b, p5, [dst, #5, mul vl]
+	st1b	z0.b, p6, [dst, #6, mul vl]
+	st1b	z0.b, p7, [dst, #7, mul vl]
+	ret
+
+L(L1_prefetch): // if rest >= L1_SIZE
+	.p2align 3
+1:	st1b_unroll 0, 3
+	prfm	pstl1keep, [dst, PF_DIST_L1]
+	st1b_unroll 4, 7
+	prfm	pstl1keep, [dst, PF_DIST_L1 + CACHE_LINE_SIZE]
+	add	dst, dst, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	cmp	rest, L1_SIZE
+	b.ge	1b
+	cbnz	rest, L(unroll32)
+	ret
+
+L(L2):
+	// align dst address at vector_length byte boundary
+	sub	tmp1, vector_length, 1
+	ands	tmp2, dst, tmp1
+	// if vl_remainder == 0
+	b.eq	1f
+	sub	vl_remainder, vector_length, tmp2
+	// process remainder until the first vector_length boundary
+	whilelt	p2.b, xzr, vl_remainder
+	st1b	z0.b, p2, [dst]
+	add	dst, dst, vl_remainder
+	sub	rest, rest, vl_remainder
+	// align dstin address at CACHE_LINE_SIZE byte boundary
+1:	mov	tmp1, CACHE_LINE_SIZE
+	ands	tmp2, dst, CACHE_LINE_SIZE - 1
+	// if cl_remainder == 0
+	b.eq	L(L2_dc_zva)
+	sub	cl_remainder, tmp1, tmp2
+	// process remainder until the first CACHE_LINE_SIZE boundary
+	mov	tmp1, xzr	// index
+2:	whilelt	p2.b, tmp1, cl_remainder
+	st1b	z0.b, p2, [dst, tmp1]
+	incb	tmp1
+	cmp	tmp1, cl_remainder
+	b.lo	2b
+	add	dst, dst, cl_remainder
+	sub	rest, rest, cl_remainder
+
+L(L2_dc_zva):
+	// zero fill
+	mov	tmp1, dst
+	dc_zva	(ZF_DIST / CACHE_LINE_SIZE) - 1
+	mov	zva_len, ZF_DIST
+	add	tmp1, zva_len, CACHE_LINE_SIZE * 2
+	// unroll
+	.p2align 3
+1:	st1b_unroll 0, 3
+	add	tmp2, dst, zva_len
+	dc	zva, tmp2
+	st1b_unroll 4, 7
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2
+	add	dst, dst, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	cmp	rest, tmp1	// ZF_DIST + CACHE_LINE_SIZE * 2
+	b.ge	1b
+	cbnz	rest, L(unroll8)
+	ret
+
+END (MEMSET)
+libc_hidden_builtin_def (MEMSET)
+
+#endif /* IS_IN (libc) */
+#endif /* HAVE_AARCH64_SVE_ASM */

From patchwork Wed May 12 09:29:22 2021
X-Patchwork-Submitter: Naohiro Tamura
X-Patchwork-Id: 43390
From: Naohiro Tamura
To: libc-alpha@sourceware.org
Subject: [PATCH v2 5/6] scripts: Added Vector Length Set test helper script
Date: Wed, 12 May 2021 09:29:22 +0000
Message-Id: <20210512092922.901289-1-naohirot@fujitsu.com>
In-Reply-To: <20210512092308.900998-1-naohirot@fujitsu.com>
References: <20210512092308.900998-1-naohirot@fujitsu.com>

From: Naohiro Tamura

This patch adds a test helper script that changes the Vector Length for
a child process.  The script can be used as test-wrapper for
'make check'.
Usage examples:

 ubuntu@bionic:~/build$ make check subdirs=string \
 test-wrapper='~/glibc/scripts/vltest.py 16'

 ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
 t=string/test-memcpy

 ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
 string/test-memmove

 ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
 string/test-memset
---
 scripts/vltest.py | 82 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100755 scripts/vltest.py

diff --git a/scripts/vltest.py b/scripts/vltest.py
new file mode 100755
index 0000000000..264dfa449f
--- /dev/null
+++ b/scripts/vltest.py
@@ -0,0 +1,82 @@
+#!/usr/bin/python3
+# Set Scalable Vector Length test helper
+# Copyright (C) 2019-2021 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+"""Set Scalable Vector Length test helper.
+
+Set Scalable Vector Length for child process.
+
+examples:
+
+ubuntu@bionic:~/build$ make check subdirs=string \
+test-wrapper='~/glibc/scripts/vltest.py 16'
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
+t=string/test-memcpy
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
+string/test-memmove
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
+string/test-memset
+"""
+import argparse
+from ctypes import CDLL
+import os
+import sys
+
+EXIT_SUCCESS = 0
+EXIT_FAILURE = 1
+EXIT_UNSUPPORTED = 77
+
+AT_HWCAP = 16
+HWCAP_SVE = (1 << 22)
+
+PR_SVE_GET_VL = 51
+PR_SVE_SET_VL = 50
+PR_SVE_SET_VL_ONEXEC = (1 << 18)
+PR_SVE_VL_INHERIT = (1 << 17)
+PR_SVE_VL_LEN_MASK = 0xffff
+
+def main(args):
+    libc = CDLL("libc.so.6")
+    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
+        print("CPU doesn't support SVE")
+        sys.exit(EXIT_UNSUPPORTED)
+
+    libc.prctl(PR_SVE_SET_VL,
+               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
+    os.execvp(args.args[0], args.args)
+    print("exec system call failure")
+    sys.exit(EXIT_FAILURE)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=
+        "Set Scalable Vector Length test helper",
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    # positional argument
+    parser.add_argument("vl", nargs=1, type=int,
+                        choices=range(16, 257, 16),
+                        help=('vector length '
+                              'which is a multiple of 16, from 16 to 256'))
+    # remainder arguments
+    parser.add_argument('args', nargs=argparse.REMAINDER,
+                        help=('args '
+                              'passed to the child process'))
+    args = parser.parse_args()
+    main(args)
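For reference, the prctl call the script issues looks like this from C
(a sketch; the PR_SVE_* constants mirror the values hard-coded in the
script and normally come from <linux/prctl.h>):

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SVE_SET_VL
  # define PR_SVE_SET_VL        50
  # define PR_SVE_SET_VL_ONEXEC (1 << 18)
  # define PR_SVE_VL_INHERIT    (1 << 17)
  #endif

  int
  main (void)
  {
    /* Request a 16-byte (128-bit) vector length at the next execve and
       let children inherit it -- what vltest.py does before os.execvp.  */
    if (prctl (PR_SVE_SET_VL, 16 | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT) < 0)
      perror ("PR_SVE_SET_VL");
    return 0;
  }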
From patchwork Wed May 12 09:29:54 2021
X-Patchwork-Submitter: Naohiro Tamura
X-Patchwork-Id: 43391
From: Naohiro Tamura
To: libc-alpha@sourceware.org
Subject: [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed
Date: Wed, 12 May 2021 09:29:54 +0000
Message-Id: <20210512092954.901342-1-naohirot@fujitsu.com>
In-Reply-To: <20210512092308.900998-1-naohirot@fujitsu.com>
References: <20210512092308.900998-1-naohirot@fujitsu.com>

From: Naohiro Tamura

This patch fixes an mprotect system call failure in
bench-memcpy-random on AArch64.  The failure happened not only on
A64FX but also on ThunderX2.

It also renames the JSON key from "max-size" to "length" so that
'plot_strings.py' can process 'bench-memcpy-random.out'.
---
 benchtests/bench-memcpy-random.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/benchtests/bench-memcpy-random.c b/benchtests/bench-memcpy-random.c
index 9b62033379..c490b73ed0 100644
--- a/benchtests/bench-memcpy-random.c
+++ b/benchtests/bench-memcpy-random.c
@@ -16,7 +16,7 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */
 
-#define MIN_PAGE_SIZE (512*1024+4096)
+#define MIN_PAGE_SIZE (512*1024+getpagesize())
 #define TEST_MAIN
 #define TEST_NAME "memcpy"
 #include "bench-string.h"
@@ -160,7 +160,7 @@ do_test (json_ctx_t *json_ctx, size_t max_size)
     }
 
   json_element_object_begin (json_ctx);
-  json_attr_uint (json_ctx, "max-size", (double) max_size);
+  json_attr_uint (json_ctx, "length", (double) max_size);
   json_array_begin (json_ctx, "timings");
 
   FOR_EACH_IMPL (impl, 0)
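The underlying issue is that the hard-coded 4096 assumed a 4K kernel
page size, while AArch64 kernels are often configured with 64K pages,
leaving sizes derived from it unaligned for mprotect.  A small
stand-alone sketch of the portable pattern:

  #include <stdio.h>
  #include <unistd.h>

  int
  main (void)
  {
    /* getpagesize () returns 4096 on a 4K-page kernel but 65536 on a
       64K-page AArch64 kernel, so sizes derived from it stay
       page-aligned for mmap/mprotect on either configuration.  */
    long page = getpagesize ();
    size_t min_size = 512 * 1024 + page;
    printf ("page=%ld min_size=%zu\n", page, min_size);
    return 0;
  }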