From patchwork Wed Mar 17 02:33:44 2021
X-Patchwork-Submitter: Naohiro Tamura <naohirot@fujitsu.com>
X-Patchwork-Id: 42661
X-Patchwork-Delegate: szabolcs.nagy@arm.com
From: Naohiro Tamura <naohirot@fujitsu.com>
To: libc-alpha@sourceware.org
Cc: Naohiro Tamura <naohirot@fujitsu.com>
Subject: [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64
Date: Wed, 17 Mar 2021 02:33:44 +0000
Message-Id: <20210317023344.323099-1-naohirot@fujitsu.com>
In-Reply-To: <20210317022849.323046-1-naohirot@fujitsu.com>
References: <20210317022849.323046-1-naohirot@fujitsu.com>

This patch checks whether the assembler supports '-march=armv8.2-a+sve',
so that SVE code can be generated, and defines the HAVE_SVE_ASM_SUPPORT
macro accordingly.
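As an illustration (mine, not part of the patch), this is the kind of
code the new macro is meant to guard; it assumes a compiler that
provides <arm_sve.h>, and in glibc the macro comes from the generated
config.h:

  /* Hedged sketch: code guarded by the new configure result.  */
  #if HAVE_SVE_ASM_SUPPORT
  # include <arm_sve.h>
  # include <stdint.h>

  /* SVE vector length in bytes, e.g. 64 on a 512-bit core like A64FX.  */
  static inline uint64_t
  sve_vl_bytes (void)
  {
    return svcntb ();
  }
  #endif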
---
 config.h.in                  |  3 +++
 sysdeps/aarch64/configure    | 28 ++++++++++++++++++++++++++++
 sysdeps/aarch64/configure.ac | 15 +++++++++++++++
 3 files changed, 46 insertions(+)

diff --git a/config.h.in b/config.h.in
index f21bf04e47..2073816af8 100644
--- a/config.h.in
+++ b/config.h.in
@@ -118,6 +118,9 @@
 /* AArch64 PAC-RET code generation is enabled.  */
 #define HAVE_AARCH64_PAC_RET 0
 
+/* The assembler supports ARMv8.2-A SVE.  */
+#define HAVE_SVE_ASM_SUPPORT 0
+
 /* ARC big endian ABI */
 #undef HAVE_ARC_BE
 
diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
index 83c3a23e44..ac16250f8a 100644
--- a/sysdeps/aarch64/configure
+++ b/sysdeps/aarch64/configure
@@ -304,3 +304,31 @@ fi
 $as_echo "$libc_cv_aarch64_variant_pcs" >&6; }
 config_vars="$config_vars
 aarch64-variant-pcs = $libc_cv_aarch64_variant_pcs"
+
+# Check if the assembler supports armv8.2-a+sve.
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for SVE support in assembler" >&5
+$as_echo_n "checking for SVE support in assembler... " >&6; }
+if ${libc_cv_asm_sve+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if { ac_try='${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&5'
+  { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
+  (eval $ac_try) 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; }; then
+  libc_cv_asm_sve=yes
+else
+  libc_cv_asm_sve=no
+fi
+rm -f conftest*
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libc_cv_asm_sve" >&5
+$as_echo "$libc_cv_asm_sve" >&6; }
+if test $libc_cv_asm_sve = yes; then
+  $as_echo "#define HAVE_SVE_ASM_SUPPORT 1" >>confdefs.h
+
+fi
diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
index 66f755078a..389a0b4e8d 100644
--- a/sysdeps/aarch64/configure.ac
+++ b/sysdeps/aarch64/configure.ac
@@ -90,3 +90,18 @@ EOF
   fi
   rm -rf conftest.*])
 LIBC_CONFIG_VAR([aarch64-variant-pcs], [$libc_cv_aarch64_variant_pcs])
+
+# Check if the assembler supports armv8.2-a+sve.
+AC_CACHE_CHECK(for SVE support in assembler, libc_cv_asm_sve, [dnl
+cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&AS_MESSAGE_LOG_FD); then
+  libc_cv_asm_sve=yes
+else
+  libc_cv_asm_sve=no
+fi
+rm -f conftest*])
+if test $libc_cv_asm_sve = yes; then
+  AC_DEFINE(HAVE_SVE_ASM_SUPPORT)
+fi
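For reference, the probe can also be reproduced from a C translation
unit; a hedged sketch (the conftest.c file and the use of a .arch
directive in place of the -march flag are my illustration, not the
patch's mechanism):

  /* conftest.c: assembles only if the assembler behind $CC accepts SVE
     mnemonics, mirroring "${CC-cc} -c -march=armv8.2-a+sve conftest.s".  */
  __asm__ (".arch armv8.2-a+sve\n"
           "\tptrue p0.b\n");

Compiling it with "cc -c conftest.c" succeeds or fails for the same
reason the conftest.s probe above does.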
From patchwork Wed Mar 17 02:34:17 2021
X-Patchwork-Submitter: Naohiro Tamura <naohirot@fujitsu.com>
X-Patchwork-Id: 42662
X-Patchwork-Delegate: szabolcs.nagy@arm.com
From: Naohiro Tamura <naohirot@fujitsu.com>
To: libc-alpha@sourceware.org
Cc: Naohiro Tamura <naohirot@fujitsu.com>
Subject: [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX
Date: Wed, 17 Mar 2021 02:34:17 +0000
Message-Id: <20210317023417.323152-1-naohirot@fujitsu.com>
In-Reply-To: <20210317022849.323046-1-naohirot@fujitsu.com>
References: <20210317022849.323046-1-naohirot@fujitsu.com>

This patch optimizes the performance of memcpy/memmove for A64FX [1],
which implements ARMv8-A SVE and has a 64 KB L1 cache per core and an
8 MB L2 cache per NUMA node.

The optimization makes use of the Scalable Vector Registers with
several techniques such as loop unrolling, memory access alignment,
cache zero fill, prefetch, and software pipelining.

The SVE assembler code for memcpy/memmove is implemented as Vector
Length Agnostic code, so in principle it can run on any SoC that
supports the ARMv8-A SVE standard (a short C sketch of the
vector-length-agnostic loop follows this message).

We confirmed that all test cases pass by running 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2. We also
confirmed with 'make bench' that the SVE 512-bit vector register
implementation is roughly 4 times faster than the Advanced SIMD
128-bit register implementation and 8 times faster than the scalar
64-bit register implementation.
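To make the Vector Length Agnostic idea concrete, here is a minimal C
sketch using ACLE intrinsics (my illustration, assuming an
<arm_sve.h>-capable compiler and e.g. "gcc -O2 -march=armv8.2-a+sve";
the patch itself is hand-scheduled assembly). WHILELT yields a partial
predicate for the tail, so one loop handles any vector length and any
residue without a scalar byte loop, as the L(unroll1)/L(last) paths do:

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Hedged sketch of the VLA copy loop; the real code additionally
     unrolls, aligns to the cache line, prefetches, and zero-fills
     destination lines with "dc zva".  */
  static void
  sve_copy (uint8_t *dest, const uint8_t *src, size_t n)
  {
    for (uint64_t i = 0; i < n; i += svcntb ())  /* svcntb (): VL in bytes */
      {
        svbool_t pg = svwhilelt_b8_u64 (i, (uint64_t) n);  /* lanes with
                                                              i + lane < n */
        svst1_u8 (pg, dest + i, svld1_u8 (pg, src + i));
      }
  }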
[1] https://github.com/fujitsu/A64FX --- manual/tunables.texi | 3 +- sysdeps/aarch64/multiarch/Makefile | 2 +- sysdeps/aarch64/multiarch/ifunc-impl-list.c | 12 +- sysdeps/aarch64/multiarch/init-arch.h | 4 +- sysdeps/aarch64/multiarch/memcpy.c | 12 +- sysdeps/aarch64/multiarch/memcpy_a64fx.S | 979 ++++++++++++++++++ sysdeps/aarch64/multiarch/memmove.c | 12 +- .../unix/sysv/linux/aarch64/cpu-features.c | 4 + .../unix/sysv/linux/aarch64/cpu-features.h | 4 + 9 files changed, 1024 insertions(+), 8 deletions(-) create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S diff --git a/manual/tunables.texi b/manual/tunables.texi index 1b746c0fa1..81ed5366fc 100644 --- a/manual/tunables.texi +++ b/manual/tunables.texi @@ -453,7 +453,8 @@ This tunable is specific to powerpc, powerpc64 and powerpc64le. The @code{glibc.cpu.name=xxx} tunable allows the user to tell @theglibc{} to assume that the CPU is @code{xxx} where xxx may have one of these values: @code{generic}, @code{falkor}, @code{thunderxt88}, @code{thunderx2t99}, -@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}. +@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}, +@code{a64fx}. This tunable is specific to aarch64. @end deftp diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile index dc3efffb36..04c3f17121 100644 --- a/sysdeps/aarch64/multiarch/Makefile +++ b/sysdeps/aarch64/multiarch/Makefile @@ -1,6 +1,6 @@ ifeq ($(subdir),string) sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \ - memcpy_falkor \ + memcpy_falkor memcpy_a64fx \ memset_generic memset_falkor memset_emag memset_kunpeng \ memchr_generic memchr_nosimd \ strlen_mte strlen_asimd diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c index 99a8c68aac..cb78da9692 100644 --- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c +++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c @@ -25,7 +25,11 @@ #include /* Maximum number of IFUNC implementations. 
*/ -#define MAX_IFUNC 4 +#if HAVE_SVE_ASM_SUPPORT +# define MAX_IFUNC 7 +#else +# define MAX_IFUNC 6 +#endif size_t __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, @@ -43,12 +47,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2) IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor) IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_simd) +#if HAVE_SVE_ASM_SUPPORT + IFUNC_IMPL_ADD (array, i, memcpy, sve, __memcpy_a64fx) +#endif IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic)) IFUNC_IMPL (i, name, memmove, IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx) IFUNC_IMPL_ADD (array, i, memmove, !bti, __memmove_thunderx2) IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor) IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_simd) +#if HAVE_SVE_ASM_SUPPORT + IFUNC_IMPL_ADD (array, i, memmove, sve, __memmove_a64fx) +#endif IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic)) IFUNC_IMPL (i, name, memset, /* Enable this on non-falkor processors too so that other cores diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h index a167699e74..d20e7e1b8e 100644 --- a/sysdeps/aarch64/multiarch/init-arch.h +++ b/sysdeps/aarch64/multiarch/init-arch.h @@ -33,4 +33,6 @@ bool __attribute__((unused)) bti = \ HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti; \ bool __attribute__((unused)) mte = \ - MTE_ENABLED (); + MTE_ENABLED (); \ + unsigned __attribute__((unused)) sve = \ + GLRO(dl_aarch64_cpu_features).sve; diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c index 0e0a5cbcfb..0006f38eb0 100644 --- a/sysdeps/aarch64/multiarch/memcpy.c +++ b/sysdeps/aarch64/multiarch/memcpy.c @@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden; extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden; extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden; extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden; +#if HAVE_SVE_ASM_SUPPORT +extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden; +#endif libc_ifunc (__libc_memcpy, (IS_THUNDERX (midr) @@ -44,8 +47,13 @@ libc_ifunc (__libc_memcpy, : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr) || IS_NEOVERSE_V1 (midr) ? __memcpy_simd - : __memcpy_generic))))); - +#if HAVE_SVE_ASM_SUPPORT + : (IS_A64FX (midr) + ? __memcpy_a64fx + : __memcpy_generic)))))); +#else + : __memcpy_generic))))); +#endif # undef memcpy strong_alias (__libc_memcpy, memcpy); #endif diff --git a/sysdeps/aarch64/multiarch/memcpy_a64fx.S b/sysdeps/aarch64/multiarch/memcpy_a64fx.S new file mode 100644 index 0000000000..23438e4e3d --- /dev/null +++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S @@ -0,0 +1,979 @@ +/* Optimized memcpy for Fujitsu A64FX processor. + Copyright (C) 2012-2021 Free Software Foundation, Inc. + + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. 
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library. If not, see + . */ + +#include + +#if HAVE_SVE_ASM_SUPPORT +#if IS_IN (libc) +# define MEMCPY __memcpy_a64fx +# define MEMMOVE __memmove_a64fx + +/* Assumptions: + * + * ARMv8.2-a, AArch64, unaligned accesses, sve + * + */ + +#define L1_SIZE (64*1024)/2 // L1 64KB +#define L2_SIZE (7*1024*1024)/2 // L2 8MB - 1MB +#define CACHE_LINE_SIZE 256 +#define PF_DIST_L1 (CACHE_LINE_SIZE * 16) +#define PF_DIST_L2 (CACHE_LINE_SIZE * 64) +#define dest x0 +#define src x1 +#define n x2 // size +#define tmp1 x3 +#define tmp2 x4 +#define rest x5 +#define dest_ptr x6 +#define src_ptr x7 +#define vector_length x8 +#define vl_remainder x9 // vector_length remainder +#define cl_remainder x10 // CACHE_LINE_SIZE remainder + + .arch armv8.2-a+sve + +ENTRY_ALIGN (MEMCPY, 6) + + PTR_ARG (0) + SIZE_ARG (2) + +L(fwd_start): + cmp n, 0 + ccmp dest, src, 4, ne + b.ne L(init) + ret + +L(init): + mov rest, n + mov dest_ptr, dest + mov src_ptr, src + cntb vector_length + ptrue p0.b + +L(L2): + // get block_size + mrs tmp1, dczid_el0 + cmp tmp1, 6 // CACHE_LINE_SIZE 256 + b.ne L(vl_agnostic) + + // if rest >= L2_SIZE + cmp rest, L2_SIZE + b.cc L(L1_prefetch) + // align dest address at vector_length byte boundary + sub tmp1, vector_length, 1 + and tmp2, dest_ptr, tmp1 + // if vl_remainder == 0 + cmp tmp2, 0 + b.eq 1f + sub vl_remainder, vector_length, tmp2 + // process remainder until the first vector_length boundary + whilelt p0.b, xzr, vl_remainder + ld1b z0.b, p0/z, [src_ptr] + st1b z0.b, p0, [dest_ptr] + add dest_ptr, dest_ptr, vl_remainder + add src_ptr, src_ptr, vl_remainder + sub rest, rest, vl_remainder + // align dest address at CACHE_LINE_SIZE byte boundary +1: mov tmp1, CACHE_LINE_SIZE + and tmp2, dest_ptr, CACHE_LINE_SIZE - 1 + // if cl_remainder == 0 + cmp tmp2, 0 + b.eq L(L2_dc_zva) + sub cl_remainder, tmp1, tmp2 + // process remainder until the first CACHE_LINE_SIZE boundary + mov tmp1, xzr // index +2: whilelt p0.b, tmp1, cl_remainder + ld1b z0.b, p0/z, [src_ptr, tmp1] + st1b z0.b, p0, [dest_ptr, tmp1] + incb tmp1 + cmp tmp1, cl_remainder + b.lo 2b + add dest_ptr, dest_ptr, cl_remainder + add src_ptr, src_ptr, cl_remainder + sub rest, rest, cl_remainder + +L(L2_dc_zva): // unroll zero fill + and tmp1, dest, 0xffffffffffffff + and tmp2, src, 0xffffffffffffff + sub tmp1, tmp2, tmp1 // diff + mov tmp2, CACHE_LINE_SIZE * 20 + cmp tmp1, tmp2 + b.lo L(L1_prefetch) + mov tmp1, dest_ptr + dc zva, tmp1 // 1 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 2 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 3 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 4 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 5 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 6 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 7 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 8 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 9 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 10 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 11 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 12 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 13 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 14 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 15 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 16 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 17 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 18 + add tmp1, tmp1, CACHE_LINE_SIZE + dc zva, tmp1 // 19 + add tmp1, tmp1, CACHE_LINE_SIZE + dc 
zva, tmp1 // 20 + +L(L2_vl_64): // VL64 unroll8 + cmp vector_length, 64 + b.ne L(L2_vl_32) + ptrue p0.b + .p2align 3 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + add src_ptr, src_ptr, CACHE_LINE_SIZE * 2 + sub rest, rest, CACHE_LINE_SIZE * 2 +1: st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + mov tmp1, PF_DIST_L1 + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + prfm pstl2keep, [dest_ptr, tmp1] + mov tmp2, CACHE_LINE_SIZE * 19 + add tmp2, dest_ptr, tmp2 + dc zva, tmp2 // distance CACHE_LINE_SIZE * 19 + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + CACHE_LINE_SIZE + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + CACHE_LINE_SIZE + prfm pstl2keep, [dest_ptr, tmp1] + add tmp2, tmp2, CACHE_LINE_SIZE + dc zva, tmp2 // distance CACHE_LINE_SIZE * 20 + add dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2 + add src_ptr, src_ptr, CACHE_LINE_SIZE * 2 + sub rest, rest, CACHE_LINE_SIZE * 2 + cmp rest, L2_SIZE + b.ge 1b + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2 + +L(L2_vl_32): // VL32 unroll6 + cmp vector_length, 32 + b.ne L(L2_vl_16) + ptrue p0.b + .p2align 3 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + add src_ptr, src_ptr, CACHE_LINE_SIZE + sub rest, rest, CACHE_LINE_SIZE +1: st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + prfm pstl2keep, [dest_ptr, tmp1] + mov tmp2, CACHE_LINE_SIZE * 19 + add tmp2, dest_ptr, tmp2 + dc zva, tmp2 // distance CACHE_LINE_SIZE * 19 + add dest_ptr, 
dest_ptr, CACHE_LINE_SIZE + add src_ptr, src_ptr, CACHE_LINE_SIZE + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + CACHE_LINE_SIZE + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + CACHE_LINE_SIZE + prfm pstl2keep, [dest_ptr, tmp1] + add tmp2, tmp2, CACHE_LINE_SIZE + dc zva, tmp2 // distance CACHE_LINE_SIZE * 20 + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + add src_ptr, src_ptr, CACHE_LINE_SIZE + sub rest, rest, CACHE_LINE_SIZE * 2 + cmp rest, L2_SIZE + b.ge 1b + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + +L(L2_vl_16): // VL16 unroll32 + cmp vector_length, 16 + b.ne L(L1_prefetch) + ptrue p0.b + .p2align 3 + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + ld1b z16.b, p0/z, [src_ptr, #-8, mul vl] + ld1b z17.b, p0/z, [src_ptr, #-7, mul vl] + ld1b z18.b, p0/z, [src_ptr, #-6, mul vl] + ld1b z19.b, p0/z, [src_ptr, #-5, mul vl] + ld1b z20.b, p0/z, [src_ptr, #-4, mul vl] + ld1b z21.b, p0/z, [src_ptr, #-3, mul vl] + ld1b z22.b, p0/z, [src_ptr, #-2, mul vl] + ld1b z23.b, p0/z, [src_ptr, #-1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + sub rest, rest, CACHE_LINE_SIZE +1: add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + st1b z16.b, p0, [dest_ptr, #-8, mul vl] + st1b z17.b, p0, [dest_ptr, #-7, mul vl] + ld1b z16.b, p0/z, [src_ptr, #-8, mul vl] + ld1b z17.b, p0/z, [src_ptr, #-7, mul vl] + st1b z18.b, p0, [dest_ptr, #-6, mul vl] + st1b z19.b, p0, [dest_ptr, #-5, mul vl] + ld1b z18.b, p0/z, [src_ptr, #-6, mul vl] + ld1b z19.b, p0/z, [src_ptr, #-5, mul vl] + st1b z20.b, p0, [dest_ptr, #-4, mul vl] + st1b z21.b, p0, [dest_ptr, #-3, mul vl] + ld1b z20.b, p0/z, [src_ptr, #-4, mul vl] + ld1b z21.b, p0/z, [src_ptr, #-3, mul vl] + st1b z22.b, p0, [dest_ptr, #-2, mul vl] + st1b z23.b, p0, [dest_ptr, #-1, mul vl] + ld1b z22.b, p0/z, [src_ptr, #-2, mul vl] + ld1b z23.b, p0/z, [src_ptr, #-1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, 
[src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + prfm pstl2keep, [dest_ptr, tmp1] + mov tmp2, CACHE_LINE_SIZE * 19 + add tmp2, dest_ptr, tmp2 + dc zva, tmp2 // distance CACHE_LINE_SIZE * 19 + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + add src_ptr, src_ptr, CACHE_LINE_SIZE + st1b z16.b, p0, [dest_ptr, #-8, mul vl] + st1b z17.b, p0, [dest_ptr, #-7, mul vl] + ld1b z16.b, p0/z, [src_ptr, #-8, mul vl] + ld1b z17.b, p0/z, [src_ptr, #-7, mul vl] + st1b z18.b, p0, [dest_ptr, #-6, mul vl] + st1b z19.b, p0, [dest_ptr, #-5, mul vl] + ld1b z18.b, p0/z, [src_ptr, #-6, mul vl] + ld1b z19.b, p0/z, [src_ptr, #-5, mul vl] + st1b z20.b, p0, [dest_ptr, #-4, mul vl] + st1b z21.b, p0, [dest_ptr, #-3, mul vl] + ld1b z20.b, p0/z, [src_ptr, #-4, mul vl] + ld1b z21.b, p0/z, [src_ptr, #-3, mul vl] + st1b z22.b, p0, [dest_ptr, #-2, mul vl] + st1b z23.b, p0, [dest_ptr, #-1, mul vl] + ld1b z22.b, p0/z, [src_ptr, #-2, mul vl] + ld1b z23.b, p0/z, [src_ptr, #-1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + CACHE_LINE_SIZE + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + CACHE_LINE_SIZE + prfm pstl2keep, [dest_ptr, tmp1] + add tmp2, tmp2, CACHE_LINE_SIZE + dc zva, tmp2 // distance CACHE_LINE_SIZE * 20 + add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + sub rest, rest, CACHE_LINE_SIZE * 2 + cmp rest, L2_SIZE + b.ge 1b + add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + st1b z16.b, p0, [dest_ptr, #-8, mul vl] + st1b z17.b, p0, [dest_ptr, #-7, mul vl] + st1b z18.b, p0, [dest_ptr, #-6, mul vl] + st1b z19.b, p0, [dest_ptr, #-5, mul vl] + st1b z20.b, p0, [dest_ptr, #-4, mul vl] + st1b z21.b, p0, [dest_ptr, #-3, mul vl] + st1b z22.b, p0, [dest_ptr, #-2, mul vl] + st1b z23.b, p0, [dest_ptr, #-1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + +L(L1_prefetch): // if rest >= L1_SIZE + cmp rest, L1_SIZE + b.cc L(vl_agnostic) +L(L1_vl_64): + cmp vector_length, 64 + b.ne L(L1_vl_32) + ptrue p0.b + .p2align 3 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + add src_ptr, src_ptr, CACHE_LINE_SIZE * 2 + sub rest, rest, 
CACHE_LINE_SIZE * 2 +1: st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + mov tmp1, PF_DIST_L1 + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + prfm pstl2keep, [dest_ptr, tmp1] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + CACHE_LINE_SIZE + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + CACHE_LINE_SIZE + prfm pstl2keep, [dest_ptr, tmp1] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2 + add src_ptr, src_ptr, CACHE_LINE_SIZE * 2 + sub rest, rest, CACHE_LINE_SIZE * 2 + cmp rest, L1_SIZE + b.ge 1b + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2 + +L(L1_vl_32): + cmp vector_length, 32 + b.ne L(L1_vl_16) + ptrue p0.b + .p2align 3 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + add src_ptr, src_ptr, CACHE_LINE_SIZE + sub rest, rest, CACHE_LINE_SIZE +1: st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + prfm pstl2keep, [dest_ptr, tmp1] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + add src_ptr, src_ptr, CACHE_LINE_SIZE + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + CACHE_LINE_SIZE + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + 
CACHE_LINE_SIZE + prfm pstl2keep, [dest_ptr, tmp1] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + add src_ptr, src_ptr, CACHE_LINE_SIZE + sub rest, rest, CACHE_LINE_SIZE * 2 + cmp rest, L1_SIZE + b.ge 1b + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + +L(L1_vl_16): + cmp vector_length, 16 + b.ne L(vl_agnostic) + ptrue p0.b + .p2align 3 + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + ld1b z16.b, p0/z, [src_ptr, #-8, mul vl] + ld1b z17.b, p0/z, [src_ptr, #-7, mul vl] + ld1b z18.b, p0/z, [src_ptr, #-6, mul vl] + ld1b z19.b, p0/z, [src_ptr, #-5, mul vl] + ld1b z20.b, p0/z, [src_ptr, #-4, mul vl] + ld1b z21.b, p0/z, [src_ptr, #-3, mul vl] + ld1b z22.b, p0/z, [src_ptr, #-2, mul vl] + ld1b z23.b, p0/z, [src_ptr, #-1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + sub rest, rest, CACHE_LINE_SIZE +1: add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + st1b z16.b, p0, [dest_ptr, #-8, mul vl] + st1b z17.b, p0, [dest_ptr, #-7, mul vl] + ld1b z16.b, p0/z, [src_ptr, #-8, mul vl] + ld1b z17.b, p0/z, [src_ptr, #-7, mul vl] + st1b z18.b, p0, [dest_ptr, #-6, mul vl] + st1b z19.b, p0, [dest_ptr, #-5, mul vl] + ld1b z18.b, p0/z, [src_ptr, #-6, mul vl] + ld1b z19.b, p0/z, [src_ptr, #-5, mul vl] + st1b z20.b, p0, [dest_ptr, #-4, mul vl] + st1b z21.b, p0, [dest_ptr, #-3, mul vl] + ld1b z20.b, p0/z, [src_ptr, #-4, mul vl] + ld1b z21.b, p0/z, [src_ptr, #-3, mul vl] + st1b z22.b, p0, [dest_ptr, #-2, mul vl] + st1b z23.b, p0, [dest_ptr, #-1, mul vl] + ld1b z22.b, p0/z, [src_ptr, #-2, mul vl] + ld1b z23.b, p0/z, [src_ptr, #-1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + prfm pstl2keep, [dest_ptr, tmp1] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE + add src_ptr, src_ptr, CACHE_LINE_SIZE + st1b z16.b, p0, [dest_ptr, #-8, mul vl] + st1b z17.b, p0, [dest_ptr, #-7, mul vl] + ld1b z16.b, p0/z, [src_ptr, #-8, mul vl] + ld1b z17.b, p0/z, [src_ptr, #-7, mul vl] + st1b z18.b, p0, [dest_ptr, #-6, mul vl] + st1b z19.b, p0, [dest_ptr, #-5, mul vl] + ld1b z18.b, p0/z, [src_ptr, #-6, mul vl] + ld1b z19.b, p0/z, [src_ptr, #-5, mul vl] + st1b z20.b, p0, [dest_ptr, #-4, mul vl] + st1b z21.b, p0, [dest_ptr, #-3, mul vl] + ld1b z20.b, p0/z, [src_ptr, #-4, mul vl] + ld1b z21.b, p0/z, 
[src_ptr, #-3, mul vl] + st1b z22.b, p0, [dest_ptr, #-2, mul vl] + st1b z23.b, p0, [dest_ptr, #-1, mul vl] + ld1b z22.b, p0/z, [src_ptr, #-2, mul vl] + ld1b z23.b, p0/z, [src_ptr, #-1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + mov tmp1, PF_DIST_L1 + CACHE_LINE_SIZE + prfm pstl1keep, [dest_ptr, tmp1] + mov tmp1, PF_DIST_L2 + CACHE_LINE_SIZE + prfm pstl2keep, [dest_ptr, tmp1] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + add src_ptr, src_ptr, CACHE_LINE_SIZE / 2 + sub rest, rest, CACHE_LINE_SIZE * 2 + cmp rest, L1_SIZE + b.ge 1b + add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + st1b z16.b, p0, [dest_ptr, #-8, mul vl] + st1b z17.b, p0, [dest_ptr, #-7, mul vl] + st1b z18.b, p0, [dest_ptr, #-6, mul vl] + st1b z19.b, p0, [dest_ptr, #-5, mul vl] + st1b z20.b, p0, [dest_ptr, #-4, mul vl] + st1b z21.b, p0, [dest_ptr, #-3, mul vl] + st1b z22.b, p0, [dest_ptr, #-2, mul vl] + st1b z23.b, p0, [dest_ptr, #-1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2 + +L(vl_agnostic): // VL Agnostic + +L(unroll32): // unrolling and software pipeline + lsl tmp1, vector_length, 3 // vector_length * 8 + lsl tmp2, vector_length, 5 // vector_length * 32 + ptrue p0.b + .p2align 3 +1: cmp rest, tmp2 + b.cc L(unroll8) + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, tmp1 + add src_ptr, src_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, tmp1 + add src_ptr, 
src_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, tmp1 + add src_ptr, src_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, tmp1 + add src_ptr, src_ptr, tmp1 + sub rest, rest, tmp2 + b 1b + +L(unroll8): // unrolling and software pipeline + lsl tmp1, vector_length, 3 // vector_length * 8 + ptrue p0.b + .p2align 3 +1: cmp rest, tmp1 + b.cc L(unroll1) + ld1b z0.b, p0/z, [src_ptr, #0, mul vl] + ld1b z1.b, p0/z, [src_ptr, #1, mul vl] + st1b z0.b, p0, [dest_ptr, #0, mul vl] + st1b z1.b, p0, [dest_ptr, #1, mul vl] + ld1b z2.b, p0/z, [src_ptr, #2, mul vl] + ld1b z3.b, p0/z, [src_ptr, #3, mul vl] + st1b z2.b, p0, [dest_ptr, #2, mul vl] + st1b z3.b, p0, [dest_ptr, #3, mul vl] + ld1b z4.b, p0/z, [src_ptr, #4, mul vl] + ld1b z5.b, p0/z, [src_ptr, #5, mul vl] + st1b z4.b, p0, [dest_ptr, #4, mul vl] + st1b z5.b, p0, [dest_ptr, #5, mul vl] + ld1b z6.b, p0/z, [src_ptr, #6, mul vl] + ld1b z7.b, p0/z, [src_ptr, #7, mul vl] + st1b z6.b, p0, [dest_ptr, #6, mul vl] + st1b z7.b, p0, [dest_ptr, #7, mul vl] + add dest_ptr, dest_ptr, tmp1 + add src_ptr, src_ptr, tmp1 + sub rest, rest, tmp1 + b 1b + + L(unroll1): + ptrue p0.b + .p2align 3 +1: cmp rest, vector_length + b.cc L(last) + ld1b z0.b, p0/z, [src_ptr] + st1b z0.b, p0, [dest_ptr] + add dest_ptr, dest_ptr, vector_length + add src_ptr, src_ptr, vector_length + sub rest, rest, vector_length + b 1b + +L(last): + whilelt p0.b, xzr, rest + ld1b z0.b, p0/z, [src_ptr] + st1b z0.b, p0, [dest_ptr] + ret + +END (MEMCPY) +libc_hidden_builtin_def (MEMCPY) + + + .p2align 4 +ENTRY_ALIGN (MEMMOVE, 6) + + // remove tag address + and tmp1, dest, 0xffffffffffffff + and tmp2, src, 0xffffffffffffff + sub tmp1, tmp1, tmp2 // diff + // if diff <= 0 || diff >= n then memcpy + cmp tmp1, 0 + ccmp tmp1, n, 2, gt + b.cs L(fwd_start) + +L(bwd_start): + mov rest, n + add dest_ptr, dest, n // dest_end + add src_ptr, src, n // src_end + cntb vector_length + ptrue p0.b + udiv tmp1, n, vector_length // quotient + mul tmp1, tmp1, vector_length // product + sub vl_remainder, n, tmp1 + // if bwd_remainder == 0 then skip vl_remainder bwd copy + cmp vl_remainder, 0 + b.eq L(bwd_main) + // vl_remainder bwd copy + whilelt p0.b, xzr, vl_remainder + sub src_ptr, src_ptr, vl_remainder + sub dest_ptr, dest_ptr, vl_remainder + ld1b z0.b, p0/z, [src_ptr] + 
st1b z0.b, p0, [dest_ptr] + sub rest, rest, vl_remainder + +L(bwd_main): + + // VL Agnostic +L(bwd_unroll32): // unrolling and software pipeline + lsl tmp1, vector_length, 3 // vector_length * 8 + lsl tmp2, vector_length, 5 // vector_length * 32 + ptrue p0.b + .p2align 3 +1: cmp rest, tmp2 + b.cc L(bwd_unroll8) + sub src_ptr, src_ptr, tmp1 + sub dest_ptr, dest_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #7, mul vl] + ld1b z1.b, p0/z, [src_ptr, #6, mul vl] + st1b z0.b, p0, [dest_ptr, #7, mul vl] + st1b z1.b, p0, [dest_ptr, #6, mul vl] + ld1b z2.b, p0/z, [src_ptr, #5, mul vl] + ld1b z3.b, p0/z, [src_ptr, #4, mul vl] + st1b z2.b, p0, [dest_ptr, #5, mul vl] + st1b z3.b, p0, [dest_ptr, #4, mul vl] + ld1b z4.b, p0/z, [src_ptr, #3, mul vl] + ld1b z5.b, p0/z, [src_ptr, #2, mul vl] + st1b z4.b, p0, [dest_ptr, #3, mul vl] + st1b z5.b, p0, [dest_ptr, #2, mul vl] + ld1b z6.b, p0/z, [src_ptr, #1, mul vl] + ld1b z7.b, p0/z, [src_ptr, #0, mul vl] + st1b z6.b, p0, [dest_ptr, #1, mul vl] + st1b z7.b, p0, [dest_ptr, #0, mul vl] + sub src_ptr, src_ptr, tmp1 + sub dest_ptr, dest_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #7, mul vl] + ld1b z1.b, p0/z, [src_ptr, #6, mul vl] + st1b z0.b, p0, [dest_ptr, #7, mul vl] + st1b z1.b, p0, [dest_ptr, #6, mul vl] + ld1b z2.b, p0/z, [src_ptr, #5, mul vl] + ld1b z3.b, p0/z, [src_ptr, #4, mul vl] + st1b z2.b, p0, [dest_ptr, #5, mul vl] + st1b z3.b, p0, [dest_ptr, #4, mul vl] + ld1b z4.b, p0/z, [src_ptr, #3, mul vl] + ld1b z5.b, p0/z, [src_ptr, #2, mul vl] + st1b z4.b, p0, [dest_ptr, #3, mul vl] + st1b z5.b, p0, [dest_ptr, #2, mul vl] + ld1b z6.b, p0/z, [src_ptr, #1, mul vl] + ld1b z7.b, p0/z, [src_ptr, #0, mul vl] + st1b z6.b, p0, [dest_ptr, #1, mul vl] + st1b z7.b, p0, [dest_ptr, #0, mul vl] + sub src_ptr, src_ptr, tmp1 + sub dest_ptr, dest_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #7, mul vl] + ld1b z1.b, p0/z, [src_ptr, #6, mul vl] + st1b z0.b, p0, [dest_ptr, #7, mul vl] + st1b z1.b, p0, [dest_ptr, #6, mul vl] + ld1b z2.b, p0/z, [src_ptr, #5, mul vl] + ld1b z3.b, p0/z, [src_ptr, #4, mul vl] + st1b z2.b, p0, [dest_ptr, #5, mul vl] + st1b z3.b, p0, [dest_ptr, #4, mul vl] + ld1b z4.b, p0/z, [src_ptr, #3, mul vl] + ld1b z5.b, p0/z, [src_ptr, #2, mul vl] + st1b z4.b, p0, [dest_ptr, #3, mul vl] + st1b z5.b, p0, [dest_ptr, #2, mul vl] + ld1b z6.b, p0/z, [src_ptr, #1, mul vl] + ld1b z7.b, p0/z, [src_ptr, #0, mul vl] + st1b z6.b, p0, [dest_ptr, #1, mul vl] + st1b z7.b, p0, [dest_ptr, #0, mul vl] + sub src_ptr, src_ptr, tmp1 + sub dest_ptr, dest_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #7, mul vl] + ld1b z1.b, p0/z, [src_ptr, #6, mul vl] + st1b z0.b, p0, [dest_ptr, #7, mul vl] + st1b z1.b, p0, [dest_ptr, #6, mul vl] + ld1b z2.b, p0/z, [src_ptr, #5, mul vl] + ld1b z3.b, p0/z, [src_ptr, #4, mul vl] + st1b z2.b, p0, [dest_ptr, #5, mul vl] + st1b z3.b, p0, [dest_ptr, #4, mul vl] + ld1b z4.b, p0/z, [src_ptr, #3, mul vl] + ld1b z5.b, p0/z, [src_ptr, #2, mul vl] + st1b z4.b, p0, [dest_ptr, #3, mul vl] + st1b z5.b, p0, [dest_ptr, #2, mul vl] + ld1b z6.b, p0/z, [src_ptr, #1, mul vl] + ld1b z7.b, p0/z, [src_ptr, #0, mul vl] + st1b z6.b, p0, [dest_ptr, #1, mul vl] + st1b z7.b, p0, [dest_ptr, #0, mul vl] + sub rest, rest, tmp2 + b 1b + +L(bwd_unroll8): // unrolling and software pipeline + lsl tmp1, vector_length, 3 // vector_length * 8 + ptrue p0.b + .p2align 3 +1: cmp rest, tmp1 + b.cc L(bwd_unroll1) + sub src_ptr, src_ptr, tmp1 + sub dest_ptr, dest_ptr, tmp1 + ld1b z0.b, p0/z, [src_ptr, #7, mul vl] + ld1b z1.b, p0/z, [src_ptr, #6, mul vl] + st1b z0.b, p0, [dest_ptr, #7, mul vl] + st1b z1.b, p0, 
[dest_ptr, #6, mul vl]
+	ld1b	z2.b, p0/z, [src_ptr, #5, mul vl]
+	ld1b	z3.b, p0/z, [src_ptr, #4, mul vl]
+	st1b	z2.b, p0, [dest_ptr, #5, mul vl]
+	st1b	z3.b, p0, [dest_ptr, #4, mul vl]
+	ld1b	z4.b, p0/z, [src_ptr, #3, mul vl]
+	ld1b	z5.b, p0/z, [src_ptr, #2, mul vl]
+	st1b	z4.b, p0, [dest_ptr, #3, mul vl]
+	st1b	z5.b, p0, [dest_ptr, #2, mul vl]
+	ld1b	z6.b, p0/z, [src_ptr, #1, mul vl]
+	ld1b	z7.b, p0/z, [src_ptr, #0, mul vl]
+	st1b	z6.b, p0, [dest_ptr, #1, mul vl]
+	st1b	z7.b, p0, [dest_ptr, #0, mul vl]
+	sub	rest, rest, tmp1
+	b	1b
+
+	.p2align 3
+L(bwd_unroll1):
+	ptrue	p0.b
+1:	cmp	rest, vector_length
+	b.cc	L(bwd_last)
+	sub	src_ptr, src_ptr, vector_length
+	sub	dest_ptr, dest_ptr, vector_length
+	ld1b	z0.b, p0/z, [src_ptr]
+	st1b	z0.b, p0, [dest_ptr]
+	sub	rest, rest, vector_length
+	b	1b
+
+L(bwd_last):
+	whilelt	p0.b, xzr, rest
+	sub	src_ptr, src_ptr, rest
+	sub	dest_ptr, dest_ptr, rest
+	ld1b	z0.b, p0/z, [src_ptr]
+	st1b	z0.b, p0, [dest_ptr]
+	ret
+
+END (MEMMOVE)
+libc_hidden_builtin_def (MEMMOVE)
+#endif /* IS_IN (libc) */
+#endif /* HAVE_SVE_ASM_SUPPORT */
+
diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c
index 12d77818a9..1e5ee1c934 100644
--- a/sysdeps/aarch64/multiarch/memmove.c
+++ b/sysdeps/aarch64/multiarch/memmove.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
+#if HAVE_SVE_ASM_SUPPORT
+extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memmove,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memmove,
 		     : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 			|| IS_NEOVERSE_V1 (midr)
 			? __memmove_simd
-			: __memmove_generic)))));
-
+#if HAVE_SVE_ASM_SUPPORT
+			: (IS_A64FX (midr)
+			   ? __memmove_a64fx
+			   : __memmove_generic))))));
+#else
+			: __memmove_generic)))));
+#endif
 # undef memmove
 strong_alias (__libc_memmove, memmove);
 #endif
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index db6aa3516c..6206a2f618 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -46,6 +46,7 @@ static struct cpu_list cpu_list[] = {
       {"ares",		 0x411FD0C0},
      {"emag",		 0x503F0001},
      {"kunpeng920",	 0x481FD010},
+     {"a64fx",		 0x460F0010},
      {"generic",	 0x0}
 };
 
@@ -116,4 +117,7 @@ init_cpu_features (struct cpu_features *cpu_features)
 			 (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC
 			  | MTE_ALLOWED_TAGS), 0, 0, 0);
 #endif
+
+  /* Check if SVE is supported.  */
+  cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;
 }
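For reference, the new cpu_list value 0x460F0010 and the IS_A64FX macro
in the next hunk encode the same identity: MIDR_EL1 holds the
implementer in bits [31:24] ('F', i.e. 0x46, for Fujitsu) and the part
number in bits [15:4] (0x001). A standalone, hedged C sketch of that
decoding (illustration only, not glibc code):

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint64_t midr = 0x460F0010;           /* the new "a64fx" entry */
    unsigned impl = (midr >> 24) & 0xff;  /* implementer: 0x46 == 'F' */
    unsigned part = (midr >> 4) & 0xfff;  /* part number: 0x001 */
    printf ("implementer=%c part=0x%03x\n", impl, part);
    return 0;
  }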
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 3b9bfed134..2b322e5414 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -65,6 +65,9 @@
 #define IS_KUNPENG920(midr) (MIDR_IMPLEMENTOR(midr) == 'H'	\
                         && MIDR_PARTNUM(midr) == 0xd01)
 
+#define IS_A64FX(midr) (MIDR_IMPLEMENTOR(midr) == 'F'		\
+			&& MIDR_PARTNUM(midr) == 0x001)
+
 struct cpu_features
 {
   uint64_t midr_el1;
@@ -72,6 +75,7 @@ struct cpu_features
   bool bti;
   /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */
   uint8_t mte_state;
+  bool sve;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H */

From patchwork Wed Mar 17 02:34:40 2021
X-Patchwork-Submitter: Naohiro Tamura <naohirot@fujitsu.com>
X-Patchwork-Id: 42663
X-Patchwork-Delegate: szabolcs.nagy@arm.com
From: Naohiro Tamura <naohirot@fujitsu.com>
To: libc-alpha@sourceware.org
Cc: Naohiro Tamura <naohirot@fujitsu.com>
Subject: [PATCH 3/5] aarch64: Added optimized memset for A64FX
Date: Wed, 17 Mar 2021 02:34:40 +0000
Message-Id: <20210317023440.323205-1-naohirot@fujitsu.com>
In-Reply-To: <20210317022849.323046-1-naohirot@fujitsu.com>
References: <20210317022849.323046-1-naohirot@fujitsu.com>

This patch optimizes the performance of memset for A64FX [1], which
implements ARMv8-A SVE and has a 64 KB L1 cache per core and an 8 MB L2
cache per NUMA node.

The optimization makes use of the Scalable Vector Registers with
several techniques such as loop unrolling, memory access alignment,
cache zero fill, and prefetch.

The SVE assembler code for memset is implemented as Vector Length
Agnostic code, so in principle it can run on any SoC that supports the
ARMv8-A SVE standard (a short C sketch follows this message).

We confirmed that all test cases pass by running 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2. We also
confirmed with 'make bench' that the SVE 512-bit vector register
implementation is roughly 4 times faster than the Advanced SIMD
128-bit register implementation and 8 times faster than the scalar
64-bit register implementation.

[1] https://github.com/fujitsu/A64FX
---
 sysdeps/aarch64/multiarch/Makefile          |   1 +
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   5 +-
 sysdeps/aarch64/multiarch/memset.c          |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S    | 574 ++++++++++++++++++++
 4 files changed, 589 insertions(+), 2 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 04c3f17121..7500cf1e93 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -2,6 +2,7 @@ ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
 		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
+		   memset_a64fx \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index cb78da9692..e252a10d88 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -41,7 +41,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 
   INIT_ARCH ();
 
-  /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c.  */
+  /* Support sysdeps/aarch64/multiarch/memcpy.c, memmove.c and memset.c.  */
   IFUNC_IMPL (i, name, memcpy,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
@@ -66,6 +66,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_falkor)
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_emag)
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_kunpeng)
+#if HAVE_SVE_ASM_SUPPORT
+	      IFUNC_IMPL_ADD (array, i, memset, sve, __memset_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_generic))
   IFUNC_IMPL (i, name, memchr,
 	      IFUNC_IMPL_ADD (array, i, memchr, !mte, __memchr_nosimd)
diff --git a/sysdeps/aarch64/multiarch/memset.c b/sysdeps/aarch64/multiarch/memset.c
index 28d3926bc2..df075edddb 100644
--- a/sysdeps/aarch64/multiarch/memset.c
+++ b/sysdeps/aarch64/multiarch/memset.c
@@ -31,6 +31,9 @@ extern __typeof (__redirect_memset) __libc_memset;
 extern __typeof (__redirect_memset) __memset_falkor attribute_hidden;
 extern __typeof (__redirect_memset) __memset_emag attribute_hidden;
 extern __typeof (__redirect_memset) __memset_kunpeng attribute_hidden;
+#if HAVE_SVE_ASM_SUPPORT
+extern __typeof (__redirect_memset) __memset_a64fx attribute_hidden;
+#endif
 extern __typeof (__redirect_memset) __memset_generic attribute_hidden;
 
 libc_ifunc (__libc_memset,
@@ -40,7 +43,13 @@ libc_ifunc (__libc_memset,
 	     ? __memset_falkor
 	     : (IS_EMAG (midr) && zva_size == 64
 	       ? __memset_emag
-	       : __memset_generic)));
+#if HAVE_SVE_ASM_SUPPORT
+	       : (IS_A64FX (midr)
+		 ?
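The same predicate-driven structure, specialized to memset; a hedged C
sketch with ACLE intrinsics (my illustration, assuming an
<arm_sve.h>-capable compiler; not the patch's assembly):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Hedged sketch of the VLA memset idea (cf. the "dup z0.b, valw" and
     whilelt/st1b sequences in memset_a64fx.S); the real code adds
     unrolling, cache-line alignment, and "dc zva" zero fill.  */
  static void
  sve_set (uint8_t *dst, int c, size_t n)
  {
    svuint8_t v = svdup_n_u8 ((uint8_t) c);   /* splat the fill byte */
    for (uint64_t i = 0; i < n; i += svcntb ())
      svst1_u8 (svwhilelt_b8_u64 (i, (uint64_t) n), dst + i, v);
  }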
---
 sysdeps/aarch64/multiarch/Makefile          |   1 +
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   5 +-
 sysdeps/aarch64/multiarch/memset.c          |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S    | 574 ++++++++++++++++++++
 4 files changed, 589 insertions(+), 2 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 04c3f17121..7500cf1e93 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -2,6 +2,7 @@ ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
 		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
+		   memset_a64fx \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index cb78da9692..e252a10d88 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -41,7 +41,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,

   INIT_ARCH ();

-  /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c.  */
+  /* Support sysdeps/aarch64/multiarch/memcpy.c, memmove.c and memset.c.  */
   IFUNC_IMPL (i, name, memcpy,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
@@ -66,6 +66,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_falkor)
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_emag)
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_kunpeng)
+#if HAVE_SVE_ASM_SUPPORT
+	      IFUNC_IMPL_ADD (array, i, memset, sve, __memset_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_generic))
   IFUNC_IMPL (i, name, memchr,
 	      IFUNC_IMPL_ADD (array, i, memchr, !mte, __memchr_nosimd)
diff --git a/sysdeps/aarch64/multiarch/memset.c b/sysdeps/aarch64/multiarch/memset.c
index 28d3926bc2..df075edddb 100644
--- a/sysdeps/aarch64/multiarch/memset.c
+++ b/sysdeps/aarch64/multiarch/memset.c
@@ -31,6 +31,9 @@ extern __typeof (__redirect_memset) __libc_memset;
 extern __typeof (__redirect_memset) __memset_falkor attribute_hidden;
 extern __typeof (__redirect_memset) __memset_emag attribute_hidden;
 extern __typeof (__redirect_memset) __memset_kunpeng attribute_hidden;
+#if HAVE_SVE_ASM_SUPPORT
+extern __typeof (__redirect_memset) __memset_a64fx attribute_hidden;
+#endif
 extern __typeof (__redirect_memset) __memset_generic attribute_hidden;

 libc_ifunc (__libc_memset,
@@ -40,7 +43,13 @@ libc_ifunc (__libc_memset,
 	     ? __memset_falkor
 	     : (IS_EMAG (midr) && zva_size == 64
 	       ? __memset_emag
-	       : __memset_generic)));
+#if HAVE_SVE_ASM_SUPPORT
+	       : (IS_A64FX (midr)
+		  ? __memset_a64fx
+		  : __memset_generic))));
+#else
+	       : __memset_generic)));
+#endif

 # undef memset
 strong_alias (__libc_memset, memset);
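As background for the ifunc selection above: inside glibc the choice is
keyed off MIDR_EL1 (IS_A64FX) plus the new 'sve' feature flag. Outside of
glibc the same kind of runtime check can be approximated from the HWCAP
bits. The following stand-alone C sketch is only an analogue, not the
actual resolver code; the HWCAP_SVE fallback define matches the Linux
UAPI value.

#include <stdbool.h>
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SVE
# define HWCAP_SVE (1 << 22)   /* AT_HWCAP bit for SVE on AArch64.  */
#endif

/* Roughly what the cpu-features code does for the new 'sve' field:
   the kernel advertises SVE support through AT_HWCAP.  */
static bool
cpu_has_sve (void)
{
  return (getauxval (AT_HWCAP) & HWCAP_SVE) != 0;
}

int
main (void)
{
  /* The libc_ifunc selector additionally checks IS_A64FX (midr),
     since __memset_a64fx is tuned for that core, not for SVE in
     general.  */
  puts (cpu_has_sve () ? "SVE available: __memset_a64fx is a candidate"
		       : "no SVE: a non-SVE memset would be picked");
  return 0;
}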
diff --git a/sysdeps/aarch64/multiarch/memset_a64fx.S b/sysdeps/aarch64/multiarch/memset_a64fx.S
new file mode 100644
index 0000000000..02ae7caab0
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memset_a64fx.S
@@ -0,0 +1,574 @@
+/* Optimized memset for Fujitsu A64FX processor.
+   Copyright (C) 2012-2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include "memset-reg.h"
+
+#if HAVE_SVE_ASM_SUPPORT
+#if IS_IN (libc)
+# define MEMSET __memset_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L1_SIZE (64*1024)	// L1 64KB
+#define L2_SIZE (8*1024*1024)	// L2 8MB - 1MB
+#define CACHE_LINE_SIZE 256
+#define PF_DIST_L1 (CACHE_LINE_SIZE * 16)
+#define PF_DIST_L2 (CACHE_LINE_SIZE * 128)
+#define rest x8
+#define vector_length x9
+#define vl_remainder x10	// vector_length remainder
+#define cl_remainder x11	// CACHE_LINE_SIZE remainder
+
+	.arch armv8.2-a+sve
+
+ENTRY_ALIGN (MEMSET, 6)
+
+	PTR_ARG (0)
+	SIZE_ARG (2)
+
+	cmp	count, 0
+	b.ne	L(init)
+	ret
+L(init):
+	mov	rest, count
+	mov	dst, dstin
+	add	dstend, dstin, count
+	cntb	vector_length
+	ptrue	p0.b
+	dup	z0.b, valw
+
+	cmp	count, 96
+	b.hi	L(set_long)
+	cmp	count, 16
+	b.hs	L(set_medium)
+	mov	val, v0.D[0]
+
+	/* Set 0..15 bytes.  */
+	tbz	count, 3, 1f
+	str	val, [dstin]
+	str	val, [dstend, -8]
+	ret
+	nop
+1:	tbz	count, 2, 2f
+	str	valw, [dstin]
+	str	valw, [dstend, -4]
+	ret
+2:	cbz	count, 3f
+	strb	valw, [dstin]
+	tbz	count, 1, 3f
+	strh	valw, [dstend, -2]
+3:	ret
+
+	/* Set 17..96 bytes.  */
+L(set_medium):
+	str	q0, [dstin]
+	tbnz	count, 6, L(set96)
+	str	q0, [dstend, -16]
+	tbz	count, 5, 1f
+	str	q0, [dstin, 16]
+	str	q0, [dstend, -32]
+1:	ret
+
+	.p2align 4
+	/* Set 64..96 bytes.  Write 64 bytes from the start and
+	   32 bytes from the end.  */
+L(set96):
+	str	q0, [dstin, 16]
+	stp	q0, q0, [dstin, 32]
+	stp	q0, q0, [dstend, -32]
+	ret
+
+L(set_long):
+	// if count > 1280 && vector_length != 16 then L(L2)
+	cmp	count, 1280
+	ccmp	vector_length, 16, 4, gt
+	b.ne	L(L2)
+	bic	dst, dstin, 15
+	str	q0, [dstin]
+	sub	count, dstend, dst	/* Count is 16 too large.  */
+	sub	dst, dst, 16		/* Dst is biased by -32.  */
+	sub	count, count, 64 + 16	/* Adjust count and bias for loop.  */
+1:	stp	q0, q0, [dst, 32]
+	stp	q0, q0, [dst, 64]!
+	subs	count, count, 64
+	b.lo	2f
+	stp	q0, q0, [dst, 32]
+	stp	q0, q0, [dst, 64]!
+	subs	count, count, 64
+	b.lo	2f
+	stp	q0, q0, [dst, 32]
+	stp	q0, q0, [dst, 64]!
+	subs	count, count, 64
+	b.lo	2f
+	stp	q0, q0, [dst, 32]
+	stp	q0, q0, [dst, 64]!
+	subs	count, count, 64
+	b.hi	1b
+2:	stp	q0, q0, [dstend, -64]
+	stp	q0, q0, [dstend, -32]
+	ret
+
+L(L2):
+	// get block_size
+	mrs	tmp1, dczid_el0
+	cmp	tmp1, 6		// CACHE_LINE_SIZE 256
+	b.ne	L(vl_agnostic)
+
+	// if rest >= L2_SIZE
+	cmp	rest, L2_SIZE
+	b.cc	L(L1_prefetch)
+	// align dst address at vector_length byte boundary
+	sub	tmp1, vector_length, 1
+	and	tmp2, dst, tmp1
+	// if vl_remainder == 0
+	cmp	tmp2, 0
+	b.eq	1f
+	sub	vl_remainder, vector_length, tmp2
+	// process remainder until the first vector_length boundary
+	whilelt	p0.b, xzr, vl_remainder
+	st1b	z0.b, p0, [dst]
+	add	dst, dst, vl_remainder
+	sub	rest, rest, vl_remainder
+	// align dstin address at CACHE_LINE_SIZE byte boundary
+1:	mov	tmp1, CACHE_LINE_SIZE
+	and	tmp2, dst, CACHE_LINE_SIZE - 1
+	// if cl_remainder == 0
+	cmp	tmp2, 0
+	b.eq	L(L2_dc_zva)
+	sub	cl_remainder, tmp1, tmp2
+	// process remainder until the first CACHE_LINE_SIZE boundary
+	mov	tmp1, xzr	// index
+2:	whilelt	p0.b, tmp1, cl_remainder
+	st1b	z0.b, p0, [dst, tmp1]
+	incb	tmp1
+	cmp	tmp1, cl_remainder
+	b.lo	2b
+	add	dst, dst, cl_remainder
+	sub	rest, rest, cl_remainder
+
+L(L2_dc_zva):	// unroll zero fill
+	mov	tmp1, dst
+	dc	zva, tmp1	// 1
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 2
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 3
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 4
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 5
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 6
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 7
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 8
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 9
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 10
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 11
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 12
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 13
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 14
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 15
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 16
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 17
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 18
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 19
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1	// 20
+
+L(L2_vl_64):	// VL64 unroll8
+	cmp	vector_length, 64
+	b.ne	L(L2_vl_32)
+	ptrue	p0.b
+	.p2align 4
+1:	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	mov	tmp2, CACHE_LINE_SIZE * 20
+	add	tmp2, dst, tmp2
+	dc	zva, tmp2	// distance CACHE_LINE_SIZE * 20
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2	// distance CACHE_LINE_SIZE * 21
+	add	dst, dst, 512
+	sub	rest, rest, 512
+	cmp	rest, L2_SIZE
+	b.ge	1b
+
+L(L2_vl_32):	// VL32 unroll6
+	cmp	vector_length, 32
+	b.ne	L(L2_vl_16)
+	ptrue	p0.b
+	.p2align 4
+1:	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp2, CACHE_LINE_SIZE * 21
+	add	tmp2, dst, tmp2
+	dc	zva, tmp2	// distance CACHE_LINE_SIZE * 21
+	add	dst, dst, CACHE_LINE_SIZE
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2	// distance CACHE_LINE_SIZE * 22
+	add	dst, dst, CACHE_LINE_SIZE
+	sub	rest, rest, 512
+	cmp	rest, L2_SIZE
+	b.ge	1b
+
+L(L2_vl_16):	// VL16 unroll32
+	cmp	vector_length, 16
+	b.ne	L(L1_prefetch)
+	ptrue	p0.b
+	.p2align 4
+1:	add	dst, dst, 128
+	st1b	{z0.b}, p0, [dst, #-8, mul vl]
+	st1b	{z0.b}, p0, [dst, #-7, mul vl]
+	st1b	{z0.b}, p0, [dst, #-6, mul vl]
+	st1b	{z0.b}, p0, [dst, #-5, mul vl]
+	st1b	{z0.b}, p0, [dst, #-4, mul vl]
+	st1b	{z0.b}, p0, [dst, #-3, mul vl]
+	st1b	{z0.b}, p0, [dst, #-2, mul vl]
+	st1b	{z0.b}, p0, [dst, #-1, mul vl]
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp2, CACHE_LINE_SIZE * 20
+	add	tmp2, dst, tmp2
+	dc	zva, tmp2	// distance CACHE_LINE_SIZE * 20
+	add	dst, dst, CACHE_LINE_SIZE
+	st1b	{z0.b}, p0, [dst, #-8, mul vl]
+	st1b	{z0.b}, p0, [dst, #-7, mul vl]
+	st1b	{z0.b}, p0, [dst, #-6, mul vl]
+	st1b	{z0.b}, p0, [dst, #-5, mul vl]
+	st1b	{z0.b}, p0, [dst, #-4, mul vl]
+	st1b	{z0.b}, p0, [dst, #-3, mul vl]
+	st1b	{z0.b}, p0, [dst, #-2, mul vl]
+	st1b	{z0.b}, p0, [dst, #-1, mul vl]
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2	// distance CACHE_LINE_SIZE * 21
+	add	dst, dst, 128
+	sub	rest, rest, 512
+	cmp	rest, L2_SIZE
+	b.ge	1b
+
+L(L1_prefetch):	// if rest >= L1_SIZE
+	cmp	rest, L1_SIZE
+	b.cc	L(vl_agnostic)
+L(L1_vl_64):
+	cmp	vector_length, 64
+	b.ne	L(L1_vl_32)
+	ptrue	p0.b
+	.p2align 4
+1:	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	mov	tmp1, PF_DIST_L1
+	prfm	pstl1keep, [dst, tmp1]
+	mov	tmp1, PF_DIST_L2
+	prfm	pstl2keep, [dst, tmp1]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+	prfm	pstl1keep, [dst, tmp1]
+	mov	tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+	prfm	pstl2keep, [dst, tmp1]
+	add	dst, dst, 512
+	sub	rest, rest, 512
+	cmp	rest, L1_SIZE
+	b.ge	1b
+
+L(L1_vl_32):
+	cmp	vector_length, 32
+	b.ne	L(L1_vl_16)
+	ptrue	p0.b
+	.p2align 4
+1:	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp1, PF_DIST_L1
+	prfm	pstl1keep, [dst, tmp1]
+	mov	tmp1, PF_DIST_L2
+	prfm	pstl2keep, [dst, tmp1]
+	add	dst, dst, CACHE_LINE_SIZE
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+	prfm	pstl1keep, [dst, tmp1]
+	mov	tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+	prfm	pstl2keep, [dst, tmp1]
+	add	dst, dst, CACHE_LINE_SIZE
+	sub	rest, rest, 512
+	cmp	rest, L1_SIZE
+	b.ge	1b
+
+L(L1_vl_16):	// VL16 unroll32
+	cmp	vector_length, 16
+	b.ne	L(vl_agnostic)
+	ptrue	p0.b
+	.p2align 4
+1:	mov	tmp1, dst
+	add	dst, dst, 128
+	st1b	{z0.b}, p0, [dst, #-8, mul vl]
+	st1b	{z0.b}, p0, [dst, #-7, mul vl]
+	st1b	{z0.b}, p0, [dst, #-6, mul vl]
+	st1b	{z0.b}, p0, [dst, #-5, mul vl]
+	st1b	{z0.b}, p0, [dst, #-4, mul vl]
+	st1b	{z0.b}, p0, [dst, #-3, mul vl]
+	st1b	{z0.b}, p0, [dst, #-2, mul vl]
+	st1b	{z0.b}, p0, [dst, #-1, mul vl]
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp1, PF_DIST_L1
+	prfm	pstl1keep, [dst, tmp1]
+	mov	tmp1, PF_DIST_L2
+	prfm	pstl2keep, [dst, tmp1]
+	add	dst, dst, CACHE_LINE_SIZE
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	st1b	{z0.b}, p0, [dst, #-8, mul vl]
+	st1b	{z0.b}, p0, [dst, #-7, mul vl]
+	st1b	{z0.b}, p0, [dst, #-6, mul vl]
+	st1b	{z0.b}, p0, [dst, #-5, mul vl]
+	st1b	{z0.b}, p0, [dst, #-4, mul vl]
+	st1b	{z0.b}, p0, [dst, #-3, mul vl]
+	st1b	{z0.b}, p0, [dst, #-2, mul vl]
+	st1b	{z0.b}, p0, [dst, #-1, mul vl]
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	mov	tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+	prfm	pstl1keep, [dst, tmp1]
+	mov	tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+	prfm	pstl2keep, [dst, tmp1]
+	add	dst, dst, 128
+	sub	rest, rest, 512
+	cmp	rest, L1_SIZE
+	b.ge	1b
+
+	// VL Agnostic
+L(vl_agnostic):
+L(unroll32):
+	ptrue	p0.b
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	lsl	tmp2, vector_length, 5	// vector_length * 32
+	.p2align 4
+1:	cmp	rest, tmp2
+	b.cc	L(unroll16)
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp2
+	b	1b
+
+L(unroll16):
+	ptrue	p0.b
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	lsl	tmp2, vector_length, 4	// vector_length * 16
+	.p2align 4
+1:	cmp	rest, tmp2
+	b.cc	L(unroll8)
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp2
+	b	1b
+
+L(unroll8):
+	lsl	tmp1, vector_length, 3
+	ptrue	p0.b
+	.p2align 4
+1:	cmp	rest, tmp1
+	b.cc	L(unroll4)
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	st1b	{z0.b}, p0, [dst, #4, mul vl]
+	st1b	{z0.b}, p0, [dst, #5, mul vl]
+	st1b	{z0.b}, p0, [dst, #6, mul vl]
+	st1b	{z0.b}, p0, [dst, #7, mul vl]
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp1
+	b	1b
+
+L(unroll4):
+	lsl	tmp1, vector_length, 2
+	ptrue	p0.b
+	.p2align 4
+1:	cmp	rest, tmp1
+	b.cc	L(unroll2)
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	st1b	{z0.b}, p0, [dst, #2, mul vl]
+	st1b	{z0.b}, p0, [dst, #3, mul vl]
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp1
+	b	1b
+
+L(unroll2):
+	lsl	tmp1, vector_length, 1
+	ptrue	p0.b
+	.p2align 4
+1:	cmp	rest, tmp1
+	b.cc	L(unroll1)
+	st1b	{z0.b}, p0, [dst]
+	st1b	{z0.b}, p0, [dst, #1, mul vl]
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp1
+	b	1b
+
+L(unroll1):
+	ptrue	p0.b
+	.p2align 4
+1:	cmp	rest, vector_length
+	b.cc	L(last)
+	st1b	{z0.b}, p0, [dst]
+	sub	rest, rest, vector_length
+	add	dst, dst, vector_length
+	b	1b
+
+	.p2align 4
+L(last):
+	whilelt	p0.b, xzr, rest
+	st1b	z0.b, p0, [dst]
+	ret
+
+END (MEMSET)
+libc_hidden_builtin_def (MEMSET)
+
+#endif /* IS_IN (libc) */
+#endif /* HAVE_SVE_ASM_SUPPORT */
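Two techniques in memset_a64fx.S deserve a plain-C gloss for readers who
do not speak SVE assembly. The L(L2_dc_zva) path zeroes whole 256-byte
lines with DC ZVA (the "cmp tmp1, 6" works because the BS field of
DCZID_EL0 encodes the block size as 4 * 2^BS bytes, and 4 << 6 is 256),
and the L(L1_*) loops issue prfm at fixed distances ahead of the store
stream. The sketch below is an editorial illustration only, assuming GCC
or Clang on AArch64 Linux; it is not the patch's code.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_SIZE 256
#define PF_DIST_L1 (CACHE_LINE_SIZE * 16)	/* 4 KB ahead for L1.  */
#define PF_DIST_L2 (CACHE_LINE_SIZE * 128)	/* 32 KB ahead for L2.  */

/* DC ZVA block size in bytes, or 0 if the instruction is prohibited.  */
static size_t
zva_block_size (void)
{
  uint64_t dczid;
  __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
  if (dczid & (1u << 4))	/* DZP bit: DC ZVA not permitted.  */
    return 0;
  return (size_t) 4 << (dczid & 0xf);	/* BS field: log2 (words).  */
}

/* Zero 'bytes' bytes at 'p'; both must be multiples of 'bs'.  */
static void
zero_fill (char *p, size_t bytes, size_t bs)
{
  for (size_t i = 0; i < bytes; i += bs)
    __asm__ volatile ("dc zva, %0" : : "r" (p + i) : "memory");
}

/* Non-zero fill: a line at a time, prefetching ahead for both cache
   levels the way the L(L1_vl_*) loops do.  Prefetches never fault, so
   running past the end of the buffer is harmless.  */
static void
fill_with_prefetch (char *p, size_t bytes, int c)
{
  for (size_t i = 0; i < bytes; i += CACHE_LINE_SIZE)
    {
      __builtin_prefetch (p + i + PF_DIST_L1, 1, 3);	/* write, keep L1.  */
      __builtin_prefetch (p + i + PF_DIST_L2, 1, 2);	/* write, keep L2.  */
      memset (p + i, c, CACHE_LINE_SIZE);
    }
}

int
main (void)
{
  enum { N = 1 << 20 };
  static char buf[N] __attribute__ ((aligned (256)));
  size_t bs = zva_block_size ();
  if (bs != 0)
    zero_fill (buf, N, bs);	/* equivalent to memset (buf, 0, N).  */
  fill_with_prefetch (buf, N, 0x5a);
  return buf[N - 1] == 0x5a ? 0 : 1;
}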
From patchwork Wed Mar 17 02:35:10 2021
X-Patchwork-Submitter: Naohiro Tamura <naohirot@fujitsu.com>
X-Patchwork-Id: 42664
X-Patchwork-Delegate: szabolcs.nagy@arm.com
From: Naohiro Tamura <naohirot@fujitsu.com>
To: libc-alpha@sourceware.org
Cc: Naohiro Tamura <naohirot@fujitsu.com>
Subject: [PATCH 4/5] scripts: Added Vector Length Set test helper script
Date: Wed, 17 Mar 2021 02:35:10 +0000
Message-Id: <20210317023510.323258-1-naohirot@fujitsu.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210317022849.323046-1-naohirot@fujitsu.com>
References: <20210317022849.323046-1-naohirot@fujitsu.com>

From: Naohiro Tamura <naohirot@fujitsu.com>

This patch adds a test helper script that changes the Vector Length for
a child process. The script can be used as a test-wrapper for
'make check'.

Usage examples:

ubuntu@bionic:~/build$ make check subdirs=string \
test-wrapper='~/glibc/scripts/vltest.py 16'

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
t=string/test-memcpy

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
string/test-memmove

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
string/test-memset
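For comparison, the same wrapper can be written in a few lines of C.
This sketch is an illustration, not part of the patch; it uses the
kernel's PR_SVE_SET_VL prctl interface with the same constants the
script below defines, and the fallback defines are only needed when the
kernel headers are too old to provide them.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/auxv.h>
#include <sys/prctl.h>

#ifndef PR_SVE_SET_VL
# define PR_SVE_SET_VL 50
# define PR_SVE_SET_VL_ONEXEC (1 << 18)
# define PR_SVE_VL_INHERIT (1 << 17)
#endif
#ifndef HWCAP_SVE
# define HWCAP_SVE (1 << 22)
#endif

int
main (int argc, char **argv)
{
  if (argc < 3)
    {
      fprintf (stderr, "usage: %s VL COMMAND [ARGS...]\n", argv[0]);
      return 1;
    }
  if (!(getauxval (AT_HWCAP) & HWCAP_SVE))
    return 77;			/* EXIT_UNSUPPORTED, as in the script.  */
  long vl = atol (argv[1]);
  /* Set the SVE vector length for the exec'd child and its children.  */
  if (prctl (PR_SVE_SET_VL, vl | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT) < 0)
    {
      perror ("prctl");
      return 1;
    }
  execvp (argv[2], argv + 2);
  perror ("execvp");
  return 1;
}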
---
 scripts/vltest.py | 82 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100755 scripts/vltest.py

diff --git a/scripts/vltest.py b/scripts/vltest.py
new file mode 100755
index 0000000000..264dfa449f
--- /dev/null
+++ b/scripts/vltest.py
@@ -0,0 +1,82 @@
+#!/usr/bin/python3
+# Set Scalable Vector Length test helper
+# Copyright (C) 2019-2021 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+"""Set Scalable Vector Length test helper.
+
+Set the Scalable Vector Length for a child process.
+
+examples:
+
+ubuntu@bionic:~/build$ make check subdirs=string \
+test-wrapper='~/glibc/scripts/vltest.py 16'
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
+t=string/test-memcpy
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
+string/test-memmove
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
+string/test-memset
+"""
+import argparse
+from ctypes import CDLL
+import os
+import sys
+
+EXIT_SUCCESS = 0
+EXIT_FAILURE = 1
+EXIT_UNSUPPORTED = 77
+
+AT_HWCAP = 16
+HWCAP_SVE = (1 << 22)
+
+PR_SVE_GET_VL = 51
+PR_SVE_SET_VL = 50
+PR_SVE_SET_VL_ONEXEC = (1 << 18)
+PR_SVE_VL_INHERIT = (1 << 17)
+PR_SVE_VL_LEN_MASK = 0xffff
+
+def main(args):
+    libc = CDLL("libc.so.6")
+    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
+        print("CPU doesn't support SVE")
+        sys.exit(EXIT_UNSUPPORTED)
+
+    libc.prctl(PR_SVE_SET_VL,
+               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
+    os.execvp(args.args[0], args.args)
+    print("exec system call failed")
+    sys.exit(EXIT_FAILURE)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=
+        "Set Scalable Vector Length test helper",
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    # positional argument
+    parser.add_argument("vl", nargs=1, type=int,
+                        choices=range(16, 257, 16),
+                        help=('vector length, '
+                              'a multiple of 16 from 16 to 256'))
+    # remainder arguments
+    parser.add_argument('args', nargs=argparse.REMAINDER,
+                        help=('args '
+                              'which are passed to the child process'))
+    args = parser.parse_args()
+    main(args)
From patchwork Wed Mar 17 02:35:39 2021
X-Patchwork-Submitter: Naohiro Tamura <naohirot@fujitsu.com>
X-Patchwork-Id: 42665
X-Patchwork-Delegate: szabolcs.nagy@arm.com
From: Naohiro Tamura <naohirot@fujitsu.com>
To: libc-alpha@sourceware.org
Subject: [PATCH 5/5] benchtests: Added generic_memcpy and generic_memmove to
 large benchtests
Date: Wed, 17 Mar 2021 02:35:39 +0000
Message-Id: <20210317023539.323311-1-naohirot@fujitsu.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210317022849.323046-1-naohirot@fujitsu.com>
References: <20210317022849.323046-1-naohirot@fujitsu.com>

This patch adds generic_memcpy and generic_memmove to
bench-memcpy-large.c and bench-memmove-large.c respectively, so that we
can consistently compare the performance of the 512-bit scalable vector
registers with that of the scalar 64-bit registers across the
memcpy/memmove/memset default and large benchtests.
---
 benchtests/bench-memcpy-large.c  | 9 +++++++++
 benchtests/bench-memmove-large.c | 9 +++++++++
 2 files changed, 18 insertions(+)

diff --git a/benchtests/bench-memcpy-large.c b/benchtests/bench-memcpy-large.c
index 3df1575514..4a87987202 100644
--- a/benchtests/bench-memcpy-large.c
+++ b/benchtests/bench-memcpy-large.c
@@ -25,7 +25,10 @@
 # define TIMEOUT (20 * 60)
 # include "bench-string.h"

+void *generic_memcpy (void *, const void *, size_t);
+
 IMPL (memcpy, 1)
+IMPL (generic_memcpy, 0)
 #endif

 #include "json-lib.h"
@@ -124,3 +127,9 @@ test_main (void)
 }

 #include <support/test-driver.c>
+
+#define libc_hidden_builtin_def(X)
+#undef MEMCPY
+#define MEMCPY generic_memcpy
+#include <string/memcpy.c>
+#include <string/wordcopy.c>
diff --git a/benchtests/bench-memmove-large.c b/benchtests/bench-memmove-large.c
index 9e2fcd50ab..151dd5a276 100644
--- a/benchtests/bench-memmove-large.c
+++ b/benchtests/bench-memmove-large.c
@@ -25,7 +25,10 @@
 #include "bench-string.h"
 #include "json-lib.h"

+void *generic_memmove (void *, const void *, size_t);
+
 IMPL (memmove, 1)
+IMPL (generic_memmove, 0)

 typedef char *(*proto_t) (char *, const char *, size_t);

@@ -123,3 +126,9 @@ test_main (void)
 }

 #include <support/test-driver.c>
+
+#define libc_hidden_builtin_def(X)
+#undef MEMMOVE
+#define MEMMOVE generic_memmove
+#include <string/memmove.c>
+#include <string/wordcopy.c>
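To make concrete what these benchtests compare, here is a minimal,
self-contained timing harness in the same spirit; it is only an
editorial sketch (the real benchtests control sizes and alignments and
emit JSON via json-lib), with a naive byte loop standing in for the
generic implementation.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static void *
generic_memcpy (void *dst, const void *src, size_t n)	/* naive stand-in */
{
  char *d = dst;
  const char *s = src;
  while (n--)
    *d++ = *s++;
  return dst;
}

static double
bench (void *(*fn) (void *, const void *, size_t),
       void *dst, const void *src, size_t len, int iters)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    fn (dst, src, len);
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int
main (void)
{
  size_t len = 16 * 1024 * 1024;	/* a "large" benchtest size.  */
  char *src = malloc (len), *dst = malloc (len);
  if (src == NULL || dst == NULL)
    return 1;
  memset (src, 0x5a, len);
  printf ("libc memcpy:    %.3f s\n", bench (memcpy, dst, src, len, 20));
  printf ("generic memcpy: %.3f s\n",
	  bench (generic_memcpy, dst, src, len, 20));
  free (src);
  free (dst);
  return 0;
}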