From patchwork Mon Mar 7 17:36:28 2016
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 11237
From: "H.J. Lu"
To: libc-alpha@sourceware.org
Cc: Ondrej Bilka
Subject: [PATCH 5/7] Enable __mempcpy_sse2_unaligned
Date: Mon, 7 Mar 2016 09:36:28 -0800
Message-Id: <1457372190-12196-6-git-send-email-hjl.tools@gmail.com>
In-Reply-To: <1457372190-12196-1-git-send-email-hjl.tools@gmail.com>
References: <1457372190-12196-1-git-send-email-hjl.tools@gmail.com>

Check Fast_Unaligned_Load for __mempcpy_sse2_unaligned.  The new
selection order is:

1. __mempcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
2. __mempcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
3. __mempcpy_sse2 if SSSE3 isn't available.
4. __mempcpy_ssse3_back if Fast_Copy_Backward bit is set.
5. __mempcpy_ssse3

	[BZ #19776]
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Check
	Fast_Unaligned_Load to enable __mempcpy_sse2_unaligned.
---
 sysdeps/x86_64/multiarch/mempcpy.S | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/sysdeps/x86_64/multiarch/mempcpy.S b/sysdeps/x86_64/multiarch/mempcpy.S
index ed78623..1314d76 100644
--- a/sysdeps/x86_64/multiarch/mempcpy.S
+++ b/sysdeps/x86_64/multiarch/mempcpy.S
@@ -33,19 +33,22 @@ ENTRY(__mempcpy)
 	jz	1f
 	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
 	jz	1f
-	leaq	__mempcpy_avx512_no_vzeroupper(%rip), %rax
+	lea	__mempcpy_avx512_no_vzeroupper(%rip), %RAX_LP
 	ret
 #endif
-1:	leaq	__mempcpy_sse2(%rip), %rax
+1:	lea	__mempcpy_avx_unaligned(%rip), %RAX_LP
+	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
+	jnz	2f
+	lea	__mempcpy_sse2_unaligned(%rip), %RAX_LP
+	HAS_ARCH_FEATURE (Fast_Unaligned_Load)
+	jnz	2f
+	lea	__mempcpy_sse2(%rip), %RAX_LP
 	HAS_CPU_FEATURE (SSSE3)
 	jz	2f
-	leaq	__mempcpy_ssse3(%rip), %rax
+	lea	__mempcpy_ssse3_back(%rip), %RAX_LP
 	HAS_ARCH_FEATURE (Fast_Copy_Backward)
-	jz	2f
-	leaq	__mempcpy_ssse3_back(%rip), %rax
-	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
-	jz	2f
-	leaq	__mempcpy_avx_unaligned(%rip), %rax
+	jnz	2f
+	lea	__mempcpy_ssse3(%rip), %RAX_LP
 2:	ret
 END(__mempcpy)
 
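
For readers less familiar with the IFUNC selector, below is a minimal C
sketch of the selection order the patched assembly implements.  It is
illustrative only: struct cpu_features_sketch and its fields are
hypothetical stand-ins for glibc's HAS_ARCH_FEATURE/HAS_CPU_FEATURE
checks, and the mempcpy_* functions are placeholders for the real
__mempcpy_* variants, not glibc internals.

/* Illustrative sketch only -- not glibc code.  The feature flags and
   placeholder implementations below are hypothetical stand-ins for
   glibc's HAS_ARCH_FEATURE/HAS_CPU_FEATURE macros and the real
   __mempcpy_* variants.  */

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef void *(*mempcpy_fn) (void *, const void *, size_t);

/* Hypothetical CPU feature flags; in glibc these bits are computed
   once at startup.  */
struct cpu_features_sketch
{
  bool avx512f_usable;
  bool prefer_no_vzeroupper;
  bool avx_fast_unaligned_load;
  bool fast_unaligned_load;
  bool ssse3;
  bool fast_copy_backward;
};

/* Placeholder bodies so the sketch compiles and runs; each real
   variant would be a tuned assembly implementation.  */
static void *
mempcpy_generic (void *dst, const void *src, size_t n)
{
  return (char *) memcpy (dst, src, n) + n;
}

#define PLACEHOLDER(name)					\
  static void *name (void *d, const void *s, size_t n)		\
  { return mempcpy_generic (d, s, n); }
PLACEHOLDER (mempcpy_avx512_no_vzeroupper)
PLACEHOLDER (mempcpy_avx_unaligned)
PLACEHOLDER (mempcpy_sse2_unaligned)
PLACEHOLDER (mempcpy_sse2)
PLACEHOLDER (mempcpy_ssse3_back)
PLACEHOLDER (mempcpy_ssse3)

/* Selection order after the patch, mirroring the assembly above.  */
static mempcpy_fn
select_mempcpy (const struct cpu_features_sketch *cpu)
{
  if (cpu->avx512f_usable && cpu->prefer_no_vzeroupper)
    return mempcpy_avx512_no_vzeroupper;
  if (cpu->avx_fast_unaligned_load)
    return mempcpy_avx_unaligned;
  if (cpu->fast_unaligned_load)
    return mempcpy_sse2_unaligned;
  if (!cpu->ssse3)
    return mempcpy_sse2;
  if (cpu->fast_copy_backward)
    return mempcpy_ssse3_back;
  return mempcpy_ssse3;
}

int
main (void)
{
  /* Example: a CPU with SSSE3 and Fast_Unaligned_Load now picks the
     SSE2 unaligned variant under the new order.  */
  struct cpu_features_sketch cpu = { .fast_unaligned_load = true,
				     .ssse3 = true };
  mempcpy_fn fn = select_mempcpy (&cpu);
  char buf[16];
  char *end = fn (buf, "hello", 6);
  printf ("copied %d bytes\n", (int) (end - buf));
  return 0;
}

The branch order in select_mempcpy mirrors the new assembly: AVX512
(when VZEROUPPER is to be avoided), AVX unaligned, SSE2 unaligned,
plain SSE2 when SSSE3 is absent, SSSE3 backward copy, and finally
SSSE3.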